**Question 1**. This was a homework question, and it was (partially) discussed in class. Here I will write a shorter solution than was discussed in class.

Suppose $f \colon V \to W$ is an isomorphism. To check that the inverse function $f^{-1} \colon W \to V$ is linear we want to show

$$f^{-1}(c_1 \vec{w}_1 + c_2 \vec{w}_2) = c_1 f^{-1}(\vec{w}_1) + c_2 f^{-1}(\vec{w}_2)$$

for all vectors $\vec{w}_1, \vec{w}_2 \in W$ and for all scalars $c_1, c_2$. Since $f$ is onto, we have that for any such vectors $\vec{w}_1, \vec{w}_2$, there exist vectors $\vec{v}_1, \vec{v}_2 \in V$ such that $f(\vec{v}_i) = \vec{w}_i$. Note that this is equivalent to $\vec{v}_i = f^{-1}(\vec{w}_i)$.

We then have

$$f^{-1}(c_1 \vec{w}_1 + c_2 \vec{w}_2) = f^{-1}\bigl(c_1 f(\vec{v}_1) + c_2 f(\vec{v}_2)\bigr) = f^{-1}\bigl(f(c_1 \vec{v}_1 + c_2 \vec{v}_2)\bigr)$$

since $f$ is linear. By definition of inverse function, this equals

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 = c_1 f^{-1}(\vec{w}_1) + c_2 f^{-1}(\vec{w}_2),$$

establishing the desired equation.
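As a quick numerical sanity check of this result (not part of the original solution), the sketch below uses a hypothetical isomorphism $f(\vec{v}) = A\vec{v}$ on $\mathbb{R}^2$ and verifies that $f^{-1}$ respects a sample linear combination:

```python
# Sanity check of Question 1: if f is an invertible linear map, f^{-1} is
# linear too.  Hypothetical example: f(v) = A v for an invertible 2x2 matrix.

def matvec(M, v):
    """Multiply a 2x2 matrix (list of rows) by a length-2 vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

A     = [[2, 1], [1, 1]]    # det = 1, so A is invertible
A_inv = [[1, -1], [-1, 2]]  # inverse found by the 2x2 cofactor formula

f     = lambda v: matvec(A, v)
f_inv = lambda w: matvec(A_inv, w)

c1, c2 = 3, -5
w1, w2 = [1, 4], [2, 7]

lhs = f_inv([c1*w1[0] + c2*w2[0], c1*w1[1] + c2*w2[1]])
rhs = [c1*f_inv(w1)[0] + c2*f_inv(w2)[0], c1*f_inv(w1)[1] + c2*f_inv(w2)[1]]
print(lhs == rhs)  # True: f^{-1} respects linear combinations
```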

**Question 2**. (a) In general, we know that the matrix representation of any homomorphism $h \colon V \to W$ with respect to bases $B = \langle \vec{\beta}_1, \ldots, \vec{\beta}_n \rangle$ and $D$ is given by

$${\rm Rep}_{B,D}(h) = \Bigl(\, {\rm Rep}_D(h(\vec{\beta}_1)) \;\Bigm|\; \cdots \;\Bigm|\; {\rm Rep}_D(h(\vec{\beta}_n)) \,\Bigr)$$

where, as in our problem, the basis $B$ contains the vectors $\vec{\beta}_1, \ldots, \vec{\beta}_n$. When we apply this to the transformation ${\rm id} \colon \mathbb{R}^n \to \mathbb{R}^n$ for $B$ the given basis and $D = \mathcal{E}_n$ the standard basis, we find

$${\rm Rep}_{B,\mathcal{E}_n}({\rm id}) = \Bigl(\, {\rm Rep}_{\mathcal{E}_n}(\vec{\beta}_1) \;\Bigm|\; \cdots \;\Bigm|\; {\rm Rep}_{\mathcal{E}_n}(\vec{\beta}_n) \,\Bigr)$$

since, by definition of the identity map, ${\rm id}(\vec{\beta}_i) = \vec{\beta}_i$. As discussed in class, the representation of a vector with respect to the standard basis is the vector itself:

$${\rm Rep}_{\mathcal{E}_n}(\vec{v}) = \vec{v}.$$

Hence, the columns of the matrix representation of the identity map are precisely the vectors $\vec{\beta}_1, \ldots, \vec{\beta}_n$.
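To make part (a) concrete, here is a small sketch with a hypothetical basis of $\mathbb{R}^2$ (the basis from the actual problem is not reproduced here): the matrix whose columns are the basis vectors converts $B$-coordinates into standard coordinates.

```python
# Illustration of Question 2(a): the matrix Rep_{B,E2}(id), whose columns
# are the basis vectors, converts B-coordinates to standard coordinates.

beta1, beta2 = [1, 1], [1, -1]          # a hypothetical basis B of R^2
M = [[beta1[0], beta2[0]],
     [beta1[1], beta2[1]]]              # columns of M are beta1 and beta2

def apply(M, c):
    """Multiply the 2x2 matrix M by the coordinate vector c."""
    return [M[0][0]*c[0] + M[0][1]*c[1], M[1][0]*c[0] + M[1][1]*c[1]]

# The vector with B-coordinates (3, 2) is 3*beta1 + 2*beta2 = (5, 1).
print(apply(M, [3, 2]))  # [5, 1]
```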

(b) The columns of the representation matrix are given by

As a result, the matrix representation has the given diagonal form.

**Question 3**. (a) To prove that the given map $T$ is linear, let $p$ and $q$ in the domain be arbitrary and let the scalars $c_1$ and $c_2$ be arbitrary, too. Then a direct computation with the definition of $T$ shows that

$$T(c_1 p + c_2 q) = c_1 T(p) + c_2 T(q).$$

(b) $T$ is not one-to-one since two distinct constant polynomials are sent by $T$ to the same output.

(c) From part (b) it follows that, since $T$ is not one-to-one, the null space of $T$ contains, in addition to the zero vector, other non-trivial vectors. This means that $\dim \mathcal{N}(T) \geq 1$. By the Rank-Nullity Theorem, then, the rank of $T$ is strictly smaller than the dimension of the domain. Since the domain and co-domain are the same space, this tells us that the rank of $T$ is smaller than the dimension of the co-domain, which, in turn, tells us that $T$ is not onto.

One can also exhibit a particular element that is not in the range space of $T$. One can argue this from the definition of $T$, since every output it produces is a multiple of one fixed polynomial. Thus, any polynomial in the co-domain that is not a multiple of that fixed polynomial cannot be an output.

Alternatively, one may argue that such a polynomial $p$ is not in the range space by setting up a relevant system of equations. This can be achieved by noting that $p$ is in the range if and only if $p = T(q)$ for some polynomial $q$. Comparing coefficients on both sides of this equation corresponds to an augmented system of equations which has no solutions.

**Question 4**. (a) Let us notate the columns of the matrix $A$ as

$$\vec{a}_1, \ldots, \vec{a}_n$$

so that the column space of $A$ can be notated as the span of the set $\{\vec{a}_1, \ldots, \vec{a}_n\}$.

We want to prove that $\mathcal{R}(A) = {\rm span}\{\vec{a}_1, \ldots, \vec{a}_n\}$. To do this, we must show that every element of $\mathcal{R}(A)$ is also an element of ${\rm span}\{\vec{a}_1, \ldots, \vec{a}_n\}$ and, conversely, we must show that every element of ${\rm span}\{\vec{a}_1, \ldots, \vec{a}_n\}$ is also an element of $\mathcal{R}(A)$.

Alternatively, we can also argue that these two sets coincide by trying to prove that the range space and the column space both have the same spanning set. In particular, since the column space is spanned by $\{\vec{a}_1, \ldots, \vec{a}_n\}$, it would suffice to prove that $\mathcal{R}(A)$ is also spanned by $\{\vec{a}_1, \ldots, \vec{a}_n\}$. This can be proved in two steps.

(1) First, we can show that the $i$-th column of matrix $A$, what we’ve labelled $\vec{a}_i$, is always the result of the matrix product

$$A \vec{e}_i,$$

where $\vec{e}_i$ is the $i$-th vector in the standard basis $\mathcal{E}_n$. This fact follows from the definition of matrix multiplication. For instance, one can check entry-by-entry that $A\vec{e}_1 = \vec{a}_1$: the $0$ entries of $\vec{e}_1$ wipe out every column of $A$ except the first. Similar computations show that $A\vec{e}_i = \vec{a}_i$ for every $i$.

(2) We can now use the linearity of matrix multiplication to show that every vector in $\mathcal{R}(A)$ can be expressed as a linear combination of the columns $\vec{a}_i$.

Indeed, let $\vec{w} = A\vec{v} \in \mathcal{R}(A)$ be arbitrary. Then, using the standard basis for the domain, we can uniquely express

$$\vec{v} = c_1 \vec{e}_1 + \cdots + c_n \vec{e}_n,$$

and so

$$\vec{w} = A\vec{v} = A(c_1 \vec{e}_1 + \cdots + c_n \vec{e}_n) = c_1 A\vec{e}_1 + \cdots + c_n A\vec{e}_n = c_1 \vec{a}_1 + \cdots + c_n \vec{a}_n,$$

which shows that every vector in the range space can be written as a linear combination of the columns $\vec{a}_i$. In other words, this shows that the range space is spanned by $\{\vec{a}_1, \ldots, \vec{a}_n\}$, and so $\mathcal{R}(A) = {\rm span}\{\vec{a}_1, \ldots, \vec{a}_n\}$.
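Step (1) can be checked numerically. The sketch below uses a hypothetical $3 \times 2$ matrix (not from the problem set) and confirms that multiplying by the standard basis vectors extracts the columns:

```python
# Check of step (1) in Question 4: A times the i-th standard basis vector
# is the i-th column of A.

A = [[1, 2], [3, 4], [5, 6]]   # a hypothetical 3x2 example matrix

def matvec(M, v):
    """Matrix-vector product for a list-of-rows matrix M."""
    return [sum(M[i][j]*v[j] for j in range(len(v))) for i in range(len(M))]

e1, e2 = [1, 0], [0, 1]
print(matvec(A, e1), matvec(A, e2))  # [1, 3, 5] [2, 4, 6] -- the columns of A
```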

(b) To show that the dimension of the column space equals the dimension of the row space we can resort to some observations made at the beginning of this course. In particular, every matrix has a reduced row echelon form wherein all rows of $0$'s are listed at the “bottom” of the reduced matrix, and the non-zero rows are organized so that each leading $1$ appears to the left of the leading $1$'s in the rows below it.

Note that row-reducing a matrix is the computational step we would take to answer a very abstract-sounding question: Is the matrix $A$ (regarded as a homomorphism) onto? To answer this question we would let $\vec{w}$ be arbitrary and attempt to solve

$$A\vec{x} = \vec{w}$$

for the column vector $\vec{x}$. Written in matrix form, this gives us a linear system which can be represented by the augmented matrix

$$\bigl(\, A \;\bigm|\; \vec{w} \,\bigr).$$

The answer to this question depends on the row-reduced echelon form of $A$. If there are no rows of $0$'s, then the system will always have a solution and so $A$ is onto. Indeed, the number of rows of $0$'s determines the number of restrictions we have on the possible output vectors $\vec{w}$. Each non-zero row corresponds to “one degree of freedom” and so we see that the number of linearly independent rows equals the dimension of the range space.
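The equality of row rank and column rank can also be checked computationally. The sketch below (with a hypothetical example matrix, not the one from the problem set) computes the rank of a matrix and of its transpose by Gaussian elimination:

```python
# Sanity check for Question 4(b): row rank equals column rank.
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination (exact arithmetic with Fractions)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f*b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A  = [[2, 4, 1], [0, 3, 5], [2, 7, 6]]   # third row = first + second
At = [list(col) for col in zip(*A)]      # transpose: rows become columns
print(rank(A), rank(At))  # 2 2
```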

**Question 5**. Suppose $f \colon V \to W$ is an isomorphism and that $\langle \vec{\beta}_1, \ldots, \vec{\beta}_n \rangle$ is a basis for $V$. To prove that $\langle f(\vec{\beta}_1), \ldots, f(\vec{\beta}_n) \rangle$ is a basis for $W$ we must show that this set both spans $W$ and is linearly independent.

Quick note before proceeding with the proof: we already know that if two vector spaces are isomorphic, then they must have the same dimension. It is at least plausible, then, that the set $\{f(\vec{\beta}_1), \ldots, f(\vec{\beta}_n)\}$ is a basis for $W$ since it, too, contains $n$ vectors.

To prove that the set is linearly independent suppose

$$c_1 f(\vec{\beta}_1) + \cdots + c_n f(\vec{\beta}_n) = \vec{0}$$

where the $c_i$ are scalars. We need to show that $c_1 = \cdots = c_n = 0$. By the linearity of $f$ the above equation can be rewritten as

$$f(c_1 \vec{\beta}_1 + \cdots + c_n \vec{\beta}_n) = \vec{0}.$$

Because $f$ is assumed to be an isomorphism, $f$ is one-to-one. As proven on a previous homework, this implies that $\mathcal{N}(f) = \{\vec{0}\}$. The vector expression above is in the null space of $f$ and so we learn that

$$c_1 \vec{\beta}_1 + \cdots + c_n \vec{\beta}_n = \vec{0},$$

but because the vectors $\vec{\beta}_i$ are assumed to form a basis they are linearly independent and so $c_1 = \cdots = c_n = 0$, as desired.

To prove that the set spans $W$ let $\vec{w} \in W$ be arbitrary. Because $f$ is an isomorphism, $f$ is onto and so there is a $\vec{v} \in V$ so that $f(\vec{v}) = \vec{w}$. Because $\langle \vec{\beta}_1, \ldots, \vec{\beta}_n \rangle$ is a basis for $V$ we can (uniquely!) express $\vec{v} = c_1 \vec{\beta}_1 + \cdots + c_n \vec{\beta}_n$. We then find

$$\vec{w} = f(\vec{v}) = f(c_1 \vec{\beta}_1 + \cdots + c_n \vec{\beta}_n) = c_1 f(\vec{\beta}_1) + \cdots + c_n f(\vec{\beta}_n),$$

and so the set $\{f(\vec{\beta}_1), \ldots, f(\vec{\beta}_n)\}$ spans $W$.
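As a numeric illustration (with a hypothetical isomorphism $f(\vec{v}) = A\vec{v}$ on $\mathbb{R}^2$, not the map from the problem), an invertible matrix sends a basis to a basis:

```python
# Illustration of Question 5: an isomorphism of R^2 maps a basis to a basis.

def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return u[0]*v[1] - u[1]*v[0]

A = [[2, 1], [1, 1]]                      # det = 1, so f is an isomorphism
f = lambda v: [A[0][0]*v[0] + A[0][1]*v[1],
               A[1][0]*v[0] + A[1][1]*v[1]]

b1, b2 = [1, 1], [1, -1]                  # a basis of R^2 (det2(b1, b2) != 0)
fb1, fb2 = f(b1), f(b2)                   # images under the isomorphism
print(det2(fb1, fb2))                     # nonzero, so the images are a basis
```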

**Question 6**. (a) To prove that $T$ is linear we let the matrices $A$ and $B$ in the domain and the scalars $c_1$ and $c_2$ be arbitrary. We will denote the entries of $A$ and $B$ by $a_{ij}$ and $b_{ij}$, respectively. A direct entry-by-entry computation with the definition of $T$ then gives

$$T(c_1 A + c_2 B) = c_1 T(A) + c_2 T(B),$$

demonstrating that $T$ is linear.

(b) Using a previous homework problem, we know that $T$ is one-to-one if and only if $\mathcal{N}(T) = \{\vec{0}\}$. Therefore, suppose

$$T(A) = \vec{0}.$$

This equation implies that $a_{ij} = 0$ for all $i$ and $j$, and so the original matrix consists entirely of $0$ entries. This means $A$ was the zero matrix, and so $T$ is one-to-one.

(c) Since, by part (b), the nullity of $T$ is 0 and the dimension of the domain space is 4, the Rank-Nullity theorem implies that the rank of $T$ is 4. (For this question, this implies that $T$ is onto!)

(d) The matrix representation (with respect to the standard bases) is given by

**Question 7**. For ease of notation, let us denote the function $h \mapsto {\rm Rep}_{B,D}(h)$ by $\Phi$. Let us also name the elements in the basis $B$ as $\vec{\beta}_1, \ldots, \vec{\beta}_n$ and in the basis $D$ as $\vec{\delta}_1, \ldots, \vec{\delta}_m$.

Let $f, g \in {\rm Hom}(V, W)$. Then the homomorphism $f + g$ has as its representation matrix (with respect to the given bases)

$$\Phi(f + g) = \Bigl(\, {\rm Rep}_D\bigl((f+g)(\vec{\beta}_1)\bigr) \;\Bigm|\; \cdots \;\Bigm|\; {\rm Rep}_D\bigl((f+g)(\vec{\beta}_n)\bigr) \,\Bigr).$$

By definition of $f + g$ we have that $(f+g)(\vec{\beta}_i) = f(\vec{\beta}_i) + g(\vec{\beta}_i)$. Moreover, because the representation function ${\rm Rep}_D$ is linear, we see that each column in the above matrix is equal to

$${\rm Rep}_D\bigl(f(\vec{\beta}_i)\bigr) + {\rm Rep}_D\bigl(g(\vec{\beta}_i)\bigr).$$

Hence, by the definition of matrix addition, we see that

$$\Phi(f + g) = \Phi(f) + \Phi(g).$$

Similar reasoning can be used to argue why $\Phi(c \cdot h) = c \cdot \Phi(h)$ for any homomorphism $h$ and any scalar $c$. Indeed, the $i$-th column of the matrix representation will be given by

$${\rm Rep}_D\bigl(c \cdot h(\vec{\beta}_i)\bigr) = c \cdot {\rm Rep}_D\bigl(h(\vec{\beta}_i)\bigr).$$

Lastly, we need to demonstrate that $\Phi$ is both one-to-one and onto. For the former, suppose that for some $h$ we have $\Phi(h) = 0$, the zero matrix. This would mean that every entry of the matrix representation of $h$ equals $0$ so that, in particular,

$${\rm Rep}_D\bigl(h(\vec{\beta}_i)\bigr) = \vec{0},$$

which implies that $h(\vec{\beta}_i) = \vec{0}$. Since this happens for every basis vector $\vec{\beta}_i$, we find that for any arbitrary vector $\vec{v} = c_1 \vec{\beta}_1 + \cdots + c_n \vec{\beta}_n$

$$h(\vec{v}) = c_1 h(\vec{\beta}_1) + \cdots + c_n h(\vec{\beta}_n) = \vec{0}.$$

Since $h(\vec{v}) = \vec{0}$ for all inputs, this implies that $h$ is the zero-homomorphism, i.e. the zero vector in the space ${\rm Hom}(V, W)$. Therefore, since the mapping $\Phi$ only has a trivial null space it is one-to-one.

To show that $\Phi$ is onto let the $m \times n$ matrix $H$ be arbitrary. We then need to construct an input, $h \in {\rm Hom}(V, W)$, so that $\Phi(h) = H$. Let us denote the entries of $H$ by the doubly-indexed expressions $h_{ij}$. The first column of matrix $H$ is then given by the entries $h_{11}, h_{21}, \ldots, h_{m1}$, and the general $i$-th column of $H$ is therefore given by the entries $h_{1i}, h_{2i}, \ldots, h_{mi}$.

In order for this matrix to be the representation of a homomorphism $h$ with respect to the bases $B, D$, it must follow that

$$h(\vec{\beta}_1) = h_{11} \vec{\delta}_1 + h_{21} \vec{\delta}_2 + \cdots + h_{m1} \vec{\delta}_m$$

and, more generally,

$$h(\vec{\beta}_i) = h_{1i} \vec{\delta}_1 + h_{2i} \vec{\delta}_2 + \cdots + h_{mi} \vec{\delta}_m.$$

As discussed in class (and in the textbook and via various homework exercises), **a homomorphism is completely determined by its action on a basis**. That is, specifying the outputs $h(\vec{\beta}_i)$ uniquely defines the desired homomorphism $h$. Therefore, given a matrix $H$, the homomorphism that satisfies $\Phi(h) = H$ is the one defined by the equations above.
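The closing fact, that a homomorphism is determined by its action on a basis, can be sketched concretely. The example below is hypothetical: we pick values on the standard basis of $\mathbb{R}^2$ and extend by linearity.

```python
# Illustration of the last point in Question 7: defining h on a basis
# determines h everywhere, via linear extension.

e1_image = [1, 0, 2]   # chosen value h(e1) in R^3 (hypothetical)
e2_image = [0, 3, 1]   # chosen value h(e2) in R^3 (hypothetical)

def h(v):
    """Extend the basis values linearly: h(v) = v[0]*h(e1) + v[1]*h(e2)."""
    return [v[0]*a + v[1]*b for a, b in zip(e1_image, e2_image)]

print(h([2, -1]))  # 2*h(e1) - h(e2) = [2, -3, 3]
```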

**Question 8**. (a) A spanning set for the range space is given by the set of columns of the given matrix $A$; call them $\vec{a}_1, \vec{a}_2, \vec{a}_3$.

To determine if this set is a basis we need to determine if it is linearly independent. To do this we solve the vector equation

$$c_1 \vec{a}_1 + c_2 \vec{a}_2 + c_3 \vec{a}_3 = \vec{0}$$

for the coefficients $c_1, c_2, c_3$. This sets up a system of equations which can be represented by an augmented matrix.

The echelon form of this matrix shows that there are infinitely many solutions for $c_1, c_2, c_3$. Indeed, $c_3$ is a free variable and $c_1$ and $c_2$ are determined by it. The first two column vectors are not multiples of one another, and so they do form a linearly independent spanning set for the range space. This means we may take $\{\vec{a}_1, \vec{a}_2\}$ as a basis.

(b) The null space consists of all the vectors $\vec{v}$ so that $A\vec{v} = \vec{0}$. To determine the vectors in the null space, we solve the same system of equations that was solved in part (a). From that work we learned that the null space consists of all multiples of a single vector; in other words, the null space is one-dimensional.

(c) From parts (a) and (b), it follows that $\det(A) = 0$. To actually compute it we can expand along the first row and confirm that the determinant equals $0$.

(d) As discussed in our work for part (a), the columns of $A$ are linearly dependent. This corresponds to the fact that the rank of $A$ is 2 (rather than 3).

(e) The row vectors are also linearly dependent, as can be seen by exhibiting one row as a linear combination of the others. Alternatively, since we computed $\det(A) = 0$ and we know that $\det(A) = \det(A^{\mathsf{T}})$, we see that the columns of $A^{\mathsf{T}}$ are linearly dependent, but the columns of $A^{\mathsf{T}}$ are precisely the rows of $A$.
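As a numeric illustration of parts (c) and (e) (with a hypothetical singular matrix, since the problem's matrix is not reproduced here), dependent rows force a zero determinant:

```python
# Dependent rows force det = 0, computed by cofactor expansion (Question 8).

def det3(M):
    """3x3 determinant, expanded along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

M = [[1, 2, 3],
     [4, 5, 6],
     [5, 7, 9]]   # third row = first row + second row
print(det3(M))    # 0, confirming the rows are linearly dependent
```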

**Question 9**. (a) If $T$ is non-singular then $T$ is an isomorphism, which implies that $T$ is one-to-one, and we have already established that $T$ is one-to-one if and only if $\mathcal{N}(T) = \{\vec{0}\}$.

(b) If $T$ is non-singular then $T$ is an isomorphism, which implies that $T$ is onto, which is equivalent to the range space of $T$ coinciding with the entire codomain.

**Question 10**. (a) $T$ applied to the given matrix is computed directly from the definition of $T$; the output is a polynomial.

(b) (skipped for now)

(c) Suppose the matrix $M$ satisfies $T(M) = 0$. This means that the six entries of $M$ must satisfy a system of linear equations.

One can encode this linear system as an augmented matrix and row reduce. Doing so reveals one free variable (with the remaining variables determined by it, most equalling zero). The null space is one-dimensional since one finds that every solution is a scalar multiple of a single matrix.

(d) Since the nullity of $T$ is one and the dimension of the domain is six, the Rank-Nullity theorem implies that the rank of $T$ equals $6 - 1 = 5$.

(e) From part (d) above, we know that $T$ is not onto, and so it is possible that the given polynomial is not in the range space. Indeed, since the coefficient in front of one of its terms does not equal the coefficient in front of another term, as the definition of $T$ forces for every output, this polynomial is not an output of $T$.

**Question 11.** (a) That $\vec{v}$ is an eigenvector with eigenvalue $\lambda$ for the matrix $A$ means that $\vec{v} \neq \vec{0}$ and that $A\vec{v} = \lambda\vec{v}$.

(b) First note that $\vec{0} \in V_\lambda$ by definition. Moreover, suppose $\vec{v}_1, \vec{v}_2 \in V_\lambda$. This means that $A\vec{v}_1 = \lambda\vec{v}_1$ and $A\vec{v}_2 = \lambda\vec{v}_2$. Let $\vec{w} = \vec{v}_1 + \vec{v}_2$; then $\vec{w} \in V_\lambda$ since $A\vec{w} = A\vec{v}_1 + A\vec{v}_2 = \lambda\vec{v}_1 + \lambda\vec{v}_2 = \lambda\vec{w}$. Lastly, suppose $\vec{v} \in V_\lambda$ and $c$ is a scalar. Then

$$A(c\vec{v}) = c\,A\vec{v} = c\lambda\vec{v} = \lambda(c\vec{v}),$$

which shows that $c\vec{v} \in V_\lambda$.

$V_\lambda$ is therefore a subspace.

(c) As we learned (or will learn) in our FOM classes, two sets are equal when each is a subset of the other. Therefore, to prove that $V_\lambda = \mathcal{N}(A - \lambda I)$ we can first show that $\mathcal{N}(A - \lambda I) \subseteq V_\lambda$ and then show that $V_\lambda \subseteq \mathcal{N}(A - \lambda I)$.

To prove that $\mathcal{N}(A - \lambda I) \subseteq V_\lambda$ let $\vec{v} \in \mathcal{N}(A - \lambda I)$. By definition of null space this means that

$$(A - \lambda I)\vec{v} = \vec{0},$$

which can be rewritten as

$$A\vec{v} = \lambda\vec{v},$$

which implies that $\vec{v}$ is an eigenvector with eigenvalue $\lambda$ (or that $\vec{v} = \vec{0}$). That is, $\vec{v} \in V_\lambda$, as desired.

To prove that $V_\lambda \subseteq \mathcal{N}(A - \lambda I)$, let $\vec{v} \in V_\lambda$. By the given definition this means that either $\vec{v} = \vec{0}$ or that $A\vec{v} = \lambda\vec{v}$ for the specified scalar $\lambda$. In the first case, $\vec{v} \in \mathcal{N}(A - \lambda I)$ since $(A - \lambda I)\vec{0} = \vec{0}$. In the second case we have

$$(A - \lambda I)\vec{v} = A\vec{v} - \lambda\vec{v} = \lambda\vec{v} - \lambda\vec{v} = \vec{0},$$

and so $\vec{v} \in \mathcal{N}(A - \lambda I)$.

Note: Although proving two sets are equal can always be accomplished by demonstrating that each is a subset of the other (in fact, this is the definition of set equality), sometimes this can be demonstrated by concluding that each set can be described in exactly the same way. This can be done for this problem (and probably feels like a shorter solution). By definition of $V_\lambda$ we have that

$$V_\lambda = \{\vec{v} : A\vec{v} = \lambda\vec{v}\} = \{\vec{v} : (A - \lambda I)\vec{v} = \vec{0}\} = \mathcal{N}(A - \lambda I).$$
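A quick numerical illustration of part (c), using a hypothetical matrix: an eigenvector for $\lambda$ sits in the null space of $A - \lambda I$.

```python
# Question 11(c): A v = lambda*v exactly when (A - lambda*I) v = 0.

A, lam = [[2, 1], [0, 3]], 3
v = [1, 1]   # an eigenvector of A for eigenvalue 3

Av = [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]
B  = [[A[0][0] - lam, A[0][1]], [A[1][0], A[1][1] - lam]]  # A - lam*I
Bv = [B[0][0]*v[0] + B[0][1]*v[1], B[1][0]*v[0] + B[1][1]*v[1]]

print(Av, Bv)  # Av = 3*v while Bv is the zero vector
```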

**Question 12**. (a) One finds that

(b) By the definition of matrix representation, we have that

$${\rm Rep}_{B,D}(h) \cdot {\rm Rep}_B(\vec{v}) = {\rm Rep}_D\bigl(h(\vec{v})\bigr).$$

Given the vector $\vec{v}$ we first need to compute ${\rm Rep}_B(\vec{v})$. As discussed in class, this representation is easy to find.

We then multiply the given matrix against this representation to obtain ${\rm Rep}_D\bigl(h(\vec{v})\bigr)$.

We now set this output equal to a linear combination of vectors from $D$ and solve for $h(\vec{v})$.
**Question 13.** (a) To determine if the given set of vectors is linearly independent or linearly dependent we need to determine which scalars $c_1, c_2, c_3$ can be used to solve

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 + c_3 \vec{v}_3 = \vec{0}.$$

This vector equation corresponds to a linear system of equations (in the variables $c_1, c_2, c_3$) that can be encoded by an augmented matrix.

This augmented matrix can be row-reduced to echelon form, from which we conclude that $c_1 = c_2 = c_3 = 0$. It follows that all three vectors are linearly independent.

(b) The given set of polynomials corresponds to the set of vectors from part (a) under the isomorphism where an arbitrary degree-2 (or less) polynomial is mapped to its vector of coefficients.

As discussed in class and in our textbook, representation functions are isomorphisms. This implies that, since the image of the given set of polynomials under this isomorphism is a basis, the given set of polynomials is also a basis for the space of degree-2 (or less) polynomials.
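Part (b)'s coordinate-isomorphism argument can be sketched with a hypothetical set of polynomials (not the ones from the problem): independence of the coefficient vectors gives independence of the polynomials.

```python
# Question 13(b): under a + b*x + c*x^2 |-> (a, b, c), checking a set of
# polynomials for independence reduces to checking vectors in R^3.

def det3(M):
    """3x3 determinant, expanded along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# hypothetical polynomials 1 + x, x + x^2, 1 + x^2 as coordinate columns
cols = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
M = [list(row) for row in zip(*cols)]  # matrix with these columns
print(det3(M))  # nonzero, so the three polynomials are independent
```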

(c) To solve for the null space of $h$, one solves the equation $h(\vec{v}) = \vec{0}$ for the input vector $\vec{v}$. This leads to a linear system which can be encoded by an augmented matrix.

This augmented matrix was already handled in part (a); the only solution possible is the zero vector, which implies that $\mathcal{N}(h) = \{\vec{0}\}$.

(d) We can use part (c) and the Rank-Nullity theorem to conclude that $h$ is onto.

(e) One can note that the set from part (a) coincides with the set of outputs of $h$ on a basis, and this set of vectors was proven to be a basis, so the range space of $h$ is the entire codomain.

(f) Since $h$ is both one-to-one and onto it is an isomorphism and so its matrix is invertible. This means its columns (and its rows) are linearly independent, which implies that the determinant is non-zero.

Alternatively, one can explicitly compute the determinant and check that it is non-zero.

**Question 14**. (a) More generally we can prove that given any matrix $A$ the set

$$S = \{B : AB = BA\}$$

is a subspace of the space of matrices. First note that the zero matrix is always an element of $S$ since $A \cdot 0 = 0 = 0 \cdot A$. Next, suppose $B_1, B_2 \in S$. We then have

$$A(B_1 + B_2) = AB_1 + AB_2 = B_1 A + B_2 A = (B_1 + B_2)A,$$

and so $B_1 + B_2 \in S$. Finally, if $B \in S$ and $c$ is a scalar then

$$A(cB) = c(AB) = c(BA) = (cB)A,$$

which implies that $cB \in S$. Therefore $S$ is a subspace.

(b) For the specific matrix $A$ given in the problem we have that an arbitrary matrix $B$ is an element of $S$ when

$$AB = BA.$$

This matrix equation produces a system of linear equations in the entries of $B$, which can be rewritten and encoded by an augmented matrix.

Row reducing tells us which entries of $B$ are free, and the number of free entries is the dimension of this subspace. In particular, writing the general solution in terms of the free entries exhibits a basis for $S$.

(c) There are lots of matrices $B$ with $AB = BA$. For instance, one could use the zero matrix, the matrix $A$ itself, the identity matrix or, in fact, any multiple of the identity matrix.
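A quick check of part (c), with a hypothetical $2 \times 2$ matrix $A$: the zero matrix, $A$ itself, and multiples of the identity all commute with $A$.

```python
# Question 14(c): easy members of S = {B : AB = BA}.

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
for B in ([[0, 0], [0, 0]],            # the zero matrix
          A,                           # A commutes with itself
          [[5, 0], [0, 5]]):           # a multiple of the identity
    print(matmul(A, B) == matmul(B, A))  # True each time
```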

**Question 15**. (a) Suppose $h \colon V \to W$ is linear. The range space of $h$ is defined by

$$\mathcal{R}(h) = \{h(\vec{v}) : \vec{v} \in V\}.$$

Since $h(\vec{0}) = \vec{0}$, the zero vector is an element of the range space. Suppose $\vec{w} \in \mathcal{R}(h)$ and that $c$ is a scalar. Then $\vec{w} = h(\vec{v})$ for some $\vec{v} \in V$, and so

$$c\vec{w} = c\,h(\vec{v}) = h(c\vec{v}),$$

which implies that $c\vec{w} \in \mathcal{R}(h)$.

Similarly, suppose $\vec{w}_1, \vec{w}_2 \in \mathcal{R}(h)$. This means that there exist $\vec{v}_1, \vec{v}_2 \in V$ so that $h(\vec{v}_1) = \vec{w}_1$ and $h(\vec{v}_2) = \vec{w}_2$. By the linearity of $h$ we have that

$$\vec{w}_1 + \vec{w}_2 = h(\vec{v}_1) + h(\vec{v}_2) = h(\vec{v}_1 + \vec{v}_2),$$

and so $\vec{w}_1 + \vec{w}_2 \in \mathcal{R}(h)$.

(b) The null space of $h$ is defined as $\mathcal{N}(h) = \{\vec{v} \in V : h(\vec{v}) = \vec{0}\}$. First note that $\vec{0} \in \mathcal{N}(h)$ since $h(\vec{0}) = \vec{0}$. Next, suppose $\vec{v}_1, \vec{v}_2 \in \mathcal{N}(h)$ and suppose $c_1, c_2$ are scalars. Then $h(c_1\vec{v}_1 + c_2\vec{v}_2) = c_1 h(\vec{v}_1) + c_2 h(\vec{v}_2) = \vec{0}$. Hence $c_1\vec{v}_1 + c_2\vec{v}_2 \in \mathcal{N}(h)$ and the null space is a subspace.

(c) Let $S \subseteq V$ be a subspace. We want to show that the set

$$h(S) = \{h(\vec{s}) : \vec{s} \in S\}$$

is a subspace of $W$. First note that $\vec{0} \in S$ since $S$ is a subspace, and $h(\vec{0}) = \vec{0}$. Therefore $\vec{0} = h(\vec{0}) \in h(S)$.

Next, suppose $\vec{w}_1, \vec{w}_2 \in h(S)$ and let $c_1, c_2$ be scalars. By the definition of $h(S)$ this means that there exist $\vec{s}_1, \vec{s}_2 \in S$ such that $h(\vec{s}_1) = \vec{w}_1$ and $h(\vec{s}_2) = \vec{w}_2$. Observe that

$$c_1\vec{w}_1 + c_2\vec{w}_2 = c_1 h(\vec{s}_1) + c_2 h(\vec{s}_2) = h(c_1\vec{s}_1 + c_2\vec{s}_2),$$

which implies that $c_1\vec{w}_1 + c_2\vec{w}_2 \in h(S)$ since, by virtue of $S$ being a subspace, $c_1\vec{s}_1 + c_2\vec{s}_2 \in S$.

(d) Let $U \subseteq W$ be a subspace. The set

$$h^{-1}(U) = \{\vec{v} \in V : h(\vec{v}) \in U\}$$

can be shown to be a subspace. First, $\vec{0} \in h^{-1}(U)$ since $h(\vec{0}) = \vec{0} \in U$ (because $U$ is a subspace).

Next, suppose $\vec{v}_1, \vec{v}_2 \in h^{-1}(U)$ and let $c_1, c_2$ be scalars. Since $U$ is a subspace of $W$ it follows that $c_1 h(\vec{v}_1) + c_2 h(\vec{v}_2) \in U$. Because $h$ is linear this expression equals

$$h(c_1\vec{v}_1 + c_2\vec{v}_2),$$

and so $c_1\vec{v}_1 + c_2\vec{v}_2 \in h^{-1}(U)$.
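A small check of part (c), using a hypothetical map $h(\vec{v}) = A\vec{v}$ on $\mathbb{R}^2$: the image of the subspace $S = {\rm span}\{\vec{s}\}$ is ${\rm span}\{h(\vec{s})\}$, closed under scaling.

```python
# Question 15(c): h carries a one-dimensional subspace onto the span of h(s).

A = [[1, 2], [3, 4]]
h = lambda v: [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

s  = [1, 1]
hs = h(s)                        # h(s)
# h of a scalar multiple of s is the same scalar multiple of h(s):
print(h([5*s[0], 5*s[1]]) == [5*hs[0], 5*hs[1]])  # True
```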