Linear MidTerm 2 Answers

Question 1.  This was a homework question, and it was (partially) discussed in class.  Here I will write a shorter solution than the one given in class.

Suppose h \in \mathcal{L}(V, W) is an isomorphism.  To check that the inverse function h^{-1} is linear we want to show

h^{-1}(r\cdot\vec{w}_1+s\cdot\vec{w}_2) = r\cdot h^{-1}(\vec{w}_1) + s\cdot h^{-1}(\vec{w}_2)

for all vectors \vec{w}_1, \vec{w}_2 \in W and for all scalars r, s \in \mathbb{R}.  Since h is onto, we have that for any such vectors \vec{w}_i \in W, there exist vectors \vec{v}_i \in V such that h(\vec{v}_i) = \vec{w}_i.  Note that this is equivalent to \vec{v}_i = h^{-1}(\vec{w}_i).

We then have

h^{-1}(r\cdot\vec{w}_1+s\cdot\vec{w}_2) = h^{-1}(r\cdot h(\vec{v}_1) + s\cdot h(\vec{v}_2)) = h^{-1}(h(r\cdot \vec{v}_1 + s\cdot \vec{v}_2))

since h is linear.  By definition of inverse function, this equals

= r\cdot \vec{v}_1 + s\cdot \vec{v}_2 = r\cdot h^{-1}(\vec{w}_1) + s\cdot h^{-1}(\vec{w}_2)

establishing the desired equation.  \square
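
(As an aside, not part of the exam solution: this equation is easy to sanity-check numerically.  Below is a minimal numpy sketch, modelling h by an arbitrarily chosen invertible matrix.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Model the isomorphism h by an invertible matrix; this particular
# matrix is just an arbitrary example with nonzero determinant.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)  # plays the role of h^{-1}

w1 = rng.standard_normal(2)
w2 = rng.standard_normal(2)
r, s = 3.0, -2.0

# h^{-1}(r*w1 + s*w2) should agree with r*h^{-1}(w1) + s*h^{-1}(w2).
lhs = A_inv @ (r * w1 + s * w2)
rhs = r * (A_inv @ w1) + s * (A_inv @ w2)
assert np.allclose(lhs, rhs)
```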

 

Question 2.  (a) In general, we know that the matrix representation of any homomorphism T \in \mathcal{L}(V, W) — with respect to bases \mathcal{B} and \mathcal{D} — is given by

\text{Rep}_{\mathcal{B}, \mathcal{D}}(T) = \left[ \begin{array}{cccc} | & | & & | \\ \text{Rep}_{\mathcal{D}} T(\vec{v}_1) & \text{Rep}_{\mathcal{D}} T(\vec{v}_2) & \cdots & \text{Rep}_{\mathcal{D}} T(\vec{v}_n) \\ | & | & & | \end{array}\right]

where, as in our problem, the basis is \mathcal{B} = \{\vec{v}_1, \cdots, \vec{v}_n\}.  When we apply this to the transformation T = \text{id}_{\mathbb{R}^n} for V = \mathbb{R}^n = W and bases \mathcal{B} and \mathcal{D} = \mathcal{E}_n we find

\text{Rep}_{\mathcal{D}} T(\vec{v}_i) = \text{Rep}_{\mathcal{E}_n} \text{id}_{\mathbb{R}^n}(\vec{v}_i) = \text{Rep}_{\mathcal{E}_n} \vec{v}_i

since, by definition of the identity map, \text{id}_{\mathbb{R}^n}(\vec{v}_i) = \vec{v}_i.  As discussed in class, the representation of a vector \vec{v}_i \in \mathbb{R}^n with respect to the standard basis is given by

\text{Rep}_{\mathcal{E}_n} \vec{v}_i = \left[ \begin{array}{c} | \\ \vec{v}_i \\ | \end{array}\right].

Hence, the columns of the matrix representation of the identity map are precisely the vectors \vec{v}_i.

(b) The columns of the representation matrix \text{Rep}_{\mathcal{B}, \mathcal{B}}(h) are given by

\text{Rep}_{\mathcal{B}} h(\vec{v}_i) = \text{Rep}_{\mathcal{B}} \lambda_i \vec{v}_i = \left[ \begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \\ \lambda_i \\ 0 \\ \vdots \\ 0 \end{array} \right]

As a result, since \lambda_i sits in the i-th coordinate of this column, the matrix representation has the given diagonal form.
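
(A quick numerical illustration of part (b), assuming numpy: for a matrix H with a known eigenbasis, changing coordinates into that eigenbasis should produce exactly this diagonal matrix.  The symmetric H below is just a convenient example, since numpy can compute its eigenbasis directly.)

```python
import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # example matrix; eigenvalues are 1 and 3
eigvals, B = np.linalg.eigh(H)    # columns of B form an eigenbasis of H

# Rep_{B,B}(h) = B^{-1} H B should be diagonal, with the eigenvalues
# appearing in the same order as the eigenbasis columns.
rep = np.linalg.inv(B) @ H @ B
assert np.allclose(rep, np.diag(eigvals))
```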

 

Question 3.  (a) To prove that T is linear, let p(x), q(x) \in \mathcal{P}_2 be arbitrary and let r, s \in \mathbb{R} be arbitrary, too.  Then

T(r\cdot p(x) + s\cdot q(x)) = x^2\left(rp(x) + sq(x)\right)'' = x^2(rp''(x) + sq''(x)) = rx^2p''(x) + sx^2q''(x) = r\cdot T(p(x)) + s\cdot T(q(x)).

(b) T is not one-to-one since the two constant polynomials p(x) = 5 and q(x) = 7 are distinct, but T(5) = 0 = T(7).

(c) From part (b) it follows that, since T is not one-to-one, the null space of T contains non-trivial vectors in addition to the zero vector.  This means that \text{dim}\,\mathcal{N}(T) > 0.  By the Rank-Nullity Theorem, \text{dim}\,\mathcal{R}(T) < \text{dim}\,\mathcal{P}_2 = 3.  Since the domain and co-domain are the same space, the rank of T is smaller than the dimension of the co-domain, which, in turn, tells us that T is not onto.

One particular element that is not in the range space of T is the polynomial 1+x.  One can argue this from the definition of T, since every output it produces is a multiple of the polynomial x^2.  Thus, any polynomial in \mathcal{P}_2 that is not a multiple of x^2 cannot be an output.

Alternatively, one may argue that this particular polynomial, 1+x, is not in the range space \mathcal{R}(T) by setting up a relevant system of equations.  This can be achieved by noting that 1+x \in \mathcal{R}(T) \iff T(a + bx + cx^2) = 1+x for some polynomial a + bx + cx^2 \in \mathcal{P}_2.  This happens, though, if and only if

x^2\left(a + bx + cx^2\right)'' = x^2(2c) = 2c\,x^2 = 1 + x

which corresponds to the augmented system of equations

\left( \begin{array}{ccc|c} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 2 & 0 \end{array} \right)

which has no solutions.
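
(On coefficient vectors, T acts by (a, b, c) \mapsto (0, 0, 2c), so the claims above can be double-checked numerically.  A small numpy sketch, assuming the coefficient ordering (a, b, c) for a + bx + cx^2:)

```python
import numpy as np

# T(a + bx + cx^2) = 2c x^2, i.e. (a, b, c) |-> (0, 0, 2c):
T = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

# Rank 1, so the nullity is 2 and T is neither one-to-one nor onto.
assert np.linalg.matrix_rank(T) == 1

# 1 + x corresponds to (1, 1, 0); the best least-squares "preimage"
# still fails to hit it, confirming 1 + x is not in the range space.
target = np.array([1.0, 1.0, 0.0])
x, *_ = np.linalg.lstsq(T, target, rcond=None)
assert not np.allclose(T @ x, target)
```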

Question 4.  (a) Let us notate

A = \left[ \begin{array}{cccc} | & | & & | \\ \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \\ | & | & & | \end{array} \right]

so that the column space of A can be notated as the span of the set S = \{\vec{v}_1, \cdots, \vec{v}_n\}.

We want to prove that [S] = \mathcal{R}(A).  To do this, we must show that every element of [S] is also an element of \mathcal{R}(A) and, conversely, we must show that every element of \mathcal{R}(A) is also an element of [S].

Alternatively, we can argue that these two sets coincide by proving that \mathcal{R}(A) and [S] have the same spanning set.  In particular, since [S] is spanned by S, it suffices to prove that \mathcal{R}(A) is also spanned by S.  This can be proved in two steps.

(1) First, we can show that the i-th column of matrix A — what we’ve labelled \vec{v}_i — is always the result of the matrix product

A\vec{e}_i = \vec{v}_i where \vec{e}_i = (0, 0, \cdots, 0, 1, 0, \dots, 0)^T

is the i-th vector in the standard basis \mathcal{E}_n.  This fact follows from the definition of matrix multiplication.  For instance, let’s check that A\vec{e}_1 = \vec{v}_1:

A\vec{e}_1 = \left[ \begin{array}{cccc} | & | & & | \\ \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \\ | & | & & | \end{array}\right] \, \left[ \begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right]  = \left[ \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{array} \right] \, \left[ \begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right] = \left[ \begin{array}{c} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{array}\right] = \vec{v}_1

Similar computations show that \vec{v}_i = A\vec{e}_i.

(2) We can now use the linearity of matrix multiplication to show that every vector in \mathcal{R}(A) can be expressed as a linear combination of the columns \vec{v}_i = A\vec{e}_i.

Indeed, let \vec{x} \in \mathbb{R}^n be arbitrary.  Then, using the standard basis for \mathbb{R}^n, we can uniquely express

\vec{x} = c_1\vec{e}_1 + c_2\vec{e}_2 + \cdots + c_n\vec{e}_n

and so

A\vec{x} = A(c_1\vec{e}_1 + c_2\vec{e}_2 + \cdots + c_n\vec{e}_n) = c_1A\vec{e}_1 + c_2A\vec{e}_2 + \cdots + c_nA\vec{e}_n = c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n

which shows that every vector A\vec{x} in the range space can be written as a linear combination of the columns \vec{v}_i.  In other words, this shows that the range space is spanned by S = \{\vec{v}_1, \cdots, \vec{v}_n\}, and so \mathcal{R}(A) = [S].
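
(Both steps are easy to confirm numerically.  A short numpy sketch, using an arbitrary 3 \times 4 matrix as an example:)

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # an arbitrary m x n example with m=3, n=4
x = rng.standard_normal(4)

# Step (1): multiplying by e_i picks out the i-th column.
e2 = np.zeros(4)
e2[1] = 1.0
assert np.allclose(A @ e2, A[:, 1])

# Step (2): A x is the combination x_1*(col 1) + ... + x_n*(col n).
combo = sum(x[i] * A[:, i] for i in range(A.shape[1]))
assert np.allclose(A @ x, combo)
```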

(b) To show that the dimension of the column space equals the dimension of the row space we can resort to some observations made at the beginning of this course.  In particular, every m \times n matrix has a reduced row echelon form wherein all rows of 0s are listed at the “bottom” of the reduced matrix, and the non-zero rows are organized so that each leading 1 appears to the left of the leading 1s in the rows below.

Note that row-reducing a matrix is the computational step we would take to answer a very abstract-sounding question: Is the matrix (regarded as a homomorphism) onto?  To answer this question we would let \vec{y} \in \mathbb{R}^m be arbitrary and attempt to solve

A\vec{x} = \vec{y}

for the column vector \vec{x} \in \mathbb{R}^n.  Written in matrix form, this gives us

\left[ \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{array} \right] \, \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array}\right] = \left[ \begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_m \end{array} \right]

which can be represented by the augmented matrix

\left[ \begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & y_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & y_2 \\ \vdots & \vdots & \ddots & \vdots  & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & y_m \end{array} \right]

The answer to this question depends on the reduced row echelon form of A.  If there are no rows of 0s, then the system has a solution for every \vec{y} and so A is onto.  More generally, each row of 0s imposes one restriction on the possible output vectors \vec{y}, while each non-zero row contributes “one degree of freedom” to the outputs.  Hence the number of linearly independent rows (the dimension of the row space) equals the dimension of the range space, which, by part (a), is the dimension of the column space.
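
(This “row rank equals column rank” conclusion can be spot-checked with numpy; the factorization below just manufactures a 4 \times 5 example whose rank is 2 rather than full.)

```python
import numpy as np

rng = np.random.default_rng(2)
# A 4x5 matrix built as a product of 4x2 and 2x5 factors is (generically) rank 2.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))

# The dimension of the column space of A equals the dimension of its
# row space, i.e. the column space of A^T.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 2
```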

Question 5.  Suppose h \in \mathcal{L}(V, W) is an isomorphism and that \mathcal{B} = \{\vec{v}_1, \dots, \vec{v}_n\} is a basis for V.  To prove that \mathcal{D} = \{h(\vec{v}_1), \cdots, h(\vec{v}_n)\} is a basis for W we must show that this set both spans W and is linearly independent.

Quick note before proceeding with the proof: we already know that if two vector spaces are isomorphic, then they must have the same dimension.  It is at least plausible, then, that the set \mathcal{D} is a basis for W since it, too, contains n vectors.

To prove that \mathcal{D} is linearly independent suppose

c_1h(\vec{v}_1) + \cdots + c_nh(\vec{v}_n) = \vec{0}

where c_i \in \mathbb{R}.  We need to show that c_1 = c_2 = \cdots = c_n = 0.  By the linearity of h the above equation can be rewritten as

h\left(c_1\vec{v}_1 + \cdots + c_n\vec{v}_n\right) = \vec{0}.

Because h is assumed to be an isomorphism, h is one-to-one.  As proven on a previous homework, this implies that \mathcal{N}(h) = \{\vec{0}\}.  The vector expression above is in the null space of h and so we learn that

c_1\vec{v}_1 + \cdots + c_n\vec{v}_n = \vec{0}

but because the vectors \vec{v}_i are assumed to form a basis they are linearly independent and so c_1 = c_2 = \cdots = c_n = 0 as desired.

To prove that \mathcal{D} spans W let \vec{w} \in W be arbitrary.  Because h is an isomorphism, h is onto and so \exists \, \vec{v} \in V so that h(\vec{v}) = \vec{w}.  Because \mathcal{B} is a basis for V we can (uniquely!) express \vec{v} = c_1\vec{v}_1 + \cdots + c_n\vec{v}_n.  We then find

\vec{w} = h(\vec{v}) = h\left(c_1\vec{v}_1 + \cdots + c_n\vec{v}_n\right) = c_1h(\vec{v}_1) + \cdots + c_nh(\vec{v}_n)

and so the set \mathcal{D} spans W.
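
(A concrete numerical instance of this result, with arbitrarily chosen matrices: an invertible matrix H sends the columns of a basis matrix B to the columns of HB, and those columns again form a basis exactly when \det(HB) \neq 0.)

```python
import numpy as np

H = np.array([[1.0, 2.0],
              [0.0, 1.0]])    # invertible, hence an isomorphism of R^2
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])   # columns form a basis of R^2 (det = -2)

# The images h(v_i) are the columns of H @ B; since H and B are both
# invertible, so is H @ B, and its columns form a basis of R^2.
D = H @ B
assert abs(np.linalg.det(D)) > 1e-12
```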

Question 6. (a) To prove that h is linear we let A, B \in \mathcal{M}_{2\times 2} and c_1, c_2 \in \mathbb{R} be arbitrary.  We will denote the entries of A and B by a_{ij} and b_{ij}, respectively.  We then have

h \left( c_1A + c_2B \right) = \left(\begin{array}{cc} c_1a_{11} + c_2b_{11} & c_1a_{12} + c_2b_{12} \\ c_1 a_{21} + c_2 b_{21} & c_1 a_{22} + c_2 b_{22} \end{array} \right)^T = \left( \begin{array}{cc} c_1a_{11} + c_2b_{11} & c_1 a_{21} + c_2 b_{21} \\ c_1a_{12} + c_2b_{12} & c_1 a_{22} + c_2 b_{22} \end{array}\right)

= \left( \begin{array}{cc} c_1 a_{11} & c_1 a_{21} \\ c_1 a_{12} & c_1 a_{22} \end{array}\right) + \left( \begin{array}{cc} c_2 b_{11} & c_2 b_{21} \\ c_2 b_{12} & c_2 b_{22} \end{array}\right) = c_1 \left( \begin{array}{cc} a_{11} & a_{21} \\ a_{12} & a_{22} \end{array} \right) + c_2\left( \begin{array}{cc} b_{11} & b_{21} \\ b_{12} & b_{22} \end{array}\right)

= c_1\left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array}\right)^T + c_2\left( \begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22} \end{array}\right)^T = c_1 h(A) + c_2 h(B)

demonstrating that h is linear.

(b)  Using a previous homework problem, we know that h is one-to-one \iff \left( h(A) = 0 \Rightarrow A = 0 \right).  Therefore, suppose

h(A) = A^T = \left( \begin{array}{cc} a_{11} & a_{21} \\ a_{12} & a_{22} \end{array}\right) = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right)

This equation implies that a_{ij} = 0 for all 1 \leq i, j \leq 2, and so the original matrix A consists entirely of 0 entries.  This means A was the zero matrix, and so h is one-to-one.

(c) Since, by part (b), the nullity of h is 0 and the dimension of the domain space is \text{dim}\,\mathcal{M}_{2\times 2} = 4, the Rank-Nullity Theorem implies that the rank of h is 4.  (For this question, this implies that h is onto!)

(d) The matrix representation (with respect to the standard bases) is given by

\text{Rep}_{\mathcal{E}_{2\times 2}, \mathcal{E}_{2\times 2}} (h) = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)
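
(We can sanity-check this matrix with numpy: ordering the standard basis as \{E_{11}, E_{12}, E_{21}, E_{22}\}, the representation of a matrix A is its row-major flattening, and multiplying by the matrix above should produce the flattening of A^T.)

```python
import numpy as np

P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 2))

# P applied to the coordinates of A gives the coordinates of A^T.
assert np.allclose(P @ A.flatten(), A.T.flatten())

# P is a permutation matrix, so h is invertible, matching parts (b) and (c).
assert np.isclose(abs(np.linalg.det(P)), 1.0)
```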

Question 7.  For ease of notation, let us denote the function \text{Rep}_{\mathcal{B},\mathcal{D}} by R: \mathcal{L}(V,W) \to \mathcal{M}_{m\times n}.  Let us also name the elements of the bases: \mathcal{B} = \{\vec{v}_1, \cdots, \vec{v}_n \} and \mathcal{D} = \{\vec{w}_1, \cdots, \vec{w}_m\}.

Let h_1, h_2 \in \mathcal{L}(V,W).  Then the homomorphism h_3 = (h_1+h_2) has as its representation matrix (with respect to the given bases)

R(h_3) = \left[ \begin{array}{cccc} | & | & & | \\ \text{Rep}_{\mathcal{D}}h_3(\vec{v}_1) & \text{Rep}_{\mathcal{D}}h_3(\vec{v}_2) & \cdots & \text{Rep}_{\mathcal{D}}h_3(\vec{v}_n) \\ | & | & & | \end{array}\right]

By definition of h_3 we have that h_3(\vec{v}_i) = (h_1 + h_2)(\vec{v}_i) = h_1(\vec{v}_i) + h_2(\vec{v}_i).  Moreover, because the function \text{Rep}_{\mathcal{D}} : W \to \mathbb{R}^m is linear, we see that each column in the above matrix is equal to

\text{Rep}_{\mathcal{D}} h_3(\vec{v}_i) = \text{Rep}_{\mathcal{D}} (h_1(\vec{v}_i) + h_2(\vec{v}_i)) = \text{Rep}_{\mathcal{D}}h_1(\vec{v}_i) + \text{Rep}_{\mathcal{D}}h_2(\vec{v}_i)

Hence, by the definition of matrix addition, we see that

R(h_3) = \left[ \begin{array}{ccc} | &  & | \\ \text{Rep}_{\mathcal{D}}h_3(\vec{v}_1)  & \cdots & \text{Rep}_{\mathcal{D}}h_3(\vec{v}_n) \\ | &  & | \end{array}\right]

= \left[ \begin{array}{ccc} | &  & | \\ \text{Rep}_{\mathcal{D}}h_1(\vec{v}_1)  & \cdots & \text{Rep}_{\mathcal{D}}h_1(\vec{v}_n) \\ | &  & | \end{array}\right] + \left[ \begin{array}{ccc} | &  & | \\ \text{Rep}_{\mathcal{D}}h_2(\vec{v}_1)  & \cdots & \text{Rep}_{\mathcal{D}}h_2(\vec{v}_n) \\ | &  & | \end{array}\right]

= R(h_1) + R(h_2).

Similar reasoning can be used to argue why R(c\,h) = cR(h) for any homomorphism h \in \mathcal{L}(V,W) and any scalar c \in \mathbb{R}.  Indeed, the i-th column of the matrix representation R(c\,h) will be given by

\text{Rep}_{\mathcal{D}}\,(c\,h)(\vec{v}_i) = \text{Rep}_{\mathcal{D}}\,\left(c\cdot h(\vec{v}_i)\right) = c\,\text{Rep}_{\mathcal{D}}\, h(\vec{v}_i)

which is c times the i-th column of R(h).  Since this holds for every column, R(c\,h) = cR(h).

Lastly, we need to demonstrate that R:\mathcal{L}(V, W) \to \mathcal{M}_{m\times n} is both one-to-one and onto.  For the former, suppose that for some h \in \mathcal{L}(V, W) we have R(h) = 0.  This would mean that every entry of the matrix representation of h equals 0 so that, in particular

\text{Rep}_{\mathcal{D}} h(\vec{v}_i) = \left[ \begin{array}{c}  0 \\ 0 \\ \vdots \\ 0 \end{array}\right]

which implies that h(\vec{v}_i) = 0\vec{w}_1 + \cdots + 0\vec{w}_m = \vec{0}_W.  Since this happens for every basis vector \vec{v}_i, we find that for any arbitrary vector \vec{v} \in V

h(\vec{v}) = h(c_1\vec{v}_1 + \cdots + c_n\vec{v}_n) = c_1h(\vec{v}_1) + \cdots + c_nh(\vec{v}_n) = c_1\vec{0} + \cdots + c_n\vec{0} = \vec{0}.

Since h(\vec{v}) = \vec{0} for all inputs, this implies that h is the zero-homomorphism, i.e. the zero vector in the space \mathcal{L}(V, W).  Therefore, since the mapping R only has a trivial null space it is one-to-one.

To show that R is onto let A \in \mathcal{M}_{m\times n} be arbitrary.  We then need to construct an input, h \in \mathcal{L}(V, W), so that R(h) = A.  Let us denote the entries of A by the doubly-indexed expressions a_{ij} \in \mathbb{R}.  The first column of matrix A is then given by

\left[ \begin{array}{c} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{array} \right]

and the general i-th column of A is therefore given by

\left[ \begin{array}{c} a_{1i} \\ a_{2i} \\ \vdots \\ a_{mi} \end{array} \right]

In order for this matrix to be the representation of a homomorphism h \in \mathcal{L}(V, W) with respect to the bases \mathcal{B}, \mathcal{D}, it must follow that

h(\vec{v}_1) = a_{11}\vec{w}_1 + a_{21}\vec{w}_2 + \cdots + a_{m1}\vec{w}_m

and, more generally,

h(\vec{v}_i) = a_{1i}\vec{w}_1 + a_{2i}\vec{w}_2 + \cdots + a_{mi}\vec{w}_m.

As discussed in class (and in the textbook and via various homework exercises), a homomorphism h \in \mathcal{L}(V, W) is completely determined by its action on a basis.  That is, specifying the outputs of h(\vec{v}_i) uniquely defines the desired homomorphism h.  Therefore, given a matrix A \in \mathcal{M}_{m\times n}, the homomorphism h \in \mathcal{L}(V, W) that satisfies R(h) = A is the one defined by

h(\vec{v}_i) = a_{1i}\vec{w}_1 + a_{2i}\vec{w}_2 + \cdots + a_{mi}\vec{w}_m.
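
(When V = \mathbb{R}^n and W = \mathbb{R}^m, the map R can be written concretely: if the columns of matrices B and D hold the two bases and H is a homomorphism in standard coordinates, then column i of R(H) solves D\vec{c} = H\vec{v}_i, so R(H) = D^{-1}HB.  Below is a numpy sketch of the linearity just proved; the random matrices are generically invertible, which the sketch assumes.)

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 3, 2
B = rng.standard_normal((n, n))   # columns: a basis of R^n (generically invertible)
D = rng.standard_normal((m, m))   # columns: a basis of R^m

def rep(H):
    # Column i is Rep_D(h(v_i)): the solution c of D c = H v_i.
    # Stacking all the columns gives D^{-1} H B.
    return np.linalg.solve(D, H @ B)

H1 = rng.standard_normal((m, n))
H2 = rng.standard_normal((m, n))
c = 2.5
assert np.allclose(rep(H1 + H2), rep(H1) + rep(H2))   # R(h1 + h2) = R(h1) + R(h2)
assert np.allclose(rep(c * H1), c * rep(H1))          # R(c h) = c R(h)
```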

Question 8. (a) A spanning set for the range space is given by

T(\vec{e}_1) = \left( \begin{array}{c} 1 \\ 2 \\ 4 \end{array}\right), T(\vec{e}_2) = \left(\begin{array}{c} 1 \\ 0 \\ 2 \end{array}\right), T(\vec{e}_3) = \left(\begin{array}{c} 1 \\ 1 \\ 3 \end{array}\right).

To determine if this set is a basis we need to determine if it is linearly independent.  To do this we solve the vector equation

c_1T(\vec{e}_1) + c_2T(\vec{e}_2) + c_3T(\vec{e}_3) = \vec{0}

for the coefficients c_1, c_2, c_3 \in \mathbb{R}.  This sets up a system of equations which can be represented by the augmented matrix

\left( \begin{array}{ccc|c} 1 & 1 & 1 &  0 \\ 2 & 0 & 1 & 0 \\ 4 & 2 & 3 & 0 \end{array}\right) \to \left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right)

The echelon form of this matrix shows that there are infinitely many solutions for the c_i.  Indeed, c_3 is a free variable and we have c_2 = (-1/2)c_3 and c_1 = -c_2 - c_3 = (-1/2)c_3.  Taking c_3 = 2 gives the dependence T(\vec{e}_3) = \frac{1}{2}\left(T(\vec{e}_1) + T(\vec{e}_2)\right), so the first two columns already span the range space.  Since they are not multiples of one another, they form a linearly independent spanning set for the range space.  This means we may take \{ T(\vec{e}_1), T(\vec{e}_2) \} as a basis.

(b) The null space consists of all the vectors \vec{v} \in \mathbb{R}^3 so that T(\vec{v}) = A\vec{v} = \vec{0}.  To determine the vectors in the null space, we solve the system of equations that was solved in part (a).  From that work we learned that the null space consists of all multiples of the vector (-1/2, -1/2, 1)^T.  That is

\mathcal{N}(T) = \left \{ c_3\left(\begin{array}{c} -1/2 \\ -1/2 \\ 1 \end{array} \right) : c_3 \in \mathbb{R} \right\}.

(c) From parts (a) and (b), it follows that \det A = 0.  To actually compute it we can expand along the first row and find

\det A = 1\left| \begin{array}{cc} 0 & 1 \\ 2 & 3 \end{array}\right| - 1\left|\begin{array}{cc} 2 & 1 \\ 4 & 3 \end{array}\right| + 1 \left| \begin{array}{cc} 2 & 0 \\ 4 & 2 \end{array}\right| = -2 -2 + 4 = 0

(d) As discussed in our work for part (a), the columns of A are linearly dependent, while the first two columns are linearly independent.  This corresponds to the fact that the rank of T is 2.

(e) The row vectors are also linearly dependent, as can be seen by noting that 2(\text{row}_1) + (\text{row}_2) = (\text{row}_3).  Alternatively, since we computed \det A = 0 and we know that \det A^T = \det A = 0, we see that the columns of A^T are linearly dependent, but the columns of A^T are precisely the rows of A.
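
(All of the parts above can be verified at once with a short numpy check:)

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 0.0, 1.0],
              [4.0, 2.0, 3.0]])

assert np.linalg.matrix_rank(A) == 2        # part (d): rank 2
assert np.isclose(np.linalg.det(A), 0.0)    # part (c): det A = 0

# Part (b): (-1/2, -1/2, 1)^T spans the null space.
v = np.array([-0.5, -0.5, 1.0])
assert np.allclose(A @ v, 0.0)

# Part (e): 2*(row 1) + (row 2) = (row 3).
assert np.allclose(2 * A[0] + A[1], A[2])
```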

Question 9.  (a) If A is non-singular then A is an isomorphism, which implies that A is one-to-one, and we have already established that A is one-to-one \iff \mathcal{N}(A) = \{\vec{0}\}.

(b) If A is non-singular then A is an isomorphism, which implies that A is onto; this is equivalent to the range space of A coinciding with the entire codomain \mathbb{R}^n.

Question 10. (a) h applied to the given matrix is the polynomial x^5 + 2x^4 + 2x^3 + x^2 + 2x + 5.

(b) (skipped for now)

(c) Suppose A \in \mathcal{M}_{2\times 3} satisfies h(A) = 0.  This means that the six entries of A must satisfy the equations

a = 0

b + c = 0

a + d = 0

e = 0

b + c = 0

f = 0

One can encode this linear system as an augmented matrix and row reduce.  Doing so reveals one free variable, say c (with b = -c and all other variables equalling zero).  The null space is one-dimensional since one finds

\mathcal{N}(h) = \left\{ c\left(\begin{array}{ccc} 0 & -1 & 1 \\ 0 & 0 & 0 \end{array}\right) : c \in \mathbb{R} \right\}

(d) Since the nullity of h is one, the Rank-Nullity Theorem implies that the rank of h equals \text{dim}\,\mathcal{M}_{2\times 3} - 1 = 5.

(e) From part (d) above, we know that h is not onto, and so it is possible that this polynomial is not in the range space.  Indeed, since the coefficient in front of the x^4 term does not equal the one in front of the x term, this polynomial is not an output of h.

Question 11. (a) \vec{v} is an eigenvector with eigenvalue \lambda for the matrix A means that \vec{v} \neq \vec{0} and that A\vec{v} = \lambda\,\vec{v}.

(b) First note that by definition \vec{0} \in E_{\lambda}.  Moreover, suppose \vec{v} \in E_{\lambda}.  This means that A \vec{v} = \lambda \vec{v}.  Let c \in \mathbb{R}, then c\vec{v} \in E_{\lambda} since A(c\vec{v}) = cA\vec{v} = c\lambda\,\vec{v} = \lambda\,(c\vec{v}).  Lastly, suppose \vec{v}, \vec{w} \in E_{\lambda}.  Then

A(\vec{v}+\vec{w}) = A\vec{v} + A\vec{w} = \lambda\vec{v} + \lambda\vec{w} = \lambda(\vec{v}+\vec{w})

which shows that (\vec{v}+\vec{w}) \in E_{\lambda}.

E_{\lambda} is therefore a subspace of \mathbb{R}^n.

(c) As we learned (or will learn) in our FOM classes, two sets are equal when each is a subset of the other.  Therefore, to prove that \mathcal{N}(A) = E_0 we can first show that \mathcal{N}(A) \subseteq E_0 and then show that E_0 \subseteq \mathcal{N}(A).

To prove that \mathcal{N}(A) \subseteq E_0 let \vec{v} \in \mathcal{N}(A).  By definition of null space this means that

A\vec{v} = \vec{0}

which can be rewritten as

A\vec{v} = \vec{0} = 0\vec{v}

which implies that \vec{v} is an eigenvector with eigenvalue \lambda = 0 (or that \vec{v} = \vec{0}).  That is, \vec{v} \in E_0, as desired.

To prove that E_0 \subseteq \mathcal{N}(A), let \vec{v} \in E_0.  By the given definition this means that either \vec{v} = \vec{0} or that A\vec{v} = \lambda\vec{v} for the specified scalar \lambda = 0 \in \mathbb{R}.  In the first case, \vec{v} =\vec{0} \in \mathcal{N}(A) since A\vec{0} = \vec{0}.  In the second case we have

A\vec{v} = \lambda\vec{v} = 0\vec{v} = \vec{0}

and so \vec{v} \in \mathcal{N}(A).

Note: Although proving two sets are equal can always be accomplished by demonstrating that one is a subset of the other (in fact, this is the definition of set equality), sometimes this can be demonstrated by concluding that each set can be described in exactly the same way.  This can be done for this problem (and probably feels like a shorter solution).  By definition of E_0 we have that

E_0 = \{ \vec{v} \neq \vec{0} : A\vec{v} = \vec{0} \} \cup \{\vec{0}\} = \{\vec{v} : A\vec{v} = \vec{0}\} = \mathcal{N}(A).

Question 12. (a) One finds that

h(x, y, z) = (x + 2y + 3z, 4x + 5y + 7z)^T = \left(\begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 7 \end{array}\right)\,\left(\begin{array}{c} x \\ y \\ z \end{array}\right)

(b) By the definition of matrix representation, we have that

f(\vec{v}) = \left(\text{Rep}_{\mathcal{D}}\right)^{-1}\,\left(\left(\begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 7 \end{array}\right)\text{Rep}_{\mathcal{E}_3}\vec{v}\,\right)

Given a vector \vec{v} = (x, y, z)^T we first need to compute \text{Rep}_{\mathcal{E}_3}\vec{v}.  As discussed in class, this representation is easy:

\vec{v} = \left(\begin{array}{c} x \\ y \\ z \end{array}\right) = \text{Rep}_{\mathcal{E}_3}\vec{v}.

We then multiply the given matrix against this vector to obtain

\left( \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 7 \end{array}\right) \, \left(\begin{array}{c} x \\ y \\ z \end{array}\right) = \left(\begin{array}{c} x + 2y + 3z \\ 4x + 5y + 7z \end{array}\right)

We now apply \left(\text{Rep}_{\mathcal{D}}\right)^{-1}; that is, we use the two entries of this output as the coordinates of f(\vec{v}) with respect to \mathcal{D}:

\left( \begin{array}{c} x + 2y + 3z \\ 4x + 5y + 7z \end{array}\right) \longmapsto (x + 2y + 3z) \left(\begin{array}{c} 1 \\ 1 \end{array}\right) + (4x + 5y + 7z)\left(\begin{array}{c} 3 \\ 4 \end{array}\right)

= \left( \begin{array}{c} 13x + 17y + 24z \\ 17x + 22y + 31z \end{array}\right)
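
(Equivalently, since \text{Rep}_{\mathcal{E}_3} is the identity and \left(\text{Rep}_{\mathcal{D}}\right)^{-1} just multiplies coordinates by the matrix D whose columns are the basis vectors of \mathcal{D}, the whole composite f is the single matrix D\,M.  A numpy check that this reproduces the answer above:)

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 7.0]])
D = np.array([[1.0, 3.0],
              [1.0, 4.0]])   # columns: the basis D = {(1,1)^T, (3,4)^T}

# The combined matrix D M should match the coefficients computed above.
assert np.allclose(D @ M, np.array([[13.0, 17.0, 24.0],
                                    [17.0, 22.0, 31.0]]))

# And f(v) = D (M v) = (D M) v for any input v.
v = np.array([1.0, -2.0, 0.5])
assert np.allclose(D @ (M @ v), (D @ M) @ v)
```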

Question 13. (a) To determine if the given set of vectors is linearly independent or linearly dependent we need to determine which scalars c_1, c_2, c_3 \in \mathbb{R} can be used to solve

c_1\left(\begin{array}{c} 1 \\ 1 \\ 1 \end{array}\right) + c_2\left(\begin{array}{c} 1 \\ 1 \\ 0 \end{array}\right) + c_3\left(\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right) = \left(\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\right).

This vector equation corresponds to a linear system of equations (in the variables c_i) that can be encoded by the matrix

\left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \end{array}\right).

This augmented matrix can be row-reduced to the matrix

\left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right)

from which we conclude that c_3 = 0, c_2 = 0 and c_1 = -c_2 - c_3 = 0.  It follows that the set of three vectors is linearly independent.

(b) The given set of polynomials corresponds to the set of vectors from part (a) under the isomorphism \text{Rep}_{\mathcal{E}_3} : \mathcal{P}_2 \to \mathbb{R}^3 where an arbitrary degree-2 (or less) polynomial is mapped to

\left(ax^2 + bx + c\right) \mapsto \left(\begin{array}{c} a \\ b \\ c \end{array}\right)

As discussed in class and in our textbook, representation functions are isomorphisms.  This implies that, since the image of the given set of polynomials under this isomorphism is a basis for \mathbb{R}^3, the given set of polynomials is also a basis for \mathcal{P}_2.

(c) To solve for the null space of A, one solves the equation A\vec{x} = \vec{0} for the input vector \vec{x}.  This leads to a linear system which can be encoded by the augmented matrix

\left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \end{array}\right).

This augmented matrix was already handled in part (a); the only solution possible is the zero vector \vec{0} which implies that \mathcal{N}(A) = \{\vec{0}\}.

(d) We can use part (c ) and the rank nullity theorem to conclude that \dim \mathcal{R}(A) = 3 = \dim \mathbb{R}^3 \Rightarrow A is onto.

(e) One can note that the set S \subset \mathbb{R}^3 from part (a) coincides with the set of outputs

\{ A\vec{e}_1, A\vec{e}_2, A\vec{e}_3 \}

and this set of vectors was proven to be linearly independent.  Three linearly independent vectors in \mathbb{R}^3 automatically form a basis, and so the columns of A form a basis for \mathcal{R}(A) = \mathbb{R}^3.

(f) Since A is both one-to-one and onto it is an isomorphism and so is invertible.  This means its columns (and its rows) are linearly independent which implies that \det A \neq 0.

Alternatively, one can explicitly compute that \det A = -1.
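
(A quick numpy confirmation of parts (a), (c), (d) and (f):)

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])

# Full rank: the columns are linearly independent, N(A) = {0}, and A is onto.
assert np.linalg.matrix_rank(A) == 3

# Part (f): det A = -1, so A is invertible.
assert np.isclose(np.linalg.det(A), -1.0)
```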

Question 14.  (a) More generally we can prove that given any n \times n matrix A the set

S_A = \{B \in \mathcal{M}_{n\times n} : AB = BA \}

is a subspace of \mathcal{M}_{n\times n}.  First note that the zero matrix is always an element of S_A since 0A = 0 = A0.  Next, suppose B_1, B_2 \in S_A.  We then have

(B_1 + B_2)A = B_1A + B_2A = AB_1 + AB_2 = A(B_1 + B_2)

and so B_1+B_2 \in S_A.  Finally, if B \in S_A and c \in \mathbb{R} then

(cB)A = cBA = cAB = A(cB)

which implies that cB \in S_A.  Therefore S_A is a subspace.

(b) For the specific 2\times 2 matrix A given in the problem we have that an arbitrary matrix B \in \mathcal{M}_{2\times 2} is an element of S_A when

(AB = BA) \iff \left( \begin{array}{cc} 1 & 1 \\ 2 & 4 \end{array}\right)\,\left(\begin{array}{cc} a & b \\ c & d \end{array}\right) = \left(\begin{array}{cc} a & b \\ c & d \end{array}\right)\,\left( \begin{array}{cc} 1 & 1 \\ 2 & 4 \end{array}\right)

This matrix equation produces the following equations

a + c = a + 2b

b + d = a + 4b

2a + 4c = c + 2d

2b + 4d = c + 4d

These can be rewritten as a homogeneous linear system in a, b, c, d and encoded by the augmented matrix

\left( \begin{array}{cccc|c} 0 & 2 & -1 & 0 & 0 \\ 1 & 3 & 0 & -1 & 0 \\ -2 & 0 & -3 & 2 & 0 \\ 0 & -2 & 1 & 0 & 0 \end{array}\right)\to \left(\begin{array}{cccc|c} 1 & 3 & 0 & -1 & 0 \\0 & 2 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right)

(Note that the first and fourth equations are the same, and the third is a consequence of the other two.)  This tells us that c and d are both free, which implies that the dimension of this subspace is \text{dim}\,S_A = 2.  In particular, we have that b = (1/2)c and a = d - (3/2)c, so that

S_A = \left \{ c\left(\begin{array}{cc} -3/2 & 1/2 \\ 1 & 0 \end{array}\right) + d\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) : c, d \in \mathbb{R} \right \}

As a sanity check, S_A must contain both I_2 and A itself (every matrix commutes with the identity and with itself), and indeed I_2 corresponds to (c, d) = (0, 1) while A corresponds to (c, d) = (2, 4).
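
(It is also easy to check numerically that every member of this two-parameter family commutes with A, and that a matrix outside it generally does not.  A short numpy sketch:)

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 4.0]])
B0 = np.array([[-1.5, 0.5],
               [1.0, 0.0]])
I2 = np.eye(2)

# Any B = c*B0 + d*I2 should commute with A.
rng = np.random.default_rng(7)
c, d = rng.standard_normal(2)
B = c * B0 + d * I2
assert np.allclose(A @ B, B @ A)

# A matrix outside the family, e.g. E12, does not commute with A.
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert not np.allclose(A @ C, C @ A)
```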

(c) There are lots of matrices M with S_M = \mathcal{M}_{2\times 2}.  For instance, one could use the zero matrix M = 0, the identity matrix M = I_2, or, in fact, any multiple of the identity matrix M = cI_2.

Question 15.  (a) Suppose h:V\to W is linear.  The range space of h is defined by

\mathcal{R}(h) = \{h(\vec{v}) : \vec{v} \in V \}.

Since h(\vec{0}) = \vec{0}, the zero vector is an element of the range space.  Next, suppose c \in \mathbb{R} and \vec{w} \in \mathcal{R}(h).  Then

\vec{w} \in \mathcal{R}(h) \Rightarrow \vec{w} = h(\vec{v}) for some \vec{v} \in V

and so c\vec{w} = ch(\vec{v}) = h(c\vec{v}), which shows that c\vec{w} \in \mathcal{R}(h).

Similarly, suppose \vec{w}_1, \vec{w}_2 \in \mathcal{R}(h).  This means that there exist \vec{v}_1, \vec{v}_2 \in V so that h(\vec{v}_i) = \vec{w}_i.  By the linearity of h we have that

h(\vec{v}_1+\vec{v}_2) = h(\vec{v}_1) + h(\vec{v}_2) = \vec{w}_1 + \vec{w}_2

and so (\vec{w}_1+\vec{w}_2) \in \mathcal{R}(h).

(b) The null space of h \in \mathcal{L}(V, W) is defined as \mathcal{N}(h) = \{\vec{v}\in V : h(\vec{v}) = \vec{0}\}.  First note that \vec{0}_V \in \mathcal{N}(h) since h(\vec{0}) = \vec{0}.  Next, suppose \vec{v}_1, \vec{v}_2 \in \mathcal{N}(h) and suppose c_1, c_2 \in \mathbb{R}.  Then h(c_1\vec{v}_1 + c_2\vec{v}_2) = c_1h(\vec{v}_1) + c_2h(\vec{v}_2) = c_1\vec{0}+c_2\vec{0} = \vec{0}.  Hence c_1\vec{v}_1 + c_2\vec{v}_2 \in \mathcal{N}(h).

(c ) Let U \subseteq V be a subspace.  We want to show that the set

h(U) = \{h(\vec{u}) : \vec{u} \in U\} \subseteq W

is a subspace of W.  First note that h(\vec{0}) = \vec{0} and, since U is a subspace, \vec{0} \in U.  Therefore \vec{0} = h(\vec{0}) \in h(U).

Next, suppose \vec{w}_1, \vec{w}_2 \in h(U) and let c_1, c_2 \in \mathbb{R}.  By the definition of h(U) this means that \, \exists \, \vec{u}_1, \vec{u}_2 \in U such that h(\vec{u}_1) = \vec{w}_1 and h(\vec{u}_2) = \vec{w}_2.  Observe that

c_1\vec{w}_1 + c_2\vec{w}_2 = c_1h(\vec{u}_1) + c_2h(\vec{u}_2) = h(c_1\vec{u}_1 + c_2\vec{u}_2)

which implies that c_1\vec{w}_1 + c_2\vec{w}_2 \in h(U) since, by virtue of U being a subspace, c_1\vec{u}_1 + c_2\vec{u}_2 \in U.

(d) Let S \subseteq W be a subspace.  The set

S' = \{\vec{v} \in V : h(\vec{v}) \in S\}

can be shown to be a subspace.  First, \vec{0} \in S' since \vec{0} \in S (because S is a subspace) and h(\vec{0}) = \vec{0}.

Next, suppose \vec{v}_1, \vec{v}_2 \in S' and let c_1, c_2 \in \mathbb{R}.  By definition of S' we have h(\vec{v}_1), h(\vec{v}_2) \in S, and since S is a subspace of W it follows that c_1h(\vec{v}_1) + c_2h(\vec{v}_2) \in S.  Because h is linear this expression equals

h(c_1\vec{v}_1 + c_2\vec{v}_2) = c_1h(\vec{v}_1) + c_2h(\vec{v}_2) \in S

and so c_1\vec{v}_1 + c_2\vec{v}_2 \in S'.
