Linear Assignment 5

Here is your fifth assignment.  Enjoy!

(Some) Solutions.

Problem 1.  To show that the given function P is linear, we check the two defining properties of a linear map.  First, let \vec{v}_1 = (x_1, y_1, z_1)^T and \vec{v}_2 = (x_2, y_2, z_2)^T be arbitrary elements of \mathbb{R}^3.  Then, according to the definition of P, we have that

P(\vec{v}_1 + \vec{v}_2) = P(x_1+x_2, y_1+y_2, z_1+z_2) = (x_1+x_2, y_1+y_2, 0)^T = (x_1, y_1, 0)^T + (x_2, y_2,0)^T = P(x_1, y_1, z_1) + P(x_2, y_2, z_2) = P(\vec{v}_1)+P(\vec{v}_2).

Similarly, let c \in \mathbb{R} be arbitrary.  Then

P(c\cdot\vec{v}_1) = P(cx_1, cy_1, cz_1) = (c x_1, c y_1, 0)^T = c\cdot(x_1, y_1, 0)^T = c\cdot P(\vec{v}_1).

Problem 2.  There are many ways to show that the given function is not linear.  Probably the shortest is to note that f(0) = 5\cdot0+3 = 3 \neq 0.  Homomorphisms between vector spaces must send the zero vector of the domain to the zero vector of the codomain, and since this does not happen for our function, it cannot be linear.
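For instance, writing f(x) = 5x + 3 (which is what the computation above suggests the function must be), one can also watch additivity fail directly:

f(1 + 1) = f(2) = 13 \neq 16 = f(1) + f(1).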

Problem 3.  One can verify that the given function T is, in fact, linear.

Problem 4.  (a)  The identity function is easily seen to be linear since

\text{id}_V(\vec{v}+\vec{w}) = \vec{v}+\vec{w} = \text{id}_V(\vec{v})+\text{id}_V(\vec{w}) and

\text{id}_V(c\cdot\vec{v}) = c\cdot\vec{v} = c\cdot\text{id}_V(\vec{v})

where \vec{v}, \vec{w} \in V and c \in \mathbb{R} are all arbitrary.

(b)  Similarly, one can show that the zero homomorphism (between any two vector spaces) is always linear:

Z(\vec{v}_1 + \vec{v}_2) = 0_W = 0_W + 0_W = Z(\vec{v}_1) + Z(\vec{v}_2) and

Z(c\cdot \vec{v}_1) = 0_W = c \cdot 0_W = c\cdot Z(\vec{v}_1)

where \vec{v}_1, \vec{v}_2 \in V and c \in \mathbb{R} are arbitrary.

Problem 5.  (a) T(x,y,z) = (x+y+z, 2x+4y+5z, x+3y+4z)^T, and this is easily checked to satisfy both linearity conditions.

(b) Given the formula for T from part (a), one finds that the three vectors (1, 2, 1)^T, (1, 4, 3)^T and (1, 5, 4)^T span the range space of T.  To determine the dimension of this space, then, we need to determine how many of these vectors are linearly independent.  To answer this question, we set up the single vector equation c_1(1, 2, 1)^T + c_2(1, 4, 3)^T + c_3(1, 5, 4)^T = (0, 0, 0)^T and solve for the unknown coefficients c_i.  Doing so produces a system of equations that can be encoded by the augmented matrix

\left(\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 2 & 4 & 5 & 0 \\ 1 & 3 & 4 & 0 \end{array}\right)

Standard row reduction confirms that this matrix is row-equivalent to one with a row of 0's at the bottom, indicating one free variable and two leading ones.  This implies that only two of the three vectors are linearly independent (indeed, one can check that any two of them are), and so one basis for the image of T is \{(1, 2, 1)^T, (1, 4, 3)^T\}.
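If you would like a computer sanity check of this rank computation, here is a minimal SymPy sketch (assuming SymPy is available; this is not part of the written solution):

from sympy import Matrix

# Columns are the three spanning vectors of the range of T from part (a).
A = Matrix([[1, 1, 1],
            [2, 4, 5],
            [1, 3, 4]])

print(A.rref())   # the reduced row echelon form has exactly two pivots
print(A.rank())   # prints 2, the dimension of the range space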

(c) To find a basis for the kernel (aka null space) of T, we need to determine those vectors \vec{v} that satisfy T(\vec{v}) = \vec{0}.  Using our formula for T, this gives us a linear system of equations to solve, and this linear system can be encoded by the augmented matrix

\left(\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 2 & 4 & 5 & 0 \\ 1 & 3 & 4 & 0 \end{array}\right)

This is the same system / augmented matrix that we used in part (b)!  After row reducing, we can turn this augmented matrix into

\left(\begin{array}{ccc|c}1 & 1 & 1 & 0 \\ 0 & 2 & 3 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right).

From this we learn that c_3 is free, while c_2 = -(3/2)c_3 and c_1 = (1/2)c_3.  This tells us that the single vector (1/2, -3/2, 1)^T forms a basis for the null space of T.
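Again, an optional SymPy sketch confirms the kernel computation:

from sympy import Matrix

A = Matrix([[1, 1, 1],
            [2, 4, 5],
            [1, 3, 4]])

# nullspace() returns a basis for the kernel as a list of column vectors.
print(A.nullspace())   # a single vector, namely (1/2, -3/2, 1)^T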

(d) The Rank-Nullity Theorem tells us that \text{nullity}(T) + \text{rank}(T) = \text{dim}\,\mathbb{R}^3, so that \text{nullity}(T) + \text{rank}(T) = 3.  From parts (b) and (c) we found that \text{rank}(T) = 2 and \text{nullity}(T) = 1.  This checks out since 1 + 2 = 3.

Problem 6.  (a)  The point of this part of the problem is to demonstrate that, as mentioned in class and in our reading, a homomorphism is determined by its action on a basis.  Indeed, in this part of the problem we were not given a formula for the function T, but we were told what it did to a basis (in fact the standard basis) for \mathbb{R}^2.  From this given information we find

T(a, b) = aT(1,0) + bT(0,1) = (9a, 4a)^T + (2b, -5b)^T = (9a+2b, 4a-5b)^T.

(b) The matrix this problem is looking for is, in fact, the matrix representation of T:\mathbb{R}^2\to\mathbb{R}^2 with respect to the bases \mathcal{B} = \mathcal{E}_2 for V = \mathbb{R}^2 and \mathcal{D} = \mathcal{E}_2 for W = \mathbb{R}^2.

In any event, the matrix obtained is given by

\text{Rep}_{\mathcal{E}_2, \mathcal{E}_2}(T) = \left(\begin{array}{cc} 9 & 2 \\ 4 & -5 \end{array}\right)
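As a quick sanity check (a small Python sketch, not required by the problem), multiplying this matrix by a coordinate vector (a, b)^T reproduces the formula from part (a):

import numpy as np

A = np.array([[9, 2],
              [4, -5]])

a, b = 3.0, 7.0                             # any sample input will do
print(A @ np.array([a, b]))                 # the matrix representation applied to (a, b)^T
print(np.array([9*a + 2*b, 4*a - 5*b]))     # the formula from part (a); the two agree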

Problem 7.  (a) No, there are no injective homomorphisms h \in \mathcal{L}(\mathbb{R}^3, \mathbb{R}^2).  This follows from the Rank-Nullity theorem.  Such a function would be injective if and only if its nullity equals zero, but then this would force the rank to equal 3.  The rank of such a homomorphism cannot possibly equal 3 since the range space is in this instance a subspace of \mathbb{R}^2.

(b) One can similarly use the Rank Nullity theorem to show that no such function can be onto.

Problem 8.  (a) The given homomorphism has as its domain the 12-dimensional vector space \mathcal{M}_{3\times 4}.  Because we are told that the homomorphism L is surjective, we know that its range space equals the codomain \mathbb{R}^7.  In particular, this means that \text{rank}(L) = 7.

(b) By the Rank-Nullity Theorem and part (a), it must follow that

\text{dim}\,\mathcal{N}(L) = \text{dim}\,\mathcal{M}_{3\times 4} - \text{rank}(L) = 12 - 7 = 5.

Problem 9.  (skipped)

Problem 10.  (a) There are lots of answers for this part, but perhaps the most obvious choice is to use the function T:\mathcal{M}_{2\times 2} \to \mathbb{R}^4 given by

T\left( \, \left(\begin{array}{cc} a & b \\ c & d \end{array}\right) \,\right) = (a, b, c, d)^T.

It is straightforward to argue that this function, T, is linear.  To show that T is one-to-one, suppose M_1, M_2 \in \mathcal{M}_{2\times 2} and that T(M_1) = T(M_2).  If we describe the inputs as

M_1 = \left(\begin{array}{cc} a & b \\ c & d \end{array}\right)

M_2 = \left(\begin{array}{cc} x & y \\ z & w \end{array}\right)

then assuming T(M_1) = T(M_2) means that (a, b, c, d)^T = (x, y, z, w)^T.  From this we see that a = x, b = y, c = z and d = w.  Of course, from these equations we learn that M_1 = M_2, and so T is one-to-one.

To show that T is onto, let (a, b, c, d)^T \in \mathbb{R}^4 be any vector.  We must show that there exists a matrix M \in \mathcal{M}_{2\times2} so that T(M) = (a, b, c, d)^T.  Given our formula, though, this is easily accomplished; simply set

M = \left(\begin{array}{cc} a & b \\ c & d \end{array}\right).
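If it helps to see this isomorphism computationally, T is essentially NumPy's flatten, and its inverse is reshape (an illustrative sketch only):

import numpy as np

M = np.array([[1, 2],
              [3, 4]])

v = M.flatten()              # T(M) = (a, b, c, d)^T
M_back = v.reshape(2, 2)     # the inverse map, from R^4 back to 2x2 matrices

print(v)                            # [1 2 3 4]
print(np.array_equal(M, M_back))    # True: T is invertible, hence an isomorphism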

(b) There are many choices for this part of the problem, too.  Here is one obvious linear map that is not an isomorphism:

Z(a, b, c, d) = \left(\begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right).

This zero map was already shown to (always) be linear in a previous problem, but it is neither onto nor one-to-one: it is not onto because the only output it ever produces is the zero matrix, and it is not one-to-one because every input is sent to that same output.

Problem 11.  (a) m = 3.  (b) n = 4.

(c) The range space of T is spanned by the four vectors (1, 2, 1)^T, (2, 4, 2)^T, (1, 3, -1)^T and (1, 6, -7)^T.  Since these are vectors in the three-dimensional space \mathbb{R}^3, it follows that they are necessarily linearly dependent.  To determine what the image of T is isomorphic to, we need to determine a basis for its range space, and so we need to know how many of these vectors must be eliminated to form a linearly independent set.

Since the second vector is a multiple of the first vector, we need only check the linear independence of the vectors (1, 2, 1)^T, (1, 3, -1)^T, and (1, 6, -7)^T.  This check can be encoded in an augmented matrix

\left( \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 2 & 3 & 6 & 0 \\ 1 & -1 & -7 & 0 \end{array}\right) \to \left(\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & -1 & -4 & 0 \\ 0 & 2 & 8 & 0 \end{array}\right) \to \left(\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & 1 & 4 & 0 \\ 0 & 0 & 0 & 0\end{array}\right)

This tells us that only two of these vectors are linearly independent, and so \text{dim}\mathcal{R}(T) = 2.  This means that the image of T is isomorphic to \mathbb{R}^2.
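Here is a short SymPy sketch of the same computation (assuming, as above, that these four vectors are the columns of the matrix of T):

from sympy import Matrix

# Columns are the four spanning vectors of the range of T.
A = Matrix([[1, 2, 1, 1],
            [2, 4, 3, 6],
            [1, 2, -1, -7]])

print(A.rank())             # prints 2, so dim R(T) = 2
print(len(A.nullspace()))   # prints 2, the dimension of the kernel (compare part (d))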

(d) By the Rank-Nullity Theorem, \text{nullity}(T) = \text{dim}\,\mathbb{R}^4 - \text{rank}(T) = 4 - 2 = 2, and so the kernel of T is isomorphic to \mathbb{R}^2.

Problem 12.  For this problem, we are given only one basis to use.  Because the domain V = \mathcal{P}_3 equals the codomain W = \mathcal{P}_3, this means we are supposed to use this same basis for both the domain vector space and the codomain vector space.  In particular, this question is asking us to compute the matrix representation

\text{Rep}_{\mathcal{B}, \mathcal{B}}(D)

where \mathcal{B} = \{1, x, x^2, x^3\}.  This matrix is computed by first evaluating

D(1) = 0, D(x) = 1, D(x^2) = 2x, \text{ and } D(x^3) = 3x^2.  One then computes the representation of these output vectors using the basis for the codomain.  This gives

\text{Rep}_{\mathcal{B}}\,D(1) = (0, 0, 0, 0)^T

\text{Rep}_{\mathcal{B}}\, D(x) = (1, 0, 0, 0)^T

\text{Rep}_{\mathcal{B}}\, D(x^2) = (0, 2, 0, 0)^T

\text{Rep}_{\mathcal{B}}\,D(x^3) = (0, 0, 3, 0)^T.

These output vectors are used as the columns for the desired matrix.  As a result we find

\text{Rep}_{\mathcal{B}, \mathcal{B}}(D) = \left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{array}\right)
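To see this matrix in action (an optional SymPy sketch), apply it to the coordinate vector of a sample cubic and compare with the actual derivative:

from sympy import Matrix, symbols, diff

x = symbols('x')

RepD = Matrix([[0, 1, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 3],
               [0, 0, 0, 0]])

# p(x) = 4 + 3x - 2x^2 + 5x^3 has coordinates (4, 3, -2, 5)^T with respect to B.
coords = Matrix([4, 3, -2, 5])

print(RepD * coords)                        # (3, -4, 15, 0)^T
print(diff(4 + 3*x - 2*x**2 + 5*x**3, x))   # 15*x**2 - 4*x + 3, i.e. the same coordinates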

Problem 13.  It is straightforward to compute that

\text{Rep}_B \left(\begin{array}{c} 1 \\ 2 \\ 3 \end{array}\right) = \left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right).

To determine the representation of the vector (-1, 3, 0)^T in the basis B we have to solve a system of equations, namely

c_1 \left(\begin{array}{c} 1 \\ 2 \\ 3 \end{array}\right) + c_2\left(\begin{array}{c} 1 \\ 2 \\ 0 \end{array}\right) + c_3\left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right) = \left(\begin{array}{c} -1 \\ 3 \\ 0 \end{array}\right).

One finds that c_1 = 0, c_2 = 3/2 and c_3 = -5/2 so that

\text{Rep}_B \left( \begin{array}{c} -1 \\ 3 \\ 0 \end{array}\right) = \left( \begin{array}{c} 0 \\ 3/2 \\ -5/2 \end{array}\right).
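The coordinates can also be checked numerically (a small NumPy sketch, with the basis vectors of B placed as the columns of a matrix):

import numpy as np

B = np.array([[1, 1, 1],
              [2, 2, 0],
              [3, 0, 0]], dtype=float)    # columns are the basis vectors of B

print(np.linalg.solve(B, np.array([-1.0, 3.0, 0.0])))   # roughly [0, 1.5, -2.5]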

Problem 14.  Skipped.

Problem 15.  Writing \mathcal{B} = \langle \vec{v}_1, \ldots, \vec{v}_n \rangle for the basis of the domain, the matrix representation is given by

\text{Rep}_{\mathcal{B}, \mathcal{D}}\,(f) = \left( \begin{array}{cccc} | & | & \cdots & | \\ \text{Rep}_{\mathcal{D}} \,f(\vec{v}_1) & \text{Rep}_{\mathcal{D}} \, f(\vec{v}_2) & \cdots & \text{Rep}_{\mathcal{D}} \, f(\vec{v}_n) \\ | & | &\cdots & | \end{array}\right)
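For the concrete case V = \mathbb{R}^n and W = \mathbb{R}^m, this recipe is easy to carry out by computer.  Here is a rough NumPy sketch (the names rep_matrix, B_vectors, and D_matrix are made up for illustration): apply f to each basis vector of \mathcal{B}, find the \mathcal{D}-coordinates of the result by solving a linear system, and use those coordinate vectors as the columns.

import numpy as np

def rep_matrix(f, B_vectors, D_matrix):
    # B_vectors: a list of basis vectors for the domain
    # D_matrix: a matrix whose columns are the basis vectors of the codomain
    # Column j of the representation is Rep_D(f(v_j)), found by solving D_matrix * x = f(v_j).
    columns = [np.linalg.solve(D_matrix, f(v)) for v in B_vectors]
    return np.column_stack(columns)

# Example: the map T from Problem 6 with the standard basis of R^2 on both sides.
T = lambda v: np.array([9*v[0] + 2*v[1], 4*v[0] - 5*v[1]], dtype=float)
E2 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(rep_matrix(T, E2, np.eye(2)))   # recovers [[9, 2], [4, -5]]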

Problem 16.  Skipped (this was done in class).
