This week we will begin our discussion and exploration of abstract Vector Spaces. As mentioned in class, these are “places” where linear combinations make sense. More precisely, a vector space (over the real numbers) is a set, $V$, equipped with two operations, one called addition (or vector addition) and one called scalar multiplication. We usually denote these operations as follows:
$$\vec{u} + \vec{v} \qquad \text{and} \qquad r \cdot \vec{v},$$
where $\vec{u}, \vec{v} \in V$ and $r \in \mathbb{R}$. In order for these operations to feel and work as meaningfully as they do in $\mathbb{R}^n$, we require that they obey ten rules or axioms. These are conveniently outlined in Chapter 2 of our textbook, which you should be reading thoroughly.
When we want to check that a proposed set with operations is a vector space over $\mathbb{R}$, we must carefully check each and every one of the ten axioms, so this can be a lengthy process.
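To get a feel for what checking axioms involves, here is a short Python sketch that spot-checks two of the ten axioms for $\mathbb{R}^2$ with its usual operations. This is purely illustrative (the function names `add` and `scale` are my own), and testing a few sample values is never a substitute for a proof:

```python
# Spot-check two of the ten axioms for R^2 with the usual operations.
# This only tests a few sample values -- the official verification must
# argue that the axioms hold for EVERY choice of vectors and scalars.

def add(u, v):
    # usual coordinate-wise vector addition in R^2
    return (u[0] + v[0], u[1] + v[1])

def scale(r, v):
    # usual scalar multiplication in R^2
    return (r * v[0], r * v[1])

u, v = (1.0, 2.0), (-3.0, 0.5)
r = 2.0

# Commutativity of addition: u + v = v + u
assert add(u, v) == add(v, u)

# Distributivity of scalar multiplication over vector addition:
# r(u + v) = ru + rv
assert scale(r, add(u, v)) == add(scale(r, u), scale(r, v))
```

Passing checks like these for a handful of vectors only builds confidence; the axioms must be verified for all vectors and scalars at once.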
If one starts with a rather arbitrary set and proposes arbitrary notions of $+$ and $\cdot$, it is quite likely that some of the vector space axioms will fail. That being said, our world is rich with examples of vector spaces, ones that we have worked with for quite some time.
Example 1. (Differentiable functions) In Calculus I we study functions whose derivative, $f'(x)$, can be computed. For this example, let’s consider the set
$$\mathcal{D} = \{\, f : \mathbb{R} \to \mathbb{R} \mid f'(x) \text{ exists for every } x \,\}.$$
Recall from your Calculus courses that the sum of two differentiable functions is itself differentiable since
$$(f + g)'(x) = f'(x) + g'(x),$$
and that a constant (or scalar) times a differentiable function is also differentiable since
$$(r \cdot f)'(x) = r \cdot f'(x).$$
These two observations suggest that the set of differentiable functions is a vector space under usual function addition and multiplication by constants. We would need to check all ten axioms to make this conclusion official, but [spoiler alert], it works! This set is a vector space over the reals.
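To see these two derivative rules in action, here is a small numerical Python sketch (illustrative only; it uses central-difference approximations rather than exact derivatives, and the helper `deriv` is my own):

```python
import math

# Numerically illustrate (f + g)'(x) = f'(x) + g'(x) and
# (r*f)'(x) = r*f'(x) using a central-difference approximation.

def deriv(f, x, h=1e-6):
    # approximate f'(x) by a symmetric difference quotient
    return (f(x + h) - f(x - h)) / (2 * h)

f, g = math.sin, math.exp
r, x = 3.0, 0.7

# derivative of the sum vs. sum of the derivatives
assert abs(deriv(lambda t: f(t) + g(t), x) - (deriv(f, x) + deriv(g, x))) < 1e-6

# derivative of a scalar multiple vs. scalar multiple of the derivative
assert abs(deriv(lambda t: r * f(t), x) - r * deriv(f, x)) < 1e-6
```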
This week’s assignment will also reinforce some of the computations and ideas we discussed last week, so let’s mention some of those important ideas, too.
Recall that the solution set for a linear system can always be expressed in the following form:
$$\{\, \vec{p} + c_1 \vec{h}_1 + \cdots + c_k \vec{h}_k \mid c_1, \dots, c_k \in \mathbb{R} \,\},$$
where the vector $\vec{p}$ is one particular solution to the system, and the vectors $\vec{h}_1, \dots, \vec{h}_k$ are solutions to the associated homogeneous system.
This result or observation reminds us that, by using Gauss’s method to identify leading and free variables, we can always express the solution set for a homogeneous system using a fixed number of vectors. We say that the “dimension” of the solution set equals the number of these vectors, which also equals the number of free variables associated with the system.
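As a made-up illustration (this particular system is not from class), consider the homogeneous system $x + 2y + 3z = 0$, $y + z = 0$. Gauss's method leaves $x$ and $y$ as leading variables and $z$ free, so the solution set is one-dimensional. The Python check below verifies that the single generating vector really solves both equations:

```python
# Hypothetical homogeneous system:
#   x + 2y + 3z = 0
#        y +  z = 0
# One free variable (z), so the solution set is { c*(-1, -1, 1) : c in R }
# and its "dimension" is 1, matching the single free variable.
h = (-1.0, -1.0, 1.0)

# the generating vector satisfies both equations
assert h[0] + 2 * h[1] + 3 * h[2] == 0.0
assert h[1] + h[2] == 0.0

# and so does every scalar multiple of it (one sample value shown)
c = 5.0
assert c * h[0] + 2 * (c * h[1]) + 3 * (c * h[2]) == 0.0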
From this form of the solution set we see that linear systems always have either no solutions (when no particular solution $\vec{p}$ exists), exactly one solution (when $\vec{p}$ exists and all of the homogeneous solution vectors $\vec{h}_i$ equal $\vec{0}$), or infinitely many solutions (when $\vec{p}$ exists and at least one vector $\vec{h}_i$ is nonzero).
As a result, we also see that every homogeneous system either has one unique solution (the zero vector $\vec{0}$) or infinitely many solutions. We say that a coefficient matrix is non-singular if the associated homogeneous system has exactly one solution, and we say that it is singular if it has infinitely many.
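For example (again a made-up matrix, not one from class), we can exhibit a singular matrix by producing a nonzero solution to its homogeneous system:

```python
# Hypothetical example: the matrix
#   A = [1 2]
#       [2 4]
# is singular, because the NONZERO vector v = (2, -1) solves A v = 0,
# so the homogeneous system has more than just the zero solution.
A = [[1, 2], [2, 4]]
v = (2, -1)

assert A[0][0] * v[0] + A[0][1] * v[1] == 0   # first equation:  2 - 2 = 0
assert A[1][0] * v[0] + A[1][1] * v[1] == 0   # second equation: 4 - 4 = 0
```

Since a nonzero homogeneous solution exists, so do infinitely many of its scalar multiples, which is exactly what "singular" means here.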
Finally, we also introduced the idea of a $\lambda$-weighted self-replicating solution to a linear system. As mentioned in your second assignment, we will say that a vector $\vec{v}$ is such an object for a matrix $A$ if
$$A \vec{v} = \lambda \vec{v}.$$
As an example of this, we can check that a particular vector is a $\lambda$-weighted self-replicating solution for a particular matrix.
That is, we want to check that the following two equations are true:
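Since the assignment's specific vector and matrix are not reproduced here, the sketch below uses made-up values: it checks, one equation per component of the product, that $\vec{v} = (1, 1)$ is a $3$-weighted self-replicating solution for a hypothetical matrix.

```python
# Hypothetical example (NOT the one from the assignment):
#   A = [1 2]      v = (1, 1)      weight = 3
#       [2 1]
# Check A v = 3 v, i.e. one equation per component of the product A v.
A = [[1, 2], [2, 1]]
v = (1, 1)
weight = 3

Av = (A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1])

assert Av == (weight * v[0], weight * v[1])
```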
Here is a link to your second assignment. Enjoy!