The Four Fundamental Subspaces

Joseph Jojoe
11 min read · Jun 23, 2022


Each matrix has four very important vector spaces attached to it. In this article, we explore the column space, row space, null space, and left null space ― finding basis vectors for these spaces, and determining whether or not a given vector is part of a particular space, is crucial to understanding whether systems of linear equations are solvable. Knowledge about these vector subspaces will inform us about why Ax = b and Ax = 0 often have solutions, but sometimes don’t.

Before we go further…

Representing matrix multiplication with linear combinations of columns

Matrices and vectors can be represented in a wide range of formats: using column vectors is a succinct way to write out matrix multiplications.

The column picture: matrix multiplication can be interpreted as taking linear combinations of columns.

Linear combination: an expression constructed from a set of terms by multiplying each term by a constant and adding the results. (Wikipedia)

The sum xc₁ + yc₂ is a linear combination of the columns of A; it’s a ‘weighted sum’ which appears whenever we multiply a matrix by a vector, and is an alternative interpretation to the ‘row-column’ method of matrix multiplication commonly taught in high school.

By the way, the numbers x and y have a special name ― they’re called scalars, because they ‘scale’ vectors in magnitude without changing their fundamental direction.
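If you like to check this sort of thing numerically, here is a minimal NumPy sketch (the matrix and the scalars x and y are made up purely for illustration) confirming that the row-column rule and the column picture agree:

```python
import numpy as np

# A small made-up matrix and a pair of scalars, purely for illustration.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x, y = 5.0, -1.0

# The 'row-column' method of matrix multiplication.
row_column = A @ np.array([x, y])

# The column picture: a weighted sum of the columns of A.
c1, c2 = A[:, 0], A[:, 1]
column_picture = x * c1 + y * c2

print(row_column)      # [ 3. 11.]
print(column_picture)  # [ 3. 11.]
```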

Linear independence and basis vectors

Taking linear combinations leads to the notion of linear independence; a set of vectors is linearly independent if there aren’t any vectors in the set which can be rewritten in terms of a linear combination of the other vectors. What a mouthful!

It might help to visualise linear independence geometrically: two vectors are independent precisely when they don’t lie along the same line, although this picture becomes harder to rely on in higher dimensions, where vectors pointing in different directions can still be dependent (three vectors lying in a common plane of R³, for example). As our linear algebra knowledge grows, we will tend to simply consider the different linear combinations that can be formed.

Dependent, independent, and wacky independent vectors for 2D space. Notice that dependent vectors lie on the same line and independent vectors aren’t necessarily perpendicular.

The first pair of vectors (red and blue) isn’t linearly independent because the two vectors lie on the same line; in fact, multiplying either one of them by -1 gives the other. They are linearly dependent: at least one of the vectors in this set can be written as a linear combination of the other vectors. We only had to take a scalar multiple of one vector to produce the other in this example; in most cases, you’ll need a weighted sum of several vectors to find linear dependency.

The purple vector can be formed by a linear combination of the blue and red vectors — it’s unnecessary as it doesn’t add anything new to our basis.

The purple vector p in this diagram isn’t a scalar multiple of either of the other two vectors, but it is a linear combination of them: adding the blue and red vectors b and r produces the purple vector, giving us the substitution p = b + r. Eliminating p in this manner shows that it was redundant; we already had two independent vectors for this two-dimensional space which could produce any other vector in the space through linear combinations, so including a third vector was a doomed prospect from the start.

We could have chosen to eliminate any one of the vectors in our linearly dependent set of three, although constructing a linear combination to produce b or r would be more difficult. When given a set of linearly dependent vectors, you should exercise good judgement in first eliminating vectors that are easy to find combinations for.

Linear dependence: a set of vectors is said to be linearly dependent if there is a nontrivial linear combination of the vectors that equals the zero vector. (Wikipedia)

Although it might not seem like it, this definition is in fact equivalent to our own. We said that a set of vectors {v₁, v₂, …, vₙ} is linearly dependent if a vector v from the set can be written in terms of a linear combination of the other vectors. Putting that into equation form, v = (bv₁ + cv₂ + … + zvₙ) for some scalar values b, c, …, z. We then put a scalar a in front of v without loss of generality (after all, a could equal 1) to give us our principal equation:

Our linear dependence equation: av = bv₁ + cv₂ + … + zvₙ.

Wikipedia states that a set of vectors is linearly dependent when a nontrivial (i.e., no cheating; you can’t set all the scalars to zero) linear combination of the vectors equals the zero vector. Putting that into an equation, we get:

Wikipedia’s linear dependence equation: av₁ + bv₂ + … + zvₙ = 0, where at least one of the scalars a, b, …, z is nonzero. The bold 0 indicates that we are referring to the zero vector, not the number. Thankfully, the zero vector is simply filled with zeroes anyway, so you’ll often see 0 used to refer to the number and the vector interchangeably. Look at the context to work out what 0 means!

Now, it takes only a simple proof to show that the two statements are one and the same: moving everything in our equation to one side gives a nontrivial combination equal to the zero vector (the coefficient on v is nonzero), and conversely, starting from a nontrivial combination equal to the zero vector, we can divide through by any nonzero scalar and solve for its vector in terms of the others.

It’s important to know that matrices containing dependent columns are not invertible; because some nontrivial linear combination of the columns equals the zero vector, two different input vectors can produce the same output, so there’s no way to get back to a particular vector input.

In general, n-dimensional space requires n linearly independent vectors for linear combinations to be able to produce any vector in that space. Fewer than n, and you won’t have enough ‘degrees of freedom’ to be able to explore the space fully. More, and you can eliminate dependent vectors until you’re left with just n.
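As a quick numerical sanity check, NumPy’s matrix_rank counts how many linearly independent columns a matrix has; the two matrices below are made up for illustration.

```python
import numpy as np

# Made-up example: the third column is the sum of the first two,
# so only two of the three columns are independent.
dependent = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0],
                      [2.0, 3.0, 5.0]])

# The identity matrix has three independent columns.
independent = np.eye(3)

# matrix_rank counts the number of linearly independent columns;
# a square matrix is invertible exactly when the rank equals n.
print(np.linalg.matrix_rank(dependent))    # 2
print(np.linalg.matrix_rank(independent))  # 3
```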

A set of n linearly independent vectors for n-dimensional space has a special name: we call it a basis. Finding a basis can be a difficult task in itself, but processes like Gram-Schmidt exist to generate bases from a given set of vectors. Alternatively, we can use mathematical intuition to build up a basis; unit vectors along coordinate axes would suffice, for example.
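For the curious, here is a rough sketch of classical Gram-Schmidt in NumPy: it walks through a list of vectors, throws away anything dependent on what it has already seen, and returns an orthonormal basis for the span. The vectors at the bottom are made up for illustration.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt: build an orthonormal basis for the span
    of the input vectors (a teaching sketch, not production code)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        # Subtract the component of v along each basis vector found so far.
        for q in basis:
            w = w - np.dot(q, w) * q
        # Keep w only if something nonzero remains (i.e. v was independent).
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
    return basis

# Three dependent vectors in R^3 (the third is the sum of the first two),
# so we only get two orthonormal vectors back.
vs = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [2.0, 1.0, 1.0]]
for q in gram_schmidt(vs):
    print(q)
```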

What’s a vector space?

A ‘space’ in mathematics generally refers to a collection, or set, of objects ― we’ll be working with vector spaces, where the objects in question are vectors. More exotic spaces (read: function, Hilbert, metric) also exist, but are worth exploring in their own right in future articles.

Vector spaces also have operations attached to them. In the context of linear algebra, an operation takes two inputs and produces another vector. We will define two operations on our vector space: vector addition (which combines two vectors) and scalar multiplication (which combines a scalar and a vector).

We define vector addition and scalar multiplication component-wise: (u + v)ᵢ = uᵢ + vᵢ, and (cu)ᵢ = c·uᵢ for a scalar c.

One important property of vector spaces is that they are closed under these operations. Closure is the idea that applying the operations to objects in a particular space cannot produce an object that lies outside of the space; a familiar example is the real numbers, as adding, subtracting, multiplying, or dividing reals (by anything other than zero) will never produce a non-real number. Similarly, we define a vector space to be a collection of vectors such that vector addition and scalar multiplication (and hence linear combinations) cannot produce a vector outside of our initial space.

Vector space: a set of vectors that is closed under vector addition, scalar multiplication, and therefore linear combinations.

An interesting consequence of closure is that all vector spaces contain the zero vector: for any set of vectors {v₁, v₂, …, vₙ} in the space, the linear combination (0v₁ + 0v₂ + … + 0vₙ) produces the zero vector, and closure demands that it be included in our space.

Vector subspaces, with a quick set theory refresher

A set is just a collection of objects. The natural numbers from 1 to 5, a list of people born in Connecticut, and the registration plates of every Toyota in existence are all sets.

Some sets are short enough that we can define them through writing down each member, such as S₁ = {1, 2, 3, 4, 5}; others use set-builder notation, which saves us from having to list each element (there are a lot of people in Connecticut).

Set-builder notation in action, e.g. S₁ = {x : x is a natural number and 1 ≤ x ≤ 5}; x refers to a generic member of our set, and after the colon (meaning ‘such that’) we place our conditions of membership, usually known as rules or predicates.

A subset is a set A whose elements are all also elements of some other set B; we say that B is a superset of A, correspondingly, although we won’t be using the latter term when referring to vector subspaces. Every set is also a subset of itself, as every element contained in a particular set is also an element of that same set by definition.

A is a subset of B in this diagram. (Wikipedia)

A vector subspace, also known as a linear subspace, is a vector space which is a subset of some larger vector space. By our earlier definition of what a subset is, this also means that every vector space is a subspace of itself! The set containing only the zero vector, {0}, is a key example of a vector subspace; it fits all of our criteria for being a vector space, given that it is closed under linear combinations (try making something non-zero with just the zero vector!), and it is a subset of every other vector space.

Span: the vector space formed by the set of all linear combinations of elements of a set of vectors S.

Closely linked to subspaces is the idea of span: the span of a set of vectors S is the set of all linear combinations of elements of S, which turns out to be the smallest linear subspace (i.e., another vector space) containing S. We’re steering dangerously close to mathematical word soup, so I’ll illustrate with some 2D examples.

The red vector and blue vector form a set S. Their span is a one-dimensional linear subspace (dotted green line) in two-dimensional vector space.

Take these two vectors, for instance. We’re working in a 2D plane, so each vector is also 2D; they each have two components (x and y). However, this set of vectors is linearly dependent: both vectors lie on the same line, so linear combinations can only produce a 1D linear subspace of the 2D plane. In fact, we could even eliminate one of the vectors to give us a basis for this 1D subspace, because we only need one vector to be able to produce a 1D line through scalar multiplication.
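Here is the same observation done numerically: stacking two made-up collinear vectors as columns and asking NumPy for the rank of the result gives the dimension of their span.

```python
import numpy as np

# Two made-up collinear vectors: the second is -2 times the first.
r = np.array([1.0, 2.0])
b = np.array([-2.0, -4.0])

# Stack them as columns; the rank of the matrix is the dimension of
# the span, here a 1D line sitting inside R^2.
S = np.column_stack([r, b])
print(np.linalg.matrix_rank(S))  # 1
```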

Not all spanning sets for a subspace are bases, but all bases are spanning sets.

A spanning set for 2D space. It’s not a basis, but the set of all linear combinations of these vectors fills R²; another way of saying this is that the space spanned by these vectors is R².

Transpose of a matrix

Transposing a matrix simply means that we exchange rows and columns ― the first row of our original matrix A becomes the first column of our transposed matrix Aᵀ, the second row of A becomes the second column of Aᵀ, and so on.

Examples of matrix transposition.
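In NumPy, transposition is a one-liner via the .T attribute; the matrix below is just a made-up example.

```python
import numpy as np

# Row i of A becomes column i of A's transpose.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.T)
# [[1 4]
#  [2 5]
#  [3 6]]
print(A.shape, '->', A.T.shape)  # (2, 3) -> (3, 2)
```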

The Four Fundamental Subspaces

Now that we know about linear combinations, linear independence, bases, vector spaces and subspaces, span, and the transpose, we can finally explore the four fundamental subspaces that every matrix possesses.

Column space, C(A)

The column space, denoted as C(A) for a given matrix A, is the vector space spanned by the columns of A.

Column space for different matrices. It is important to note that the column space refers to the set of all possible linear combinations of the columns, i.e. the collection of vectors produced by going through all values of a, b, and c.

The columns of A span R² and form a basis for it. The columns of B also span R², but there are three vectors in our spanning set, so it isn’t a basis. The column space of M sits somewhere in R³, but we only have two (independent) vectors in our spanning set; reminding ourselves that the number of linearly independent vectors corresponds to the ‘degrees of freedom’ our set of linear combinations has, this means that C(M) is actually a two-dimensional linear subspace of R³. Geometrically speaking, C(M) is a plane which goes through the origin.

C(M), a two-dimensional linear subspace of R³.

Remember how we previously interpreted matrix multiplication as taking linear combinations of columns? It looked very similar to our column space calculations!

Column space expressed as matrix multiplication.

The column space of a matrix A can therefore be alternatively defined as the space spanned by the result of matrix multiplications involving A. In fact, the equation Ax = b is solvable precisely when b lies within C(A), such that the matrix multiplication Ax can produce it for some vector x. Marvellous!

When is Ax = b solvable? When b is in the column space of A.
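One way to test this numerically (assuming we are happy with floating-point rank computations) is to check whether appending b to A as an extra column raises the rank; if it doesn’t, b was already a combination of the columns of A. The matrix and vectors here are made up for illustration.

```python
import numpy as np

def in_column_space(A, b):
    """Ax = b is solvable exactly when appending b as an extra column
    does not increase the rank of the matrix."""
    augmented = np.column_stack([A, b])
    return np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A)

# A made-up singular matrix: its second column is twice its first,
# so its column space is just a line inside R^2.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(in_column_space(A, np.array([3.0, 6.0])))  # True:  [3, 6] lies on that line
print(in_column_space(A, np.array([1.0, 0.0])))  # False: [1, 0] does not
```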

Row space, C(Aᵀ)

The row space of a matrix is the vector space spanned by its row vectors. It’s inconvenient to work with rows directly, so we instead transpose our matrix A and use its column space.

The row space C(Aᵀ) of a matrix A.
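SymPy is handy here because it works symbolically and returns explicit basis vectors; the sketch below, using a made-up matrix whose second row is twice its first, shows that the row space of A and the column space of Aᵀ are the same thing.

```python
import sympy as sp

# A made-up matrix whose second row is twice its first, so the
# row space is one-dimensional.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])

# Both calls describe the same one-dimensional space: one as a basis
# of column vectors of A^T, the other as a basis of row vectors of A.
print(A.T.columnspace())  # [Matrix([[1], [2], [3]])]
print(A.rowspace())       # [Matrix([[1, 2, 3]])]
```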

Null space, N(A)

The null space N(A) of a matrix, also known as the kernel, is the vector space consisting of all vectors x for which Ax = 0.

Earlier in this article, we explored the interpretation of matrix multiplication as linear combinations of columns, and the fact that linearly dependent columns have nontrivial linear combinations that equal the zero vector; we now put these concepts together to develop our understanding of the null space.

Ax = 0 only occurs when either x = 0 or when there is a nontrivial linear combination of the columns of A which gives the zero vector, which would mean that the columns are linearly dependent and A is singular. If the columns of A are independent, making A invertible, the only vector that makes Ax = 0 true is x = 0; the null space contains only the zero vector, which we deem trivial. In general, only singular matrices have a nontrivial null space.

Actually finding vectors in N(A) requires that we put A into a special format called reduced row echelon form. I’ll cover the process of finding rref(A) in a separate article, but it’s closely related to how we solve systems of linear equations through Gaussian elimination and very fun to carry out!

When is Ax = 0 solvable for nontrivial values of x? When the columns of A are linearly dependent, which is equivalent to saying that A is a singular matrix.
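SciPy can hand us an orthonormal basis for the null space directly; the matrix below is made up so that its third column is the sum of the first two, guaranteeing a nontrivial null space.

```python
import numpy as np
from scipy.linalg import null_space

# Made-up singular matrix: third column = first column + second column.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

N = null_space(A)              # columns form an orthonormal basis for N(A)
print(N.shape)                 # (3, 1): the null space is one-dimensional
print(np.allclose(A @ N, 0))   # True: every basis vector satisfies Ax = 0
```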

Left null space, N(Aᵀ)

In the same way that the row space is linked to the column space, the left null space N(Aᵀ) links to the null space; we define it to be the vector space consisting of all vectors x for which Aᵀx = 0. Taking the transpose of both sides gives the equivalent condition xᵀA = 0ᵀ, which is where the ‘left’ in the name comes from: x multiplies A from the left.
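Computationally, the left null space is just the null space of the transpose, so the same SciPy routine works; the 3×2 matrix below is made up so that its third row is the sum of the first two.

```python
import numpy as np
from scipy.linalg import null_space

# Made-up matrix with dependent rows: third row = first row + second row.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# The left null space is the null space of A^T: vectors y with A^T y = 0,
# or equivalently y^T A = 0^T.
left_null = null_space(A.T)
print(left_null.shape)                  # (3, 1): one-dimensional
print(np.allclose(left_null.T @ A, 0))  # True: y^T A = 0 for each basis vector y
```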

A final word about orthogonal complements and subspaces

Orthogonality generalises the notion of perpendicularity to higher dimensions. Two vectors u and v are orthogonal if their dot product u·v = u₁v₁ + u₂v₂ + … + uₙvₙ, where uᵢ means the i-th component of the vector u, is equal to zero. For example, (1, 2)·(2, -1) = 1·2 + 2·(-1) = 0, so those two vectors are orthogonal.

The pairs [C(A), N(Aᵀ)] and [C(Aᵀ), N(A)] are very special; the subspaces in each pair are orthogonal complements of each other, meaning that every vector in one subspace of the pair is orthogonal to every vector of the other.

Finally, it’s worth noting that the dimensions of each pair sum to the dimension of the space the pair lives in: for an m × n matrix, C(A) and N(Aᵀ) sit inside Rᵐ and their dimensions add up to m, while C(Aᵀ) and N(A) sit inside Rⁿ and add up to n. This concept can be succinctly expressed in the phrase ‘orthogonal complements fill the whole space’: taking linear combinations of basis vectors from both members of a pair allows us to construct a complete basis for the vector space they live in.
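Here is a quick numerical illustration of both facts, using SciPy’s orth and null_space on the same made-up rank-2 matrix from the null space example above:

```python
import numpy as np
from scipy.linalg import null_space, orth

# Made-up rank-2 matrix (third column = first + second).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

col_space = orth(A)           # orthonormal basis for C(A)
left_null = null_space(A.T)   # orthonormal basis for N(A^T)
row_space = orth(A.T)         # orthonormal basis for C(A^T)
null_sp   = null_space(A)     # orthonormal basis for N(A)

# Orthogonality: every C(A) basis vector is perpendicular to every N(A^T) one,
# and likewise for C(A^T) and N(A).
print(np.allclose(col_space.T @ left_null, 0))  # True
print(np.allclose(row_space.T @ null_sp, 0))    # True

# Dimensions of each pair add up to 3.
print(col_space.shape[1] + left_null.shape[1])  # 3
print(row_space.shape[1] + null_sp.shape[1])    # 3
```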

An illustration of orthogonal complements in two dimensions; every vector on the red line is orthogonal to every vector on the blue line, regardless of magnitude.

Further exploration

I always love to recommend MIT OpenCourseWare’s 18.06SC Linear Algebra course as a great introduction to linear algebra; it focuses on presenting key facts and teaching techniques, so familiarity with writing proofs isn’t needed, but 18.700 might be what you’re looking for if you prefer a rigorous approach. Gilbert Strang recaps this article’s contents in his lecture on the four fundamental subspaces.

If you’re interested in applying your knowledge of linear subspaces to fitting lines/curves to data, read my article on performing regression with linear algebra; it introduces the concept of projection matrices, which allows us to solve Ax = b by finding the closest vector to b which lies in C(A).

Georgia Tech’s interactive article on subspaces, complete with theorems and plenty of examples, is great for recapping the concepts we learned today.
