Matrix Properties
Vector Sub-spaces
$$ \begin{bmatrix} - & \mbox{Row 1} & -\\ - & \mbox{Row 2} & - \end{bmatrix} ; \begin{bmatrix} | & | \\ \mbox{Column 1} & \mbox{Column 2}\\ | & | \end{bmatrix} $$
A matrix can be viewed as an array of row vectors or as an array of column vectors. Linear combinations of these vectors form a subspace of the space they reside in, with dimension no greater than the dimension of the vectors themselves. Consider the following rectangular matrix.
$$ A= \begin{bmatrix} 3 & 6 & 9\\ 2 & 2 & 4 \end{bmatrix} $$
This matrix has three columns and two rows. The dimension of the subspace the matrix spans equals the number of linearly independent rows or columns. The column space of this matrix is 2-dimensional, because the third column is the sum of the first two. This dimension can also be found by taking the reduced row echelon form of the matrix.
$$ rref(A)= \begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 1 \end{bmatrix} $$
The matrix in reduced row echelon form has 2 pivots, the leading 1 in each nonzero row. The number of pivots \(p\) equals the dimension of the subspace spanned by the column vectors. This subspace is called the \(\textit{Column Space}\).
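This computation can be checked with a computer algebra system. The sketch below uses SymPy (assuming the `sympy` package is available); `Matrix.rref` returns the reduced row echelon form along with the pivot column indices, whose count gives the dimension of the column space.

```python
from sympy import Matrix

# The matrix A from the example above.
A = Matrix([[3, 6, 9],
            [2, 2, 4]])

# rref() returns the reduced form and the indices of the pivot columns.
R, pivots = A.rref()

print(R)       # Matrix([[1, 0, 1], [0, 1, 1]])
print(pivots)  # (0, 1) -> two pivots, so the column space is 2-dimensional
```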
The matrix can also be viewed as a set of row vectors spanning its own subspace. This subspace always has the same dimension as the column space of the same matrix; here it is also 2-dimensional. This can be verified by taking the reduced row echelon form of the transpose of matrix \(A\).
$$ A^{T}= \begin{bmatrix} 3 & 2\\ 6 & 2\\ 9 & 4 \end{bmatrix} $$
$$ rref(A^{T})= \begin{bmatrix} 1 & 0\\ 0 & 1\\ 0 & 0 \end{bmatrix} $$
The columns of \(A^T\) (the rows of \(A\)) are 3-dimensional vectors, yet they span only a 2-dimensional subspace, a plane in three-space. This subspace is known as the \(\textit{Row Space}\) of the matrix \(A\). A vector subspace need not have the same dimension as the space its vectors reside in, but spanning an entire \(n\)-dimensional space requires at least \(n\) vectors of that dimension: a 4-dimensional space requires at least four 4-dimensional vectors to span it.
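The claim that row rank equals column rank can also be verified directly in SymPy (again assuming the `sympy` package), by reducing the transpose and comparing ranks:

```python
from sympy import Matrix

A = Matrix([[3, 6, 9],
            [2, 2, 4]])

# Reduce the transpose: its pivot count gives the dimension of the row space.
R, pivots = A.T.rref()
print(R)  # Matrix([[1, 0], [0, 1], [0, 0]])

# Row rank always equals column rank.
assert A.rank() == A.T.rank() == 2
```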
Two more subspaces complete the four fundamental subspaces of a matrix: the \(\textit{Null Space}\) and the \(\textit{Left Null Space}\). Both can be derived from the reduced row form of the matrix.
Null Space and Left-Null Space
The null space and left null space are made up of null vectors: vectors that, when multiplied by the matrix, give the zero vector, the vector whose every component is zero. The zero vector itself always lies in the null space, since \(A\mathbf{0} = \mathbf{0}\), but because it is trivial it is not counted as a spanning vector of the null space. To understand the null space and left null space, consider the following example.
$$ A= \begin{bmatrix} 1 & 3\\ 2 & 6\\ 3 & 9 \end{bmatrix} $$
To find the null space of this matrix, take the reduced row echelon form of the matrix.
$$ rref(A)= \begin{bmatrix} 1 & 3\\ 0 & 0\\ 0 & 0 \end{bmatrix} $$
The column space of this matrix is 1-dimensional, and the matrix has 2 columns, so by the rank–nullity theorem the null space is \(2 - 1 = 1\)-dimensional. A single non-trivial null vector therefore spans the null space of matrix \(A\). It is found by negating the entries of the free column (a column without a pivot) and placing an identity entry in the position of the free variable.
$$ \mbox{Free Column} = \begin{bmatrix} 3\\ 0\\ 0 \end{bmatrix} \hspace{4mm} v_{null} = C\cdot \begin{bmatrix} -3\\ 1 \end{bmatrix} $$
To test the null vector, multiply it by the original matrix. Notice that the null vector can be scaled to any value. As such, a constant has been included in the null vector.
$$ Av_{null}= \begin{bmatrix} 1 & 3\\ 2 & 6\\ 3 & 9 \end{bmatrix} \begin{bmatrix} -3C\\ 1C \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix} $$
The product of a matrix with its null vector results in the zero vector. This works for any value of \(C\).
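This check can be automated with NumPy (assuming the `numpy` package is available); the product \(Av_{null}\) stays at zero for every choice of the scale constant \(C\):

```python
import numpy as np

A = np.array([[1, 3],
              [2, 6],
              [3, 9]])

# The null vector derived above, scaled by an arbitrary constant C.
C = 5.0
v_null = C * np.array([-3.0, 1.0])

# A @ v_null is the zero vector for any value of C.
print(A @ v_null)  # [0. 0. 0.]
```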
The left null space is the null space associated with the row space of the matrix; equivalently, it is the null space of \(A^T\). Like the null space, it can be derived from the reduced row form of the matrix using the free columns and identity entries.
$$ A^T = \begin{bmatrix} 1 & 2 & 3\\ 3 & 6 & 9 \end{bmatrix} $$
$$ rref(A^T)= \begin{bmatrix} 1 & 2 & 3\\ 0 & 0 & 0 \end{bmatrix} $$
The dimension of the left null space equals the number of free variables in the matrix after reduction. Here \(A^T\) has one pivot and two free columns, so the left null space is 2-dimensional.
$$ v_{\textit{left null}}= C_1\begin{bmatrix} -2\\ 1\\ 0 \end{bmatrix} + C_2\begin{bmatrix} -3\\ 0\\ 1 \end{bmatrix} $$
Either of the above null vectors, and any linear combination of them, gives the zero vector when multiplied into \(A^T\). Equivalently, \(v^T A = \mathbf{0}^T\), which is why these are called \(\textit{left}\) null vectors.
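A quick NumPy sketch (assuming the `numpy` package) confirms that each basis vector, and any combination of them, multiplies into \(A\) from the left to give zero:

```python
import numpy as np

A = np.array([[1, 3],
              [2, 6],
              [3, 9]])

# The two left-null basis vectors derived above.
v1 = np.array([-2.0, 1.0, 0.0])
v2 = np.array([-3.0, 0.0, 1.0])

# v @ A computes the row-vector product v^T A; each result is the zero vector.
print(v1 @ A)             # [0. 0.]
print(v2 @ A)             # [0. 0.]
print((2*v1 - 7*v2) @ A)  # [0. 0.]
```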
Singular Matrices
Singular matrices are common when solving linear systems. Viewed as a set of column vectors, a singular matrix has a linear dependence among its columns, so taking the reduced row form of the matrix yields a completely eliminated row, a row of zeros. A singular matrix also has a determinant of 0 and, being square, at least one eigenvalue of 0. These concepts will be further explored in the discussion of eigenspaces.
Determinant of a Matrix
The determinant is a function that maps a square matrix to a scalar. Determinants of larger matrices are increasingly tedious to compute by direct expansion, but the determinant's properties make it possible to derive a formula for any \(n\times n\) matrix.
- The determinant of an identity matrix is 1.
- Swapping two rows of a matrix changes the sign of its determinant.
- The determinant is a linear function of each row separately, with the other rows held fixed.
- Singular matrices have a determinant of 0.
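These properties can be spot-checked numerically. The sketch below uses NumPy's `numpy.linalg.det` (assuming the `numpy` package is available); the matrices are illustrative choices, not from a particular problem:

```python
import numpy as np

# Property 1: det(I) = 1.
print(np.linalg.det(np.eye(3)))  # 1.0

# Property 2: swapping two rows flips the sign of the determinant.
A = np.array([[3.0, 6.0],
              [2.0, 2.0]])
A_swapped = A[[1, 0], :]  # rows exchanged
print(np.linalg.det(A), np.linalg.det(A_swapped))  # -6.0 and 6.0

# Property 4: a singular matrix (dependent rows) has determinant 0.
S = np.array([[1.0, 3.0],
              [2.0, 6.0]])
print(np.linalg.det(S))  # ~0, up to floating-point error
```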
Consider the two fundamental cases: the \(2\times 2\) and \(3\times 3\) determinants. Beginning with a \(2\times 2\) matrix, the general formula for the determinant will be derived.
$$ A = \begin{bmatrix} a & b\\ c & d \end{bmatrix} $$
Using the determinant properties, we can separate this matrix into two pieces, each with a single nonzero entry per row and column.
$$ A = \begin{bmatrix} a & 0\\ 0 & d \end{bmatrix} + \begin{bmatrix} 0 & b\\ c & 0 \end{bmatrix} $$
The determinant is not additive over matrix sums in general, but expanding \(|A|\) one row at a time using row linearity yields the determinants of these two matrices; the remaining cross terms each contain a column of zeros and vanish. Performing a row swap on the second matrix puts every term on the primary diagonal, at the cost of a sign change. The diagonal terms can then be factored out, leaving scalar multiples of identity determinants.
$$ |A| = \left| \begin{bmatrix} a & 0\\ 0 & d \end{bmatrix} \right| - \left| \begin{bmatrix} c & 0\\ 0 & b \end{bmatrix} \right| = ad \left| \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \right| - bc \left| \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \right| = ad - bc $$
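The resulting formula \(ad - bc\) is easy to check against a numerical determinant. A quick sketch using NumPy (assuming the `numpy` package; the entry values are arbitrary):

```python
import numpy as np

a, b, c, d = 3.0, 6.0, 2.0, 2.0
A = np.array([[a, b],
              [c, d]])

# The derived formula ad - bc matches the built-in determinant.
print(a*d - b*c)         # -6.0
print(np.linalg.det(A))  # -6.0, up to floating-point error
```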
The same operations can be done for a \(3\times 3\) matrix. Expanding row by row and discarding the terms that contain a zero column leaves the six arrangements of \(M\) with exactly one entry in each row and column.
$$ M = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} $$
$$ |M| = \left| \begin{bmatrix} a & 0 & 0\\ 0 & e & 0\\ 0 & 0 & i \end{bmatrix} \right| + \left| \begin{bmatrix} a & 0 & 0\\ 0 & 0 & f\\ 0 & h & 0 \end{bmatrix} \right| + \left| \begin{bmatrix} 0 & b & 0\\ d & 0 & 0\\ 0 & 0 & i \end{bmatrix} \right| + \left| \begin{bmatrix} 0 & b & 0\\ 0 & 0 & f\\ g & 0 & 0 \end{bmatrix} \right| + \left| \begin{bmatrix} 0 & 0 & c\\ d & 0 & 0\\ 0 & h & 0 \end{bmatrix} \right| + \left| \begin{bmatrix} 0 & 0 & c\\ 0 & e & 0\\ g & 0 & 0 \end{bmatrix} \right| $$
$$ |M| = aei - afh - bdi + bfg + cdh - ceg $$
Notice how both results match the familiar determinant formulas. The \(3\times 3\) expression may not look familiar at first, but factoring out the top-row entries yields the standard cofactor expansion along the first row.
\[|M| = a(ei - fh) - b(di - fg) + c(dh - eg)\]
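The cofactor expansion can be checked against a numerical determinant. The sketch below uses NumPy (assuming the `numpy` package; the entry values are an arbitrary example):

```python
import numpy as np

a, b, c, d, e, f, g, h, i = 2., 5., 1., 4., 3., 7., 6., 9., 8.
M = np.array([[a, b, c],
              [d, e, f],
              [g, h, i]])

# Cofactor expansion along the top row, as derived above.
det_M = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

print(det_M)             # -10.0
print(np.linalg.det(M))  # agrees, up to floating-point error
```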
Higher dimensional determinant formulas can be derived using this method. Determinants can also be analyzed based on known determinants and the general properties of the determinant operation.