Franklin's Notes


Generalized null spaces

If $\mathcal{N}(A)$ is the null space of a matrix (or a linear transformation) $A$, then the null spaces $\mathcal{N}(A^k)$ of the powers of $A$ are called the generalized null spaces of $A$. That is, $\mathcal{N}(A^k)$ is the subspace consisting of vectors $v$ in a given vector space for which $A^k v = 0$. For each $k$, it is clearly the case that $\mathcal{N}(A^k)\subset \mathcal{N}(A^{k+1})$.

It is not immediately obvious that it is even possible for, say, $A^2$ to have a larger null space than $A$. As a quick and easy example, consider the matrix $$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$ which has the property that $A^2$ is the zero matrix, even though $A$ has rank $1$.
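As a quick sanity check, here is a NumPy sketch using one such matrix (the $2\times 2$ nilpotent matrix with a single $1$ above the diagonal); `numpy.linalg.matrix_rank` recovers the nullity via rank-nullity:

```python
import numpy as np

# A rank-one nilpotent matrix whose square is zero.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

rank_A  = np.linalg.matrix_rank(A)        # 1
rank_A2 = np.linalg.matrix_rank(A @ A)    # 0, since A^2 is the zero matrix

# By rank-nullity, dim N(A) = 2 - 1 = 1 while dim N(A^2) = 2 - 0 = 2,
# so the null space strictly grows when we square A.
print(rank_A, rank_A2)
```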

These spaces have some interesting properties. (The results below apply to finite-dimensional vector spaces only.)

Proposition 1. For all vectors $v$, $v\in\mathcal{N}(A^{k+1})\iff Av\in\mathcal{N}(A^k)$ for all $k\geq 1$.

Proof. This follows straight from definitions, and the fact that $A^{k+1}v = A^k (Av)$. $\blacksquare$

Proposition 2. For all vectors $v$, if $v\in\mathcal{N}(A^{k+1})$ but $v\notin\mathcal{N}(A^k)$, we have that $\{v, Av, A^2 v, \ldots, A^k v\}$ is a linearly independent set.

Proof. By the previous proposition, we have that $A^{j} v\in\mathcal{N}(A^{k-j+1})$ and $A^{j} v\notin\mathcal{N}(A^{k-j})$ for all $j$ between $0$ and $k$. However, if $A^{j}v$ could be expressed as a linear combination of vectors taking the form $A^{i}v$ with $i>j$, it would follow that $A^{j} v\in\mathcal{N}(A^{k-j})$, since $A^{i} v\in\mathcal{N}(A^{k-j})$ for all $i>j$. Hence, no vector in this set can be expressed as a linear combination of the subsequent vectors, meaning that it must be a linearly independent set. $\blacksquare$
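Proposition 2 can be illustrated numerically; here is a minimal NumPy sketch using a $4\times 4$ nilpotent Jordan block (an illustrative choice, not from the note):

```python
import numpy as np

# A 4x4 nilpotent Jordan block: A sends e2->e1, e3->e2, e4->e3, e1->0.
A = np.diag(np.ones(3), k=1)

# e4 lies in N(A^4) but not N(A^3), so by Proposition 2 the chain
# v, Av, A^2 v, A^3 v should be linearly independent.
v = np.array([0.0, 0.0, 0.0, 1.0])
chain = np.column_stack([np.linalg.matrix_power(A, j) @ v for j in range(4)])
print(np.linalg.matrix_rank(chain))  # 4: the chain is linearly independent
```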

Proposition 3. If $A$ is $n\times n$, then for all $k$, $\mathcal{N}(A^k)\subset \mathcal{N}(A^n)$.

Proof. Suppose there existed some vector $v$ such that $v\in\mathcal{N}(A^{n+1})$ but $v\notin\mathcal{N}(A^{n})$. Then the vectors $\{v,Av,...,A^n v\}$ would form a linearly independent set. But this is not possible, since the above set consists of $n+1$ elements, but $v$ is only an $n\times 1$ vector, meaning that the underlying vector space only has dimension $n$ at most. Hence $\mathcal{N}(A^{n+1})=\mathcal{N}(A^n)$; and by Proposition 1, once two consecutive null spaces coincide, all subsequent ones coincide as well, so $\mathcal{N}(A^k)=\mathcal{N}(A^n)$ for all $k\geq n$. $\blacksquare$
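Proposition 3 can also be seen numerically; a NumPy sketch with an illustrative $5\times 5$ matrix mixing a nilpotent block with an invertible part:

```python
import numpy as np

def nullity(M):
    # dim N(M) via rank-nullity
    return M.shape[0] - np.linalg.matrix_rank(M)

# n = 5: a 3x3 nilpotent Jordan block plus an invertible diagonal part.
A = np.zeros((5, 5))
A[0, 1] = A[1, 2] = 1.0
A[3, 3], A[4, 4] = 2.0, -1.0

dims = [nullity(np.linalg.matrix_power(A, k)) for k in range(1, 8)]
print(dims)  # [1, 2, 3, 3, 3, 3, 3] -- the chain stabilizes by power n
```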

Now let's prove a more difficult result: that the dimension of the generalized null space $\mathcal{N}_n(A) = \mathcal{N}(A^n)$, for an $n\times n$ matrix $A$, equals the multiplicity of the factor $t$ in the characteristic polynomial $p_A(t)$. For instance, if we know that a matrix has characteristic polynomial $t^3 (t-1)(t-2)^2$, then we know that $\dim \mathcal{N}_n(A)=3$. In order to prove this interesting connection, we will consider rank-one perturbations $A+ww^\ast$ of a matrix $A$, for $w$ in the null space of $A$. We must prove a couple of preliminary propositions about the relationship between $A$ and $A+ww^\ast$ before jumping into this result.
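Before proving this, the claimed connection can be checked numerically on the example above (a NumPy sketch; the block-diagonal matrix is an illustrative choice realizing the characteristic polynomial $t^3(t-1)(t-2)^2$):

```python
import numpy as np

# Block matrix with characteristic polynomial t^3 (t-1)(t-2)^2:
# a 3x3 nilpotent Jordan block together with diag(1, 2, 2).
A = np.zeros((6, 6))
A[0, 1] = A[1, 2] = 1.0
A[3, 3], A[4, 4], A[5, 5] = 1.0, 2.0, 2.0

n = A.shape[0]
dim_gen_null = n - np.linalg.matrix_rank(np.linalg.matrix_power(A, n))
print(dim_gen_null)  # 3, the multiplicity of the factor t
```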

Proposition 5. If $w\in\mathcal{N}_1(A)$ with $\lVert w\rVert = 1$, then $$\dim\mathcal{N}_n(A+ww^\ast) = \dim\mathcal{N}_n(A) - 1.$$

Proof. Every vector $v$ can be decomposed uniquely in the form $r+w'+\alpha w$, where $r$ is in the orthogonal complement of $\mathcal{N}_n(A)$, $w'$ is in the orthogonal complement of $\text{span}\{w\}$ as a subspace of $\mathcal{N}_n(A)$, $w$ is the given unit vector in the null space of $A$, and $\alpha$ is a scalar. Notice that $$(A+ww^\ast)v = Av + (w^\ast v)w = Ar + Aw' + \alpha w$$ since $w^\ast r = w^\ast w' = 0$ and $Aw = 0$, so that, by induction, we may show that $$(A+ww^\ast)^k v = A^k r + A^k w' + \beta_k w$$ for some scalar $\beta_k$. First of all, notice that if $r\neq 0$ the first term $A^k r$ is not in the generalized null space $\mathcal{N}_n(A)$ (if it were, then $r\in\mathcal{N}(A^{n+k})=\mathcal{N}_n(A)$ by Proposition 3, contradicting $r\perp\mathcal{N}_n(A)$), whereas the latter two terms are in this generalized null space, so $(A+ww^\ast)^k v$ cannot equal $0$ unless $r=0$. Therefore, if we wish to consider vectors $v$ in the generalized null space $\mathcal{N}_n(A+ww^\ast)$, we need only consider those taking the form $v=w'+\alpha w$, for which we have $$(A+ww^\ast)^k v = A^k w' + \beta_k w, \qquad \beta_k = \alpha + \sum_{j=1}^{k-1} w^\ast A^j w'.$$ Now we claim that for any given $w'$, there is precisely one value of $\alpha$ for which $v=w'+\alpha w$ is in $\mathcal{N}_n(A+ww^\ast)$. First of all, if $w'=0$, then $\alpha=0$ is the only such value, since $(A+ww^\ast)(\alpha w) = \alpha w$. Otherwise, suppose $w'$ has index $p$, meaning that $w'\in\mathcal{N}(A^p)$ but $w'\notin\mathcal{N}(A^{p-1})$. Then we have that $$(A+ww^\ast)^p v = A^p w' + \beta_p w = \beta_p w,$$ which is a scalar multiple of $w$. In order for this to be in $\mathcal{N}_n(A+ww^\ast)$ at all, it must be the zero vector, meaning that $$\alpha + \sum_{j=1}^{p-1} w^\ast A^j w' = 0$$ must hold, and this equation completely determines the value of $\alpha$, as claimed.

Now, $\mathcal{N}_n(A)$ consists of all vectors taking the form $w'+\alpha w$, but $\mathcal{N}_n(A+ww^\ast)$ consists of vectors taking the form $w'+\alpha w$ for precisely one value of $\alpha$, which depends on $w'$. This means that one degree of freedom is lost, and we have that $$\dim\mathcal{N}_n(A+ww^\ast) = \dim\mathcal{N}_n(A) - 1$$ as desired. $\blacksquare$
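Proposition 5 can be sanity-checked numerically (a NumPy sketch; the $4\times 4$ matrix is an illustrative choice):

```python
import numpy as np

def gen_nullity(M):
    # dim of the generalized null space N(M^n), by rank-nullity
    n = M.shape[0]
    return n - np.linalg.matrix_rank(np.linalg.matrix_power(M, n))

# A has a 3-dimensional generalized null space (3x3 nilpotent Jordan block).
A = np.zeros((4, 4))
A[0, 1] = A[1, 2] = 1.0
A[3, 3] = 5.0

w = np.array([1.0, 0.0, 0.0, 0.0])   # unit vector in N(A)
B = A + np.outer(w, w)

dA, dB = gen_nullity(A), gen_nullity(B)
print(dA, dB)  # 3 and 2: the dimension drops by exactly one
```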

Proposition 6. If $w\in \mathcal{N}(A)$ with $\lVert w\rVert = 1$, and $B=A+ww^\ast$, then the multiplicity with which $t$ divides $p_A(t)$ is one more than the multiplicity with which it divides $p_B(t)$.

Proof. Notice that $$tI - B = (tI-A)\big(I - (tI-A)^{-1}ww^\ast\big)$$ which, by taking determinants of both sides, implies that $$p_B(t) = p_A(t)\det\big(I - (tI-A)^{-1}ww^\ast\big).$$ Using the formula for the characteristic polynomial of a rank-one matrix from my note on characteristic polynomials, we have that $$\det\big(I - (tI-A)^{-1}ww^\ast\big) = 1 - w^\ast(tI-A)^{-1}w = 1 - \frac{1}{t},$$ since $Aw=0$ implies $(tI-A)^{-1}w = t^{-1}w$ and $w^\ast w = 1$, or $$t\, p_B(t) = (t-1)\, p_A(t),$$ which implies the desired relationship between the multiplicities with which $t$ divides $p_A$ and $p_B$. $\blacksquare$
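Numerically, one can check the stronger polynomial identity $t\,p_B(t) = (t-1)\,p_A(t)$, from which the multiplicity relationship follows (a NumPy sketch with an illustrative matrix; `numpy.poly` returns the characteristic polynomial coefficients of a square matrix, leading coefficient first):

```python
import numpy as np

# Same setup as Proposition 6: B = A + w w*, with w a unit null vector of A.
A = np.zeros((4, 4))
A[0, 1] = A[1, 2] = 1.0
A[3, 3] = 5.0
w = np.array([1.0, 0.0, 0.0, 0.0])     # unit vector in N(A)
B = A + np.outer(w, w)

pA = np.poly(A)                         # coefficients of p_A(t)
pB = np.poly(B)                         # coefficients of p_B(t)

lhs = np.polymul([1.0, 0.0], pB)        # t * p_B(t)
rhs = np.polymul([1.0, -1.0], pA)       # (t - 1) * p_A(t)
print(np.allclose(lhs, rhs))            # True
```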

Now we are prepared to prove the desired theorem:

Theorem 1. The multiplicity with which $t$ divides $p_A(t)$ is equal to $\dim\mathcal{N}_n(A)$.

Proof. We know that taking $A\mapsto A+ww^\ast$ for any normalized $w$ in the null space of $A$ reduces the dimension of $\mathcal{N}_n$ by $1$, and also reduces by $1$ the multiplicity with which $t$ divides the characteristic polynomial. Since a matrix is nonsingular iff the dimension of its generalized null space $\mathcal{N}_n$ equals $0$, and $t$ has multiplicity $0$ in the characteristic polynomial of a nonsingular matrix, we have the desired result by induction. $\blacksquare$

Question. In $\mathbb{F}^n$ for some field $\mathbb{F}$, given a sequence of subspaces $S_1\subsetneq S_2\subsetneq ... \subsetneq S_k$ with $\dim(S_k) \leq n$, is it always possible to find a matrix $A$ such that $\mathcal{N}(A^j) = S_j$ for each $j$ between $1$ and $k$? In other words, can we always find a matrix with a prescribed sequence of nested generalized null spaces?

Solution. Not always! Writing $n_j = \dim(S_j)$, Proposition 1 tells us that $A$ maps $\mathcal{N}(A^{j+1})$ into $\mathcal{N}(A^j)$, and the induced map on quotients $\mathcal{N}(A^{j+1})/\mathcal{N}(A^j)\to\mathcal{N}(A^j)/\mathcal{N}(A^{j-1})$ is injective, so the dimension increments of a chain of generalized null spaces can never grow: we must have $n_1 \geq n_2 - n_1 \geq \cdots \geq n_k - n_{k-1}$. For instance, no matrix has $\mathcal{N}(A)$ of dimension $1$ but $\mathcal{N}(A^2)$ of dimension $3$. Whenever the increments are non-increasing, however, such a matrix does exist. Let $S_1\subsetneq S_2\subsetneq ... \subsetneq S_k$ be a sequence of nested subspaces with orthonormal basis vectors $w_1,\ldots,w_{n_k}$ chosen so that $\{w_1,\ldots,w_{n_j}\}$ is a basis of $S_j$, where $n_1 < n_2 < ... < n_k$ and the increments $d_j = n_j - n_{j-1}$ (with $n_0 = 0$) satisfy $d_1\geq d_2\geq\cdots\geq d_k$. First of all, consider the matrix $$A_1 = I - \sum_{i=1}^{n_1} w_i w_i^\ast.$$ It is easy to verify that this matrix sends each $w_i$ to zero for $1\leq i\leq n_1$, meaning that it sends all elements of $S_1$ to zero, but it sends all vectors not in $S_1$ to a nonzero vector. Additionally, it sends all other $w_i$ with $i>n_1$ to themselves, because these vectors are orthogonal to the $w_i$ with $i\leq n_1$. Now consider the matrix $$A_2 = A_1 - \sum_{i=1}^{d_2}(w_{n_1+i}-w_i)\,w_{n_1+i}^\ast.$$ Again, since $w_1,...,w_{n_1}$ are orthogonal to $w_{n_1+1},...,w_{n_2}$, it sends them to $0$, and it can be verified using straightforward algebra (as well as the fact that $A_1 w_i = w_i$ for $i>n_1$) that $A_2$ sends each $w_{n_1+i}$ with $1\leq i\leq d_2$ to the distinct vector $w_i\in S_1$, meaning that $A_2^2$ sends these vectors to the zero vector. (Note that $d_2\leq n_1$ guarantees these targets lie in $S_1$; sending all of $w_{n_1+1},...,w_{n_2}$ to a single vector of $S_1$ would not work, since then differences such as $w_{n_1+1}-w_{n_1+2}$ would be annihilated by one application of $A_2$, making $\mathcal{N}(A_2)$ too large.) Hence, $\mathcal{N}(A_2)=S_1$ and $\mathcal{N}(A_2^2)=S_2$.
Using a similar technique, we may continue to define a sequence of matrices recursively as follows: $$A_j = A_{j-1} - \sum_{i=1}^{d_j}\left(w_{n_{j-1}+i}-w_{n_{j-2}+i}\right)w_{n_{j-1}+i}^\ast$$ and we may verify inductively that for the matrix $A_j$ with $j\leq k$, we have that $\mathcal{N}(A_j^p)=S_p$ for all $p\leq j$. Thus, $A_k$ can be used as the desired matrix with prescribed generalized null spaces $S_1\subsetneq S_2\subsetneq ... \subsetneq S_k$.
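Such a construction can be sketched in NumPy, under the assumption (needed in general) that the dimension increments are non-increasing; here the standard basis plays the role of the $w_i$, and each step sends the new basis vectors of $S_j$ to distinct basis vectors one level down:

```python
import numpy as np

# Nested subspaces S_1 = span(e1, e2), S_2 = span(e1..e4), S_3 = R^5,
# with dimension increments 2, 2, 1 (non-increasing, as required).
n = 5
dims = [2, 4, 5]                       # n_1 < n_2 < n_3
W = np.eye(n)                          # orthonormal basis: w_i = e_i

# A_1 = I - sum over i <= n_1 of w_i w_i^*: kills S_1, identity elsewhere.
A = np.eye(n) - sum(np.outer(W[:, i], W[:, i]) for i in range(dims[0]))

# A_j = A_{j-1} - sum_i (w_{n_{j-1}+i} - w_{n_{j-2}+i}) w_{n_{j-1}+i}^*:
# sends each new basis vector of S_j to a distinct vector one level down.
for j in range(1, len(dims)):
    lo, prev = dims[j - 1], (dims[j - 2] if j >= 2 else 0)
    for i in range(dims[j] - lo):
        A -= np.outer(W[:, lo + i] - W[:, prev + i], W[:, lo + i])

# Each power's null space should recover the next subspace in the chain.
nullities = [n - np.linalg.matrix_rank(np.linalg.matrix_power(A, p))
             for p in range(1, len(dims) + 1)]
print(nullities)  # [2, 4, 5]: N(A^p) = S_p for each p
```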

linear-algebra

matrices

null-space
