Linear Algebra (2ed) Hoffman & Kunze 7.3

The Jordan form is a form even simpler than an upper-triangular one, obtained by combining the earlier results on nilpotent matrices with the primary decomposition theorem. The Jordan form is particularly convenient for discussing abstract questions, but actually computing the Jordan form of a given operator (i.e., finding a corresponding basis or similarity transformation) seems rather laborious. Much of this section concerns theoretical applications of the Jordan form.

Exercises

1.Let N_1 and N_2 be 3\times 3 nilpotent matrices over the field F. Prove that N_1 and N_2 are similar if and only if they have the same minimal polynomial.
Solution: If N_1 and N_2 are similar, then they have the same minimal polynomial, since similar matrices share both their characteristic and minimal polynomials.
Conversely, let p be the minimal polynomial for N_1 and N_2, then since both are nilpotent, p can only be x,x^2 or x^3. If p=x, then N_1=N_2=0 and are similar. If p=x^2, then both N_1 and N_2 are similar to the matrix

\displaystyle{\begin{bmatrix}0&0&0\\1&0&0\\0&0&0\end{bmatrix}}.

If p=x^3, then both N_1 and N_2 are similar to the matrix

\displaystyle{\begin{bmatrix}0&0&0\\1&0&0\\0&1&0\end{bmatrix}}.

2.Use the result of Exercise 1 and the Jordan form to prove the following: Let A and B be n\times n matrices over the field F which have the same characteristic polynomial

\displaystyle{f=(x-c_1)^{d_1}\cdots(x-c_k)^{d_k}}

and the same minimal polynomial. If no d_i is greater than 3, then A and B are similar.
Solution: We can see that A and B are respectively similar to matrices

\displaystyle{A'=\begin{bmatrix}A_1&0&\cdots&0\\0&A_2&\cdots&0\\{\vdots}&\vdots&&\vdots\\0&0&\cdots&A_k\end{bmatrix},\quad B'=\begin{bmatrix}B_1&0&\cdots&0\\0&B_2&\cdots&0\\{\vdots}&\vdots&&\vdots\\0&0&\cdots&B_k\end{bmatrix}}

where each A_i,B_i is a d_i\times d_i matrix with characteristic polynomial (x-c_i)^{d_i}. Since A and B have the same minimal polynomial, for each i the nilpotent matrices A_i-c_iI and B_i-c_iI have the same minimal polynomial (both equal x^{r_i}, where (x-c_i)^{r_i} is the exact power of x-c_i dividing the common minimal polynomial). As d_i\leq 3, Exercise 1 (and its obvious analogues for 1\times 1 and 2\times 2 nilpotent matrices) shows A_i-c_iI is similar to B_i-c_iI, hence A_i is similar to B_i, and the conclusion follows.

3.If A is a complex 5\times 5 matrix with characteristic polynomial f=(x-2)^3(x+7)^2, and minimal polynomial p=(x-2)^2(x+7), what is the Jordan form for A?
Solution: The Jordan form for A is

\displaystyle{\begin{bmatrix}2&0&0&0&0\\1&2&0&0&0\\0&0&2&0&0\\0&0&0&-7&0\\0&0&0&0&-7\end{bmatrix}}

4.How many possible Jordan forms are there for a 6\times 6 complex matrix with characteristic polynomial (x+2)^4(x-1)^2?
Solution: The Jordan form of this matrix can be written as \begin{bmatrix}A_1&0\\0&A_2\end{bmatrix}, where A_1 is a 4\times 4 matrix and A_2 is a 2\times 2 matrix. A_2 has two possible forms, corresponding to the minimal polynomial of T restricted to W_2=\text{null }(T-I)^2 being x-1 or (x-1)^2.
For A_1, if the minimal polynomial p of T restricted to W_1=\text{null }(T+2I)^4 is x+2, then A_1 has only one possible form; the same holds when p=(x+2)^3 (blocks of sizes 3 and 1) and when p=(x+2)^4 (a single block). When p=(x+2)^2, there are two possible forms of A_1, namely

\displaystyle{\begin{bmatrix}-2&0&0&0\\1&-2&0&0\\0&0&-2&0\\0&0&0&-2\end{bmatrix},\begin{bmatrix}-2&0&0&0\\1&-2&0&0\\0&0&-2&0\\0&0&1&-2\end{bmatrix}}

So A_1 has 5 possible forms and A_2 has 2; thus there are 5\times 2=10 possible Jordan forms for the original matrix, as the enumeration sketch below confirms.
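Since each Jordan form corresponds to a choice of a partition of 4 (block sizes for -2) and a partition of 2 (block sizes for 1), the count can be double-checked by enumerating partitions. A small Python sketch (the partitions helper below is ad hoc, written just for this check):

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples of block sizes."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

p4 = list(partitions(4))   # block sizes for the characteristic value -2
p2 = list(partitions(2))   # block sizes for the characteristic value 1
print(p4)                  # [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
print(len(p4) * len(p2))   # 10
```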

5.The differentiation operator on the space of polynomials of degree less than or equal to 3 is represented in the ‘natural’ ordered basis by the matrix

\displaystyle{\begin{bmatrix}0&1&0&0\\0&0&2&0\\0&0&0&3\\0&0&0&0\end{bmatrix}}

What is the Jordan form of this matrix? (F a subfield of the complex numbers.)
Solution: Let D be the differentiation operator. Then D^4=0, so D is nilpotent; since D^3\neq 0, the minimal polynomial for D is x^4 and the Jordan form of this matrix is

\displaystyle{\begin{bmatrix}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix}}
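For readers who want a mechanical check, here is a quick sympy sketch (sympy is assumed to be available; note that sympy's jordan_form places the 1's above the diagonal, the transpose of the convention used here):

```python
from sympy import Matrix, zeros

D = Matrix([[0, 1, 0, 0],
            [0, 0, 2, 0],
            [0, 0, 0, 3],
            [0, 0, 0, 0]])
print(D**3 != zeros(4, 4), D**4 == zeros(4, 4))  # True True
P, J = D.jordan_form()                           # D == P*J*P**-1
print(J)                                         # a single 4x4 nilpotent Jordan block
```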

6.Let A be the complex matrix

\displaystyle{\begin{bmatrix}2&0&0&0&0&0\\1&2&0&0&0&0\\-1&0&2&0&0&0\\0&1&0&2&0&0\\1&1&1&1&2&0\\0&0&0&0&1&-1\end{bmatrix}.}

Find the Jordan Form for A.
Solution: The characteristic polynomial for A is (x-2)^5(x+1), and

\displaystyle{A-2I=\begin{bmatrix}0&0&0&0&0&0\\1&0&0&0&0&0\\-1&0&0&0&0&0\\0&1&0&0&0&0\\1&1&1&1&0&0\\0&0&0&0&1&-3\end{bmatrix}}

we can verify that (A-2I)^4(A+I)=0 while (A-2I)^3(A+I)\neq 0, so the minimal polynomial for A is (x-2)^4(x+1); the elementary Jordan matrices for the characteristic value 2 then have sizes 4 and 1, and the Jordan form for A is

\displaystyle{\begin{bmatrix}2\\1&2\\&1&2\\&&1&2\\&&&&2&\\&&&&&-1\end{bmatrix}}
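The verification of the minimal polynomial is mechanical; a sympy sketch of it (sympy assumed):

```python
from sympy import Matrix, eye, zeros, symbols, factor

A = Matrix([[ 2, 0, 0, 0, 0,  0],
            [ 1, 2, 0, 0, 0,  0],
            [-1, 0, 2, 0, 0,  0],
            [ 0, 1, 0, 2, 0,  0],
            [ 1, 1, 1, 1, 2,  0],
            [ 0, 0, 0, 0, 1, -1]])
x = symbols('x')
print(factor(A.charpoly(x).as_expr()))   # (x - 2)**5*(x + 1)
N, M = A - 2*eye(6), A + eye(6)
print(N**4 * M == zeros(6, 6))           # True
print(N**3 * M == zeros(6, 6))           # False
print(len(N.nullspace()))                # 2: two blocks for the value 2
```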

7.If A is an n\times n matrix over the field F with characteristic polynomial

\displaystyle{f=(x-c_1)^{d_1}\cdots(x-c_k)^{d_k}}

what is the trace of A?
Solution: A is similar to its Jordan form, whose diagonal entries are the characteristic values c_i, each appearing d_i times; since similar matrices have the same trace, \text{trace}(A)=\sum_{i=1}^kd_ic_i.

8.Classify up to similarity all 3\times 3 complex matrices A such that A^3=I.
Solution: Since A^3=I, the minimal polynomial for A divides x^3-1=(x-1)(x-\omega)(x-\omega^2), where \omega=-\dfrac{1}{2}+\dfrac{\sqrt{3}}{2}i; the three roots are distinct, so A is diagonalizable. Hence A is similar to a diagonal matrix \text{diag}(c_1,c_2,c_3) with each c_i\in\{1,\omega,\omega^2\}, and two such matrices are similar if and only if their diagonal entries agree as multisets; the multiset \{c_1,c_2,c_3\} thus classifies A up to similarity.

9.Classify up to similarity all n\times n complex matrices A such that A^n=I.
Solution: The minimal polynomial for A divides x^n-1, which has n distinct roots, the n-th roots of unity; hence A is diagonalizable. So A is similar to a diagonal matrix \text{diag}(c_1,\dots,c_n) where each c_i is an n-th root of unity (not necessarily distinct), and the multiset \{c_1,\dots,c_n\} classifies A up to similarity.

10.Let n be a positive integer, n\geq 2, and let N be an n\times n matrix over the field F such that N^n=0 but N^{n-1}\neq 0. Prove that N has no square root, i.e., that there is no n\times n matrix A such that A^2=N.
Solution: Assume such an A exists. Then A^{2n}=N^n=0, so A is nilpotent and hence A^n=0. But N^{n-1}=A^{2n-2}\neq 0, and since A^m=0 for all m\geq n, this forces 2n-2<n, i.e., n<2, a contradiction.

11.Let N_1 and N_2 be 6\times 6 nilpotent matrices over the field F. Suppose that N_1 and N_2 have the same minimal polynomial and the same nullity. Prove that N_1 and N_2 are similar. Show that this is not true for 7\times 7 nilpotent matrices.
Solution: Let x^k be the common minimal polynomial of N_1 and N_2, and let n be their common nullity; then the Jordan forms of N_1 and N_2 each consist of n elementary Jordan matrices with characteristic value 0, the largest of size k\times k. So we only need to check that the partition of 6 into n parts with largest part k is unique in every case with k\leq 6.
If k=1, the result is obvious.
If k=2, then n is at least 3; when n=3 the partition is 2+2+2, when n=4 it is 2+2+1+1, when n=5 it is 2+1+1+1+1, each of which is unique.
If k=3, then n is at least 2; when n=2 the partition is 3+3, when n=3 it is 3+2+1, when n=4 it is 3+1+1+1, each of which is unique.
If k=4, then n is at least 2; when n=2 the partition is 4+2, when n=3 it is 4+1+1, each of which is unique.
If k=5, then n=2 and the partition is 5+1.
If k=6, then n=1 and there is only one elementary Jordan matrix.
Thus in all cases the Jordan forms of N_1 and N_2 coincide, so they are similar.
For 7\times 7 matrices, let

\displaystyle{N_1=\begin{bmatrix}0&0&0\\1&0&0\\0&1&0\\&&&0&0&0\\&&&1&0&0\\&&&0&1&0\\&&&&&&0\end{bmatrix},N_2=\begin{bmatrix}0&0&0\\1&0&0\\0&1&0\\&&&0&0\\&&&1&0\\&&&&&0&0\\&&&&&1&0\end{bmatrix}}

Then N_1 and N_2 both have minimal polynomial x^3 and nullity 3, but they are not similar, since their block sizes form the different partitions 3+3+1 and 3+2+2.
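A sympy sketch of this counterexample (sympy assumed; the helper builds block-diagonal nilpotent matrices with the given block sizes):

```python
from sympy import zeros

def lower_jordan_nilpotent(*sizes):
    """Block-diagonal nilpotent matrix with 1's directly below the diagonal."""
    M = zeros(sum(sizes), sum(sizes))
    start = 0
    for s in sizes:
        for i in range(1, s):
            M[start + i, start + i - 1] = 1
        start += s
    return M

N1 = lower_jordan_nilpotent(3, 3, 1)
N2 = lower_jordan_nilpotent(3, 2, 2)
for N in (N1, N2):
    print(N**3 == zeros(7, 7), N**2 != zeros(7, 7), len(N.nullspace()))
    # prints "True True 3" twice: same minimal polynomial x^3, same nullity 3
```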

12.Use the result of Exercise 11 and the Jordan form to prove the following: Let A and B be n\times n matrices over the field F which have the same characteristic polynomial

\displaystyle{f=(x-c_1)^{d_1}\cdots(x-c_k)^{d_k}}

and the same minimal polynomial. Suppose also that for each i the solution space of (A-c_iI) and (B-c_iI) have the same dimension. If no d_i is greater than 6, then A and B are similar.
Solution: We can see that A and B are respectively similar to matrices

\displaystyle{A'=\begin{bmatrix}A_1&0&\cdots&0\\0&A_2&\cdots&0\\{\vdots}&\vdots&&\vdots\\0&0&\cdots&A_k\end{bmatrix},\quad B'=\begin{bmatrix}B_1&0&\cdots&0\\0&B_2&\cdots&0\\{\vdots}&\vdots&&\vdots\\0&0&\cdots&B_k\end{bmatrix}}

where each A_i,B_i is a d_i\times d_i matrix with characteristic polynomial (x-c_i)^{d_i}. For each i, the nilpotent matrices A_i-c_iI and B_i-c_iI have the same minimal polynomial (both equal x^{r_i}, where (x-c_i)^{r_i} is the exact power of x-c_i dividing the common minimal polynomial) and the same nullity (the blocks A_j-c_iI with j\neq i are invertible, so the nullity of A_i-c_iI equals the dimension of the solution space of A-c_iI, and similarly for B). Since d_i\leq 6, Exercise 11 shows they are similar, so for some invertible d_i\times d_i matrix P_i we have

\displaystyle{P_i^{-1}(A_i-c_iI)P_i=B_i-c_iI\implies P_i^{-1}A_iP_i=B_i}

thus A' is similar to B', and hence A and B are similar.

13.If N is a k\times k elementary nilpotent matrix, i.e., N^k=0 but N^{k-1}\neq 0, show that N^t is similar to N. Now use the Jordan form to prove that every complex n\times n matrix is similar to its transpose.
Solution: The minimal polynomial for N is x^k, and thus the Jordan form for N is the k\times k matrix

\displaystyle{J=\begin{bmatrix}0&\\1&\ddots\\&\ddots&\ddots\\&&1&0\end{bmatrix}}

Notice that for N^t, we also have (N^t)^k=(N^k)^t=0 and (N^t)^{k-1}=(N^{k-1})^t\neq 0, so N^t is also similar to J, and hence N^t is similar to N. Now let A be any complex n\times n matrix with Jordan form J_A=P^{-1}AP. Each elementary Jordan matrix has the form cI+N with N elementary nilpotent, and if Q^{-1}N^tQ=N then Q^{-1}(cI+N)^tQ=cI+N, so each elementary Jordan matrix is similar to its transpose; applying this block by block, J_A^t is similar to J_A. Finally, A^t=(PJ_AP^{-1})^t=(P^t)^{-1}J_A^tP^t is similar to J_A^t, hence to J_A, hence to A.

14.What is wrong with the following proof? If A is a complex n\times n matrix such that A^t=-A, then A=0. (Proof: Let J be the Jordan form of A. Since A^t=-A,J^t=-J. But J is triangular so that J^t=-J implies that every entry of J is zero. Since J=0 and A is similar to J, we see that A=0.) (Give an example of a non-zero A such that A^t=-A.)
Solution: An example is A=\begin{bmatrix}0&1\\-1&0\end{bmatrix}. The flaw in the proof is the claim that A^t=-A implies J^t=-J: we have J=P^{-1}AP for some invertible P, so J^t=P^tA^t(P^t)^{-1}=-P^tA(P^t)^{-1}, and since in general P^t\neq P^{-1}, this need not equal -J.

15.If N is a nilpotent 3\times 3 matrix over C, prove that A=I+\frac{1}{2}N-\frac{1}{8}N^2 satisfies A^2=I+N, i.e., A is a square root of I+N. Use the binomial series for (1+t)^{1/2} to obtain a similar formula for a square root of I+N, where N is any nilpotent n\times n matrix over C.
Solution: We know that N^3=0 and then

\displaystyle{\begin{aligned}A^2&=\left(I+\frac{1}{2}N-\frac{1}{8}N^2\right)\left(I+\frac{1}{2}N-\frac{1}{8}N^2\right)\\&=I+\frac{1}{2}N-\frac{1}{8}N^2+\frac{1}{2}N+\frac{1}{4}N^2-\frac{1}{8}N^2=I+N\end{aligned}}

Using Taylor's formula at t=0, we have

\displaystyle{(1+t)^{1/2}=1+\sum_{i=1}^{\infty}\frac{1}{i!}\left[\frac{d^i}{dt^i}(1+t)^{1/2}\right]_{t=0}t^i=1+\sum_{i=1}^{\infty}(-1)^{i+1}\frac{(2i-3)!!}{i!2^i}t^i}

with the convention (-1)!!=1.

So a square root for I+N, where N is an n\times n nilpotent matrix (so that N^n=0), can be

\displaystyle{A=I+\sum_{i=1}^{n-1}(-1)^{i+1}\frac{(2i-3)!!}{i!2^i}N^i}
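As a sanity check of this formula, here is a sympy sketch for a sample 4\times 4 nilpotent matrix, in exact rational arithmetic (sympy assumed):

```python
from sympy import Matrix, Rational, eye, factorial

n = 4
N = Matrix(n, n, lambda i, j: 1 if i == j + 1 else 0)   # N**n == 0

def double_factorial(k):   # with the convention (-1)!! = 1
    return 1 if k <= 0 else k * double_factorial(k - 2)

A = eye(n)
for i in range(1, n):
    A += Rational((-1)**(i + 1) * double_factorial(2*i - 3),
                  factorial(i) * 2**i) * N**i

print(A**2 == eye(n) + N)   # True
```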

16.Use the result of Exercise 15 to prove that if c is a non-zero complex number and N is a nilpotent complex matrix, then (cI+N) has a square root. Now use the Jordan form to prove that every non-singular complex n\times n matrix has a square root.
Solution: Since c\neq 0, c^{-1}\in C, and c^{-1}N is nilpotent, so by Exercise 15, I+c^{-1}N has a square root A. Consider c^{1/2}A, where c^{1/2} is either complex square root of c; we have

\displaystyle{((c)^{1/2}A)^2=cA^2=c(I+c^{-1}N)=cI+N}

thus (cI+N) has a square root.
Now let B be any non-singular complex n\times n matrix; the characteristic polynomial for B is

\displaystyle{f=(x-c_1)^{d_1}\cdots(x-c_k)^{d_k}}

where c_i\neq 0 for all i and \sum_{i=1}^kd_i=n. The Jordan form of B is of the form

\displaystyle{J_B=\begin{bmatrix}B_1\\&{\ddots}\\&&B_k\end{bmatrix}}

where each B_i is a d_i\times d_i matrix consisting of elementary Jordan matrices associated with c_i, and P^{-1}BP=J_B for some invertible n\times n matrix P. Notice that B_i-c_iI is nilpotent, so by the first part B_i=c_iI+(B_i-c_iI) has a square root A_i; if we let

\displaystyle{A=\begin{bmatrix}A_1\\&{\ddots}\\&&A_k\end{bmatrix}}

then A^2=J_B, so (PAP^{-1})^2=PA^2P^{-1}=PJ_BP^{-1}=B, and PAP^{-1} is a square root of B.

Linear Algebra (2ed) Hoffman & Kunze 7.2

This section is short, but the proofs are hard. The central goal is to show that V is a direct sum of finitely many cyclic subspaces. It first introduces T-admissible subspaces, a notion stronger than invariance which guarantees that polynomial computations have compatible components (projections) in the subspace. Theorem 3 is the Cyclic Decomposition Theorem: compared with the earlier Primary Decomposition Theorem, it shows that V decomposes, uniquely in the appropriate sense, into a direct sum of finitely many T-admissible subspaces, each of which is cyclic, with the T-annihilators of the generators dividing one another successively. Earlier in the section the authors call this one of the deepest results in linear algebra, and the proof is indeed intricate. The theorem has a series of important corollaries; for example, every T-admissible subspace has an invariant complementary subspace. Theorem 4 is the generalized Cayley-Hamilton theorem: beyond the fact that the minimal polynomial divides the characteristic polynomial, the two have the same prime factors, and from the minimal polynomial (together with the decomposition) the characteristic polynomial can be obtained. Theorem 5 states that every matrix is similar to exactly one matrix in rational form.

Exercises

1.Let T be the linear operator on F^2 which is represented in the standard ordered basis by the matrix \begin{bmatrix}0&0\\1&0\end{bmatrix}. Let \alpha_1=(0,1). Show that F^2\neq Z(\alpha_1;T), and that there is no non-zero vector \alpha_2 in F^2 with Z(\alpha_2;T) disjoint from Z(\alpha_1;T).
Solution: We have

\displaystyle{T\alpha_1=\begin{bmatrix}0&0\\1&0\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix}=0}

thus p_{\alpha_1}=x, which means \dim Z(\alpha_1;T)=1, so F^2\neq Z(\alpha_1;T).
Suppose there is some \alpha_2=(a,b)\neq 0 such that Z(\alpha_2;T) is disjoint from Z(\alpha_1;T). Since Z(\alpha_1;T) and Z(\alpha_2;T) are then independent, \dim Z(\alpha_2;T)\leq 2-\dim Z(\alpha_1;T)=1, so \dim Z(\alpha_2;T)=1, which means p_{\alpha_2}=x, i.e., T\alpha_2=(0,a)=0. Hence a=0 and \alpha_2=(0,b) with b\neq 0, but this means \alpha_2=b\alpha_1\in Z(\alpha_1;T), which contradicts the hypothesis that Z(\alpha_2;T) is disjoint from Z(\alpha_1;T).

2.Let T be a linear operator on the finite-dimensional space V, and let R be the range of T.
( a ) Prove that R has a complementary T-invariant subspace if and only if R is independent of the null space N of T.
( b ) If R and N are independent, prove that N is the unique T-invariant subspace complementary to R.
Solution:
( a ) If R is independent of N, then from \dim R+\dim N=\dim V we know that R\oplus N=V, and N is obviously T-invariant. Conversely, if R has a complementary T-invariant subspace R', let \beta\in R'; then T\beta\in R' by invariance and T\beta\in R since R is the range, thus T\beta\in R\cap R'=\{0\}, so T\beta=0 and \beta\in N. Hence R'\subseteq N, and since \dim R'=\dim N=\dim V-\dim R, we get R'=N, so R\cap N=\{0\}.
( b ) Let R' be any T-invariant subspace complementary to R; from the proof of ( a ) we see that R'=N, given that R and N are independent.

3.Let T be the linear operator on R^3 which is represented in the standard ordered basis by the matrix

\displaystyle{\begin{bmatrix}2&0&0\\1&2&0\\0&0&3\end{bmatrix}.}

Let W be the null space of T-2I. Prove that W has no complementary T-invariant subspace.
Solution: Assume there exists a T-invariant subspace W' of R^3 such that R^3=W{\oplus}W'. Let \beta=\epsilon_1; then (T-2I)\beta=\epsilon_2, and since (T-2I)\epsilon_2=0, we see that (T-2I)\beta\in W. On the other hand, since \beta\in R^3, we can find \alpha\in W and \gamma\in W' such that \beta=\alpha+\gamma, so

\displaystyle{(T-2I)\beta=(T-2I)\alpha+(T-2I)\gamma\in W}

Since \alpha\in W, we have (T-2I)\alpha=0, so (T-2I)\beta=(T-2I)\gamma, which lies in W' because W' is T-invariant; but (T-2I)\beta=\epsilon_2\in W, so \epsilon_2\in W\cap W'=\{0\}, a contradiction.

4.Let T be the linear operator on F^4 which is represented in the standard ordered basis by the matrix

\displaystyle{\begin{bmatrix}c&0&0&0\\1&c&0&0\\0&1&c&0\\0&0&1&c\end{bmatrix}.}

Let W be the null space of T-cI.
( a ) Prove that W is the subspace spanned by \epsilon_4.
( b ) Find the monic generators of the ideals S(\epsilon_4;W),S(\epsilon_3;W),S(\epsilon_2;W),S(\epsilon_1;W).
Solution:
( a ) A direct computation shows that the matrix of T-cI in the standard ordered basis is the matrix

\displaystyle{\begin{bmatrix}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix}}

and we have (T-cI)(\sum_{i=1}^4a_i\epsilon_i)=a_1\epsilon_2+a_2\epsilon_3+a_3\epsilon_4, thus W consists of all vectors of the form a\epsilon_4.
( b ) As \epsilon_4 is already in W, we have f(T)\epsilon_4\in W for all f\in F[x], thus the monic generator of S(\epsilon_4;W) is 1.
We have (T-cI)\epsilon_3=\epsilon_4\in W, while no non-zero constant polynomial sends \epsilon_3 into W, so the monic generator of S(\epsilon_3;W) is x-c. Similarly (T-cI)^2\epsilon_2=\epsilon_4 while (T-cI)\epsilon_2=\epsilon_3\notin W, so the monic generator of S(\epsilon_2;W) is (x-c)^2, and the monic generator of S(\epsilon_1;W) is (x-c)^3.

5.Let T be a linear operator on the vector space V over the field F. If f is a polynomial over F and \alpha\in V, let f\alpha=f(T)\alpha. If V_1,\dots,V_k are T-invariant subspaces and V=V_1\oplus\cdots\oplus V_k, show that fV=fV_1\oplus\cdots\oplus fV_k.
Solution: For \alpha\in V, we have \alpha=\alpha_1+\cdots+\alpha_k, in which \alpha_i\in V_i for i=1,\dots,k, so

\displaystyle{f\alpha=f(T)\alpha=f(T)(\alpha_1+\cdots+\alpha_k)=\sum_{i=1}^kf(T)\alpha_i=\sum_{i=1}^kf\alpha_i}

this shows fV=fV_1+\cdots+fV_k. To see the sum is direct, note that fV_i\subseteq V_i because V_i is T-invariant; so if \beta_1+\cdots+\beta_k=0 with each \beta_i\in fV_i\subseteq V_i, the independence of V_1,\dots,V_k forces every \beta_i=0. Hence fV=fV_1\oplus\cdots\oplus fV_k.

6.Let T,V,F be as in Exercise 5. Suppose \alpha and \beta are vectors in V which have the same T-annihilator. Prove that, for any polynomial f, the vectors f\alpha and f\beta have the same T-annihilator.
Solution: Let p be the T-annihilator of both \alpha and \beta. Suppose the T-annihilator of f\alpha is q; then (qf)(T)\alpha=0, which means qf is in the ideal generated by p, so we can find a polynomial h such that qf=ph. This gives (qf)(T)\beta=h(T)p(T)\beta=0, so the T-annihilator of f\beta divides q; by the symmetric argument, q divides the T-annihilator of f\beta, and since both are monic they are equal.

7.Find the minimal polynomials and the rational forms of each of the following real matrices.

\displaystyle{\begin{bmatrix}0&-1&-1\\1&0&0\\-1&0&0\end{bmatrix},\quad \begin{bmatrix}c&0&-1\\0&c&1\\-1&1&c\end{bmatrix},\quad\begin{bmatrix}\cos\theta&\sin{\theta}\\{-\sin\theta}&{\cos\theta}\end{bmatrix}}

Solution: For the first matrix, we compute the characteristic polynomial

\displaystyle{\begin{vmatrix}x&1&1\\-1&x&0\\1&0&x\end{vmatrix}=x^3+x-x=x^3}

and the minimal polynomial is also x^3, since the square of the matrix is non-zero. Thus the rational form of this matrix is

\displaystyle{\begin{bmatrix}0&0&0\\1&0&0\\0&1&0\end{bmatrix}}

For the second matrix we compute the characteristic polynomial

\displaystyle{\begin{aligned}\begin{vmatrix}x-c&0&1\\0&x-c&-1\\1&-1&x-c\end{vmatrix}&=(x-c)[(x-c)^2-1]-(x-c)\\&=(x-c)[(x-c)^2-2]\\&=x^3-3cx^2+(3c^2-2)x-c^3+2c\end{aligned}}

and the minimal polynomial coincides with the characteristic polynomial, since its roots c,c\pm\sqrt{2} are distinct. Thus the rational form of this matrix is

\displaystyle{\begin{bmatrix}0&0&c^3-2c\\1&0&-3c^2+2\\0&1&3c\end{bmatrix}}

For the third matrix we compute the characteristic polynomial

\displaystyle{\begin{vmatrix}x-\cos\theta&-\sin\theta\\ \sin\theta&x-\cos\theta\end{vmatrix}=x^2-2\cos\theta\,x+1}

and, provided \sin\theta\neq 0, the minimal polynomial is also x^2-2\cos\theta\,x+1, so the rational form of this matrix is \begin{bmatrix}0&-1\\1&2\cos\theta\end{bmatrix}. (If \sin\theta=0 the matrix is \pm I, which is already in rational form.)
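Here is a sympy sketch checking the second computation symbolically: the characteristic polynomial of the matrix and that of its claimed companion matrix agree (sympy assumed):

```python
from sympy import Matrix, symbols, expand

x, c = symbols('x c')
M = Matrix([[ c, 0, -1],
            [ 0, c,  1],
            [-1, 1,  c]])
f = expand(M.charpoly(x).as_expr())   # x**3 - 3*c*x**2 + (3*c**2 - 2)*x - c**3 + 2*c
C = Matrix([[0, 0,  c**3 - 2*c],
            [1, 0, -3*c**2 + 2],
            [0, 1,  3*c]])
print(expand(C.charpoly(x).as_expr() - f) == 0)   # True
```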

8.Let T be the linear operator on R^3 which is represented in the standard basis by

\displaystyle{\begin{bmatrix}3&-4&-4\\-1&3&2\\2&-4&-3\end{bmatrix}.}

Find non-zero vectors \alpha_1,\dots,\alpha_r satisfying the conditions of Theorem 3.
Solution: We first compute the characteristic polynomial of T:

\displaystyle{f=\begin{vmatrix}x-3&4&4\\1&x-3&-2\\-2&4&x+3\end{vmatrix}=\begin{vmatrix}x-3&4&4\\1&x-3&-2\\0&2x-2&x-1\end{vmatrix}=(x-1)^3}

Now the matrix of T-I is obviously not zero and the matrix of (T-I)^2 is

\displaystyle{\begin{bmatrix}2&-4&-4\\-1&2&2\\2&-4&-4\end{bmatrix}\begin{bmatrix}2&-4&-4\\-1&2&2\\2&-4&-4\end{bmatrix}=0}

thus the minimal polynomial for T is p=(x-1)^2. Since T\epsilon_1=(3,-1,2) is not a scalar multiple of \epsilon_1, and \dim Z(\epsilon_1;T)\leq\deg p=2, the subspace Z(\epsilon_1;T) has dimension 2 and consists of all vectors

\displaystyle{a\epsilon_1+bT\epsilon_1=a(1,0,0)+b(3,-1,2)=(a+3b,-b,2b)}

So we can let \alpha_1=\epsilon_1. The vector \alpha_2 must be a characteristic vector of T which is not in Z(\epsilon_1;T). If \alpha=(x_1,x_2,x_3), then T\alpha=\alpha means \alpha is of the form (2a+2b,a,b); taking a=b=1 gives \alpha_2=(4,1,1).
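As a check, the companion matrix of p=x^2-2x+1 is \begin{bmatrix}0&-1\\1&2\end{bmatrix}, so in the ordered basis \{\alpha_1,T\alpha_1,\alpha_2\} the matrix of T should be the rational form. A sympy sketch (sympy assumed):

```python
from sympy import Matrix

A = Matrix([[ 3, -4, -4],
            [-1,  3,  2],
            [ 2, -4, -3]])
P = Matrix([[1,  3, 4],      # columns: alpha_1, T(alpha_1), alpha_2
            [0, -1, 1],
            [0,  2, 1]])
print(P.inv() * A * P)       # Matrix([[0, -1, 0], [1, 2, 0], [0, 0, 1]])
```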

9.Let A be the real matrix

\displaystyle{A=\begin{bmatrix}1&3&3\\3&1&3\\-3&-3&-5\end{bmatrix}.}

Find an invertible 3\times 3 real matrix P such that P^{-1}AP is in rational form.
Solution: First compute the characteristic polynomial for A

\displaystyle{\begin{aligned}\det (xI-A)&=\begin{vmatrix}x-1&-3&-3\\-3&x-1&-3\\3&3&x+5\end{vmatrix}=\begin{vmatrix}x-1&-3&-3\\-3&x-1&-3\\0&x+2&x+2\end{vmatrix}\\&=(x+2)(x^2-2x+1-9+3x-3+9)\\&=(x+2)^2(x-1)\end{aligned}}

and since

\displaystyle{(A+2I)(A-I)=\begin{bmatrix}3&3&3\\3&3&3\\-3&-3&-3\end{bmatrix}\begin{bmatrix}0&3&3\\3&0&3\\-3&-3&-6\end{bmatrix}=0}

the minimal polynomial for A is (x+2)(x-1)=x^2+x-2.
Since A\epsilon_1=(1,3,-3) is not a scalar multiple of \epsilon_1, one cyclic subspace is Z(\epsilon_1;A), spanned by \epsilon_1 and A\epsilon_1, which consists of the vectors (a+b,3b,-3b). Choose a characteristic vector associated with the characteristic value -2 outside this subspace, say (1,1,-2), and let

\displaystyle{P=\begin{bmatrix}1&1&1\\0&3&1\\0&-3&-2\end{bmatrix}}

we have \det P=-3\neq 0, thus P is invertible, and

\displaystyle{AP=\begin{bmatrix}1&3&3\\3&1&3\\-3&-3&-5\end{bmatrix}\begin{bmatrix}1&1&1\\0&3&1\\0&-3&-2\end{bmatrix}=\begin{bmatrix}1&1&-2\\3&-3&-2\\-3&3&4\end{bmatrix}}

the rational form of A is the direct sum of the companion matrices of x^2+x-2 and x+2, namely \begin{bmatrix}0&2&0\\1&-1&0\\0&0&-2\end{bmatrix}, and we have

\displaystyle{P\begin{bmatrix}0&2&0\\1&-1&0\\0&0&-2\end{bmatrix}=\begin{bmatrix}1&1&1\\0&3&1\\0&-3&-2\end{bmatrix}\begin{bmatrix}0&2&0\\1&-1&0\\0&0&-2\end{bmatrix}=\begin{bmatrix}1&1&-2\\3&-3&-2\\-3&3&4\end{bmatrix}}

thus P is the matrix we need.
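A one-line sympy confirmation (sympy assumed):

```python
from sympy import Matrix

A = Matrix([[ 1,  3,  3], [ 3,  1,  3], [-3, -3, -5]])
P = Matrix([[ 1,  1,  1], [ 0,  3,  1], [ 0, -3, -2]])
R = Matrix([[ 0,  2,  0], [ 1, -1,  0], [ 0,  0, -2]])
print(P.inv() * A * P == R)   # True
```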

10.Let F be a subfield of the complex numbers and let T be the linear operator on F^4 which is represented in the standard ordered basis by the matrix

\displaystyle{\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&a&2&0\\0&0&b&2\end{bmatrix}.}

Find the characteristic polynomial for T. Consider the cases a=b=1; a=b=0; a=0,b=1. In each of these cases, find the minimal polynomial for T and non-zero vectors \alpha_1,\dots,\alpha_r which satisfy the conditions of Theorem 3.
Solution: The characteristic polynomial for T is (x-2)^4.
In the case a=b=1, the minimal polynomial for T is (x-2)^4: for \epsilon_1=(1,0,0,0) we have (T-2I)\epsilon_1=\epsilon_2, (T-2I)^2\epsilon_1=\epsilon_3, (T-2I)^3\epsilon_1=\epsilon_4\neq 0, so we can take r=1 and \alpha_1=\epsilon_1.
In the cases a=b=0 and a=0,b=1, the minimal polynomial for T is (x-2)^2. When a=b=0 we may take \alpha_1=\epsilon_1, \alpha_2=\epsilon_3, \alpha_3=\epsilon_4, with T-annihilators (x-2)^2, x-2, x-2; when a=0,b=1 we may take \alpha_1=\epsilon_1, \alpha_2=\epsilon_3, each with T-annihilator (x-2)^2.

11.Prove that if A and B are 3\times 3 matrices over a field F, a necessary and sufficient condition that A and B be similar over F is that they have the same characteristic polynomial and the same minimal polynomial. Give an example which shows that this is false for 4\times 4 matrices.
Solution: If A and B are similar, then both are similar to the same matrix R which is in rational form, thus the minimal polynomial for A and B are the same. Let it be p.
If \deg p=3, then it is the characteristic polynomial for both A and B.
If \deg p=2, then R must have the form \begin{bmatrix}A_1&0\\0&a\end{bmatrix}, where A_1 is a 2\times 2 matrix in the rational form, then the characteristic polynomial for A and B are p(x-a).
If \deg p=1, then R must be diagonal, then it is apparent the characteristic polynomial for A and B are equal.
Conversely, if A and B have the same characteristic polynomial f and the same minimal polynomial p. We find the unique matrix in the rational form that A and B are similar to, namely R_A and R_B, then
If \deg p=3, then we must have R_A=R_B.
If \deg p=2, then R_A=\begin{bmatrix}A_1&0\\0&a\end{bmatrix}, R_B=\begin{bmatrix}A_1&0\\0&b\end{bmatrix}, where A_1 is a 2\times 2 matrix in the rational form, and f/p=x-a=x-b means a=b, so R_A=R_B.
If \deg p=1, then R_A and R_B must be diagonal; since the characteristic polynomials for A and B are equal, we have R_A=R_B.
Since A is similar to R_A and B is similar to R_B, we have A similar to B.
For a counterexample of 4\times 4 matrix, we let

\displaystyle{A=\begin{bmatrix}0&-1\\1&2\\&&1\\&&&1\end{bmatrix},\quad B=\begin{bmatrix}0&-1\\1&2\\&&0&-1\\&&1&2\end{bmatrix}}

The characteristic polynomial of A and B is (x-1)^4, the minimal polynomial of A and B is (x-1)^2, but A and B are not similar.

12.Let F be a subfield of the field of complex numbers, and let A and B be n\times n matrices over F. Prove that if A and B are similar over the field of complex numbers, then they are similar over F.
Solution: The rational form of A over F is also a matrix over C, and by the uniqueness in Theorem 5 it is the rational form of A over C; likewise for B. Thus if A and B are similar over the field of complex numbers, they have the same rational form over C, hence the same rational form over F, and so they are similar over F.

13.Let A be an n\times n matrix with complex entries. Prove that if every characteristic value of A is real, then A is similar to a matrix with real entries.
Solution: The characteristic polynomial for A is a product of linear factors, and so is the minimal polynomial p for A; since every characteristic value of A is real, p is a product of factors x-c with c real, and hence has real coefficients.
Let T be the linear operator on C^n which is represented by A in the standard basis; by the cyclic decomposition theorem there is an ordered basis \mathfrak B for C^n such that

\displaystyle{[T]_{\mathfrak B}=\begin{bmatrix}A_1\\&A_2\\&&{\ddots}\\&&&&A_r\end{bmatrix}}

where each A_i is the companion matrix of some polynomial p_i, with p_1=p and p_i|p for i=2,\dots,r. Each p_i, dividing p, is a product of factors x-c in which c is a (real) characteristic value of A, so every p_i has real coefficients; thus all A_i have real entries, and so does [T]_{\mathfrak B}. Since A is similar to [T]_{\mathfrak B}, the conclusion follows.

14.Let T be a linear operator on the finite-dimensional space V. Prove that there exists a vector \alpha\in V with this property: if f is a polynomial and f(T)\alpha=0, then f(T)=0. (Such a vector \alpha is called a separating vector for the algebra of polynomials in T.) When T has a cyclic vector, give a direct proof that any cyclic vector is a separating vector for the algebra of polynomials in T.
Solution: We first prove if \alpha is a cyclic vector of T, then \alpha is a separating vector for the algebra of polynomials in T. Suppose \dim V=n, then \alpha,\dots,T^{n-1}\alpha span V, so for any \beta\in V, we have \beta=g(T)\alpha for some polynomial g. Now if f is a polynomial and f(T)\alpha=0, we have

\displaystyle{f(T)\beta=f(T)g(T)\alpha=g(T)f(T)\alpha=g(T)0=0}

thus f(T)=0.
Now for any linear operator T on V, using the Cyclic Decomposition Theorem, we can write V=Z(\alpha_1;T)\oplus\cdots\oplus Z(\alpha_r;T). Let \alpha=\alpha_1+\cdots+\alpha_r; then f(T)\alpha=0 forces f(T)\alpha_i=0 for each i, since f(T)\alpha_i\in Z(\alpha_i;T) and the sum of the Z(\alpha_i;T) is direct. It follows that f(T)=0 on each Z(\alpha_i;T), and hence f(T)=0 on V.

15.Let F be a subfield of the field of complex numbers, and let A be an n\times n matrix over F. Let p be the minimal polynomial for A. If we regard A as a matrix over C, then A has a minimal polynomial f as an n\times n matrix over C. Use a theorem on linear equations to prove p=f. Can you also see how this follows from the cyclic decomposition theorem?
Solution: If we write p=c_0+c_1x+\cdots+x^k, then p(A)=0 means (c_0,c_1,\dots,1) is a solution for the system

\displaystyle{x_1I+\cdots+x_{k+1}A^k=0}

in the field F. Since p has coefficients in F\subseteq C and p(A)=0, the minimal polynomial f over C divides p, so m=\deg f\leq k. Now the condition that a monic polynomial of degree m annihilate A is the system

\displaystyle{x_1I+x_2A+\cdots+x_mA^{m-1}+A^m=0}

of linear equations in x_1,\dots,x_m, whose coefficients (the entries of I,A,\dots,A^m) lie in F. The coefficients of f give a solution over C, so by the final remark in Sec 1.4 this system also has a solution over F; that is, there is a monic g\in F[x] of degree m with g(A)=0. By the minimality of p over F, k=\deg p\leq m=\deg f. Hence \deg f=k, and since f divides p and both are monic, p=f.
To get this result from the cyclic decomposition theorem, notice that the rational form of A computed over F is also a rational form over C, hence by uniqueness it is the rational form of A over C; its first block is the companion matrix of the minimal polynomial, which is p when computed over F and f when computed over C, so p=f.

16.Let A be an n\times n matrix with real entries such that A^2+I=0. Prove that n is even, and if n=2k, then A is similar over the field of real numbers to a matrix of the block form \begin{bmatrix}0&-I\\I&0\end{bmatrix} where I is the k\times k identity matrix.
Solution: The minimal polynomial for A is x^2+1, by the generalized Cayley-Hamilton Theorem, the characteristic polynomial for A must be of the form f=(x^2+1)^k, so n=\deg f is even.
If n=2k, we know A is similar to one and only one matrix B in the rational form. If we write

\displaystyle{B=\begin{bmatrix}A_1\\&\ddots\\&&A_r\end{bmatrix}}

where each A_i is the companion matrix of p_i and p_{i+1} divides p_i. From the proof of Theorem 3 we know p_1 is the minimal polynomial x^2+1, and the only monic polynomials over R dividing x^2+1 are 1 and x^2+1; since 1 is the T-annihilator only of the zero vector, we see that

\displaystyle{B=\begin{bmatrix}A_1\\&\ddots\\&&A_k\end{bmatrix}, \quad A_i=\begin{bmatrix}0&-1\\1&0\end{bmatrix},i=1,\dots,k}

Let \mathscr B=\{\epsilon_1,\dots,\epsilon_n\} be a basis for R^n and let T be the linear operator with [T]_{\mathscr B}=B; then

\displaystyle{T\epsilon_{2i-1}=\epsilon_{2i},\quad T\epsilon_{2i}=-\epsilon_{2i-1},\quad i=1,\dots,k}

If we let \alpha_i=\epsilon_{2i-1} for i=1,\dots,k and \alpha_i=\epsilon_{2i-2k} for i=k+1,\dots,n, then \mathscr B'=\{\alpha_1,\dots,\alpha_n\} is a basis for R^n, and we can verify [T]_{\mathscr B'}=\begin{bmatrix}0&-I\\I&0\end{bmatrix}, which means B is similar to \begin{bmatrix}0&-I\\I&0\end{bmatrix} and so is A.
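A small sympy sketch of this change of basis in the case k=2 (sympy assumed; P below is the permutation matrix recording the reordering of the basis):

```python
from sympy import Matrix, eye, zeros

k = 2
B = zeros(2*k, 2*k)
for i in range(k):            # k copies of the companion matrix of x^2 + 1
    B[2*i, 2*i + 1] = -1
    B[2*i + 1, 2*i] = 1

P = zeros(2*k, 2*k)
for i in range(k):
    P[2*i, i] = 1             # alpha_{i+1}   = epsilon_{2i+1}
    P[2*i + 1, k + i] = 1     # alpha_{k+i+1} = epsilon_{2i+2}

target = Matrix([[zeros(k, k), -eye(k)],
                 [eye(k),       zeros(k, k)]])
print(P.inv() * B * P == target)   # True
```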

17.Let T be a linear operator on a finite-dimensional vector space V. Suppose that
( a ) the minimal polynomial for T is a power of an irreducible polynomial;
( b ) the minimal polynomial is equal to the characteristic polynomial.
Show that no non-trivial T-invariant subspace has a complementary T-invariant subspace.
Solution: Let W be a non-trivial T-invariant subspace of V, and assume there is a T-invariant subspace W' such that W\oplus W'=V. Let U=T_W and U'=T_{W'} be the restrictions of T; their minimal polynomials p and p' divide the minimal polynomial for T. Since the minimal polynomial for T is of the form q^n with q irreducible, we have p=q^r and p'=q^s; moreover the characteristic polynomials of U and U' multiply to that of T, which by (b) equals q^n, so r+s\leq n. As W is non-trivial, we have r\geq 1.
Now if s\geq 1, we can get a contradiction by the following procedure: from (b) we know that T has a cyclic vector \alpha such that the T-annihilator of \alpha is q^n, and there is \alpha_1\in W,\alpha_2\in W' such that \alpha=\alpha_1+\alpha_2, we let k=\max(r,s), then 1\leq k<n, and q^k(T)\alpha_1=q^k(T)\alpha_2=0, which means q^k(T)\alpha=0, this is a contradiction.
Thus s=0, i.e., the minimal polynomial for T_{W'} is 1, which forces W'=\{0\} and hence W=V, contradicting the assumption that W is non-trivial.

18.If T is a diagonalizable linear operator, then every T-invariant subspace has a complementary T-invariant subspace.
Solution: Since T is diagonalizable, if we let c_1,\dots,c_k be the distinct characteristic values of T and V_i=\text{null }(T-c_iI), then

\displaystyle{V=V_1\oplus\cdots\oplus V_k}

Let W be a T-invariant subspace of V, then by Exercise 10 of Section 6.8, we have

\displaystyle{W=(W\cap V_1)\oplus\cdots\oplus(W\cap V_k)}

Consider W\cap V_i: for any \beta\in W\cap V_i, we have \beta\in V_i, so T\beta=c_i\beta. Since W\cap V_i is a subspace, we can choose a basis \{\alpha_1,\dots,\alpha_{r_i}\} for it and extend it to a basis \{\alpha_1,\dots,\alpha_{s_i}\} for V_i, all of whose members are characteristic vectors associated with c_i. Let U_i be the space spanned by \{\alpha_{{r_i}+1},\dots,\alpha_{s_i}\}; then (W\cap V_i)\oplus U_i=V_i. Letting U=U_1\oplus\cdots\oplus U_k, we see that V=W\oplus U, and as each U_i is spanned by characteristic vectors it is invariant under T, hence so is U.

19.Let T be a linear operator on the finite-dimensional space V. Prove that T has a cyclic vector if and only if the following is true: Every linear operator U which commutes with T is a polynomial in T.
Solution: First suppose \alpha is a cyclic vector of T; then if \dim V=n, \{\alpha,T\alpha,\dots,T^{n-1}\alpha\} is a basis for V. Given an operator U which commutes with T, we have

\displaystyle{U\alpha=a_0\alpha+\cdots+a_{n-1}T^{n-1}\alpha=f(T)\alpha}

where f(x)=a_0+a_1x+\cdots+a_{n-1}x^{n-1}, notice that

\displaystyle{UT^k\alpha=T^kU\alpha=T^kf(T)\alpha=f(T)T^k\alpha,\quad k=0,1,\dots,n-1}

We can see that U=f(T) on a basis for V, thus on V.
Conversely, if every linear operator U which commutes with T is a polynomial in T, let the cyclic decomposition of V by T be

\displaystyle{V=Z(\alpha_1;T)\oplus\cdots\oplus Z(\alpha_r;T)}

and p_i is the T-annihilator for \alpha_i with p_{i+1}|p_i. Define U on V by setting U\beta=0 for \beta\in Z(\alpha_1;T) and U\beta=\beta for \beta\in Z(\alpha_i;T), i=2,\dots,r, extended by linearity. For any \beta\in V, we have \beta=\beta_1+\cdots+\beta_r where each \beta_i\in Z(\alpha_i;T), so

\displaystyle{\begin{aligned}UT\beta&=U(T\beta_1+T\beta_2+\cdots+T\beta_r)=U(T\beta_2+\cdots+T\beta_r)\\&=T(\beta_2+\cdots+\beta_r)=T(U\beta_2+\cdots+U\beta_r)=TU\beta\end{aligned}}

Then U commutes with T, and thus is a polynomial in T, say U=q(T). Since q(T)\alpha_1=0, we know p_1|q, which means p_i|q for i\geq 2, so \alpha_i=U\alpha_i=q(T)\alpha_i=0 for i\geq 2, which means Z(\alpha_i;T)=\{0\} for i\geq 2; so V=Z(\alpha_1;T) and T has a cyclic vector.

20.Let V be a finite-dimensional vector space over the field F, and let T be a linear operator on V. We ask when it is true that every non-zero vector in V is a cyclic vector for T. Prove that this is the case if and only if the characteristic polynomial for T is irreducible over F.
Solution: Let \dim V=n. First suppose the characteristic polynomial f for T is irreducible over F; then by the generalized Cayley-Hamilton theorem, the minimal polynomial p for T equals f and is irreducible over F. For any nonzero vector \alpha\in V, if \alpha,T\alpha,\dots,T^{n-1}\alpha were linearly dependent, there would be a non-zero g\in F[x] with \deg g<n and g(T)\alpha=0, so the T-annihilator p_{\alpha} of \alpha would satisfy 1\leq\deg p_{\alpha}<n; since p(T)\alpha=0 we have p_{\alpha}|p, contradicting the irreducibility of p (which has degree n). Hence \alpha,T\alpha,\dots,T^{n-1}\alpha are linearly independent, i.e., Z(\alpha;T)=V.
Conversely, suppose every non-zero vector in V is a cyclic vector for T, and assume the characteristic polynomial f for T is not irreducible over F. If the minimal polynomial p for T is not the same as f, then \deg p<\deg f=n; there is a vector \alpha\in V whose T-annihilator is p, so Z(\alpha;T) has dimension \deg p<n, which means \alpha is not a cyclic vector for T, a contradiction.
If p=f and f=gh where \deg g\geq1 and \deg h\geq1, then clearly \deg h<n. Let h=h_0+h_1x+\cdots+x^k; there is a vector \alpha\in V whose T-annihilator is p, thus g(T)h(T)\alpha=0 and \beta=g(T)\alpha\neq 0, since \deg g<\deg p. Notice that

\displaystyle{h_0\beta+h_1T\beta+\cdots+T^k\beta=h(T)\beta=h(T)g(T)\alpha=0}

this shows \beta,T\beta,\dots,T^k\beta are linearly dependent; thus by Theorem 1, \dim Z(\beta;T)\leq k=\deg h<n, so \beta is not a cyclic vector for T, again a contradiction.

21.Let A be an n\times n matrix with real entries. Let T be the linear operator on R^n which is represented by A in the standard ordered basis, and let U be the linear operator on C^n which is represented by A in the standard ordered basis. Use the result of Exercise 20 to prove the following: If the only subspaces invariant under T are R^n and the zero subspace, then U is diagonalizable.
Solution: Since A is real, T and U have the same characteristic polynomial, f=\det(xI-A). Given any nonzero vector \alpha\in R^n, the cyclic subspace Z(\alpha;T) is a nonzero T-invariant subspace, so Z(\alpha;T)=R^n; thus every nonzero vector is a cyclic vector for T, and by Exercise 20, f is irreducible over R. Hence f is either linear or an irreducible quadratic, and in both cases f has distinct roots in C. The minimal polynomial of U divides f, so over C it is a product of distinct linear factors, and therefore U is diagonalizable.

Linear Algebra (2ed) Hoffman & Kunze 7.1

This section introduces the subspace Z(\alpha;T) generated from a vector \alpha by the polynomials in T, together with the notion of the T-annihilator. Theorem 1 gives the relation between the degree of the T-annihilator and the dimension of the cyclic subspace, shows how to construct a basis from a cyclic vector, and identifies the T-annihilator as the minimal polynomial of the operator restricted to Z(\alpha;T). The companion matrix of the T-annihilator is then introduced on subspaces admitting a cyclic vector. Theorem 2 and its corollaries show that U having a cyclic vector is equivalent to U being represented, in some basis, by the companion matrix of its minimal polynomial, and that for the companion matrix of a polynomial both the minimal and the characteristic polynomial are that polynomial itself.

Exercises

1.Let T be a linear operator on F^2. Prove that any non-zero vector which is not a characteristic vector for T is a cyclic vector for T. Hence, prove that either T has a cyclic vector or T is a scalar multiple of the identity operator.
Solution: If \alpha is not a characteristic vector for T, then T\alpha is not in the space spanned by \alpha, which means T\alpha and \alpha are linearly independent, thus Z(\alpha;T)=F^2. Hence, if T has no cyclic vectors, then all non-zero vectors in F^2 are characteristic vectors for T, thus T\epsilon_1=k\epsilon_1 and T\epsilon_2=l\epsilon_2 for some k,l\in F. So T(\epsilon_1+\epsilon_2)=k\epsilon_1+l\epsilon_2=m(\epsilon_1+\epsilon_2), which gives k=l and T is a scalar multiple of the identity operator.

2.Let T be the linear operator on R^3 which is represented in the standard ordered basis by the matrix

\displaystyle{\begin{bmatrix}2&0&0\\0&2&0\\0&0&-1\end{bmatrix}.}

Prove that T has no cyclic vector. What is the T-cyclic subspace generated by the vector (1,-1,3)?
Solution: Let \alpha=a\epsilon_1+b\epsilon_2+c\epsilon_3, then

\displaystyle{T\alpha=2a\epsilon_1+2b\epsilon_2-c\epsilon_3,\quad T^2\alpha=4a\epsilon_1+4b\epsilon_2+c\epsilon_3}

this means T^2\alpha-T\alpha-2\alpha=0 for every \alpha, so \dim Z(\alpha;T)\leq 2<3 for all \alpha, and thus T has no cyclic vector by Theorem 1.
The T-cyclic subspace generated by the vector (1,-1,3) is the space spanned by (1,-1,3) and (2,-2,-3).

3.Let T be the linear operator on C^3 which is represented in the standard ordered basis by the matrix

\displaystyle{\begin{bmatrix}1&i&0\\-1&2&-i\\0&1&1\end{bmatrix}.}

Find the T-annihilator of the vector (1,0,0). Find the T-annihilator of (1,0,i).
Solution: We have

\displaystyle{T\begin{bmatrix}1\\0\\0\end{bmatrix}=\begin{bmatrix}1&i&0\\-1&2&-i\\0&1&1\end{bmatrix}\begin{bmatrix}1\\0\\0\end{bmatrix}=\begin{bmatrix}1\\-1\\0\end{bmatrix} ,T^2\begin{bmatrix}1\\0\\0\end{bmatrix}=T\begin{bmatrix}1\\-1\\0\end{bmatrix}=\begin{bmatrix}1-i\\-3\\-1\end{bmatrix}}

They are linearly independent, thus the degree of p_{(1,0,0)} is 3, as

\displaystyle{T^3\begin{bmatrix}1\\0\\0\end{bmatrix}=T\begin{bmatrix}1-i\\-3\\-1\end{bmatrix}=\begin{bmatrix}1-4i\\2i-7\\-4\end{bmatrix}}

We have T^3\alpha=4T^2\alpha-(2i+5)T\alpha+(2i+2)\alpha (checking componentwise), i.e., [T^3-4T^2+(2i+5)T-(2i+2)I](1,0,0)=0, thus the T-annihilator of the vector (1,0,0) is p=x^3-4x^2+(2i+5)x-(2i+2).
We have

\displaystyle{T\begin{bmatrix}1\\0\\i\end{bmatrix}=\begin{bmatrix}1&i&0\\-1&2&-i\\0&1&1\end{bmatrix}\begin{bmatrix}1\\0\\i\end{bmatrix}=\begin{bmatrix}1\\0\\i\end{bmatrix}}

Thus the T-annihilator of (1,0,i) is x-1.
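Both annihilators can be verified mechanically; a sympy sketch (sympy assumed, I being sympy's imaginary unit):

```python
from sympy import Matrix, I, eye, zeros

T = Matrix([[ 1, I,  0],
            [-1, 2, -I],
            [ 0, 1,  1]])
a = Matrix([1, 0, 0])
p_T = T**3 - 4*T**2 + (2*I + 5)*T - (2*I + 2)*eye(3)
print(p_T * a == zeros(3, 1))            # True
b = Matrix([1, 0, I])
print((T - eye(3)) * b == zeros(3, 1))   # True
```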

4.Prove that if T^2 has a cyclic vector, then T has a cyclic vector. Is the converse true?
Solution: If Z(\alpha;T^2)=V, then Z(\alpha;T)=V, since every vector of the form g(T^2)\alpha, g\in F[x], belongs to Z(\alpha;T).
The converse is not true. Let T on R^2 be the operator T(x,y)=(y,0); then (1,1) is a cyclic vector for T, since (1,1) and T(1,1)=(1,0) are linearly independent. But T^2=0, so Z(\alpha;T^2)=\text{span}(\alpha) for every \alpha, and T^2 has no cyclic vector.

5.Let V be an n-dimensional vector space over the field F, and let N be a nilpotent linear operator on V. Suppose N^{n-1}\neq 0, and let \alpha be any vector in V such that N^{n-1}\alpha\neq 0. Prove that \alpha is a cyclic vector for N. What exactly is the matrix of N in the ordered basis \{\alpha,N\alpha,\dots,N^{n-1}\alpha\}?
Solution: Since N is nilpotent, the characteristic polynomial of N is x^n, and since N^{n-1}\neq 0, the minimal polynomial of N is x^n. Now p_{\alpha} divides x^n, and N^{n-1}\alpha\neq 0, so p_{\alpha}=x^n; hence \deg p_{\alpha}=n=\dim Z(\alpha;N), which means Z(\alpha;N)=V and \alpha is a cyclic vector for N. The matrix of N in the ordered basis \{\alpha,N\alpha,\dots,N^{n-1}\alpha\} is

\displaystyle{\begin{bmatrix}0\\1&\ddots\\&\ddots&\ddots\\&&1&0\end{bmatrix}}

6.Give a direct proof that if A is the companion matrix of the monic polynomial p, then p is the characteristic polynomial for A.
Solution: By definition, if p=c_0+c_1x+\cdots+c_{k-1}x^{k-1}+x^k, then

\displaystyle{A=\begin{bmatrix}0&0&0&\cdots&0&-c_0\\1&0&0&\cdots&0&-c_1\\0&1&0&\cdots&0&-c_2\\ \vdots&\vdots&\vdots&&\vdots&\vdots\\0&0&0&\cdots&1&-c_{k-1}\end{bmatrix}}

thus, adding x times each row to the row above it (starting from the bottom row) and then expanding along the first row,

\displaystyle{\begin{aligned}\det(xI-A)&=\begin{vmatrix}x&0&0&\cdots&0&c_0\\-1&x&0&\cdots&0&c_1\\0&-1&x&\cdots&0&c_2\\ \vdots&\vdots&\vdots&&\vdots&\vdots\\0&0&0&\cdots&-1&x+c_{k-1}\end{vmatrix}\\&=\begin{vmatrix}0&0&0&\cdots&0&c_0+c_1x+\cdots+c_{k-1}x^{k-1}+x^k\\-1&0&0&\cdots&0&c_1+c_2x+\cdots+c_{k-1}x^{k-2}+x^{k-1}\\0&-1&0&\cdots&0&c_2+c_3x+\cdots+c_{k-1}x^{k-3}+x^{k-2}\\ \vdots&\vdots&\vdots&&\vdots&\vdots\\0&0&0&\cdots&-1&x+c_{k-1}\end{vmatrix}\\&=(-1)^{1+k}p\cdot(-1)^{k-1}=p\end{aligned}}
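A quick sympy check of this identity for a sample monic polynomial (sympy assumed; the coefficient list c is an arbitrary example):

```python
from sympy import Matrix, symbols, expand

x = symbols('x')
c = [2, 3, 1]   # c_0, c_1, c_2 of the sample p = x^3 + x^2 + 3x + 2
k = len(c)
A = Matrix(k, k, lambda i, j: 1 if i == j + 1
           else (-c[i] if j == k - 1 else 0))
print(expand(A.charpoly(x).as_expr()))   # x**3 + x**2 + 3*x + 2
```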

7.Let V be an n-dimensional vector space, and let T be a linear operator on V. Suppose that T is diagonalizable.
( a ) If T has a cyclic vector, show that T has n distinct characteristic values.
( b ) If T has n distinct characteristic values, and if \{\alpha_1,\dots,\alpha_n\} is a basis of characteristic vectors for T, show that \alpha=\alpha_1+\cdots+\alpha_n is a cyclic vector for T.
Solution:
( a ) If T has a cyclic vector \alpha, then \{\alpha,T\alpha,\dots,T^{n-1}\alpha\} is a basis for V. Since T is diagonalizable, let \{\alpha_1,\dots,\alpha_n\} be a basis of characteristic vectors for T, with T\alpha_i=k_i\alpha_i for i=1,\dots,n. If we write \alpha=\sum_{i=1}^na_i\alpha_i, then T^j\alpha=\sum_{i=1}^na_ik_i^j\alpha_i; it follows that the matrix whose columns give the coordinates of \alpha,T\alpha,\dots,T^{n-1}\alpha in the basis \{\alpha_1,\dots,\alpha_n\} is

\displaystyle{P=\begin{bmatrix}a_1&k_1a_1&\cdots&k_1^{n-1}a_1\\ \vdots&\vdots&\vdots&\vdots\\a_n&k_na_n&\cdots&k_n^{n-1}a_n\end{bmatrix}}

Since \{\alpha,T\alpha,\dots,T^{n-1}\alpha\} is a basis, P is invertible, so \det P=(a_1\cdots{a_n})\prod_{i<j}(k_j-k_i)\neq 0 (a Vandermonde determinant), which means k_i\neq k_j when i\neq j; thus T has n distinct characteristic values.
( b ) With the notations in (a) with a_i=1 for i=1,\dots,n, we can have

\displaystyle{\begin{bmatrix}\alpha\\T\alpha\\\vdots\\T^{n-1}\alpha\end{bmatrix}=\begin{bmatrix}1&k_1&\cdots&k_1^{n-1}\\ \vdots&\vdots&\vdots&\vdots\\1&k_n&\cdots&k_n^{n-1}\end{bmatrix}\begin{bmatrix}\alpha_1\\ \vdots\\ \alpha_n\end{bmatrix}=P\begin{bmatrix}\alpha_1\\ \vdots\\ \alpha_n\end{bmatrix}}

since \det P=\prod_{i<j}(k_j-k_i)\neq 0, we know that \alpha,T\alpha,\dots,T^{n-1}\alpha are linearly independent, thus a basis for V, so Z(\alpha;T)=V.

8.Let T be a linear operator on the finite-dimensional vector space V. Suppose T has a cyclic vector. Prove that if U is any linear operator which commutes with T, then U is a polynomial in T.
Solution: If T has a cyclic vector \alpha, then \{\alpha,T\alpha,\dots,T^{n-1}\alpha\} is a basis for V. If U satisfies UT=TU, then since U\alpha\in V, there are scalars g_0,g_1,\dots,g_{n-1} such that U\alpha=g_0\alpha+g_1T\alpha+\cdots+g_{n-1}T^{n-1}\alpha. Let g(x)=g_0+g_1x+\cdots+g_{n-1}x^{n-1}; then U\alpha=g(T)\alpha.
Now for any \beta \in V, we have \beta=b_0\alpha+b_1T\alpha+\cdots+b_{n-1}T^{n-1}\alpha for some b_0,b_1,\dots,b_{n-1}, and

\displaystyle{\begin{aligned}U\beta&=U(b_0\alpha+b_1T\alpha+\cdots+b_{n-1}T^{n-1}\alpha)\\&=b_0U\alpha+b_1TU\alpha+\cdots+b_{n-1}T^{n-1}U\alpha\\&=(b_0I+b_1T+\cdots+b_{n-1}T^{n-1})U\alpha\\&=(b_0I+b_1T+\cdots+b_{n-1}T^{n-1})g(T)\alpha\\&=g(T)(b_0\alpha+b_1T\alpha+\cdots+b_{n-1}T^{n-1}\alpha)=g(T)\beta\end{aligned}}

which means U=g(T).