Linear Algebra (2ed) Hoffman & Kunze 6.6

This section introduces the classical notions of direct sum and projection. A direct sum can be pictured by analogy with k-dimensional coordinates; the essence is that the summands are "mutually non-interfering." In a direct sum, the bases of the subspaces, put together, form a basis of the sum space. Projection is one of the more important concepts in linear algebra, embodying the idea of decomposition into linearly independent parts. Instead of the P_U-style definition used in many other textbooks, which is conceptually easy to picture but slightly informal as a definition, this section defines a projection by the condition E^2=E; such an E is the projection onto \text{range }E. Theorem 9 reveals the relationship between projections and direct sums: every direct sum gives rise to corresponding projections, and conversely.

Exercises

1.Let V be a finite-dimensional vector space and let W_1 be any subspace of V. Prove that there is a subspace W_2 of V such that V=W_1\oplus W_2.
Solution: Let \{\alpha_1,\dots,\alpha_k\} be an ordered basis for W_1 and extend it to an ordered basis \{\alpha_1,\dots,\alpha_n\} of V; let W_2 be the subspace spanned by \alpha_{k+1},\dots,\alpha_n. Then W_1+W_2 contains every basis vector, so W_1+W_2=V; and if \alpha\in W_1\cap W_2, expressing \alpha both in terms of \alpha_1,\dots,\alpha_k and in terms of \alpha_{k+1},\dots,\alpha_n and subtracting shows, by linear independence, that \alpha=0. Hence V=W_1\oplus W_2. (A mechanical version of this construction is sketched below.)
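
The extension step can be carried out numerically; here is an editorial numpy sketch (the subspace W_1 below is an arbitrary example in R^4, not from the text):

```python
import numpy as np

# Editorial sketch: extend a basis of W1 to a basis of V = R^4 by greedily
# adjoining standard basis vectors that raise the rank; the adjoined
# vectors then span a complement W2.
W1 = [np.array([1.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 1.0])]

basis = list(W1)
for e in np.eye(4):
    if np.linalg.matrix_rank(np.array(basis + [e])) > len(basis):
        basis.append(e)

W2 = basis[len(W1):]
print("basis of a complement W2:", [v.tolist() for v in W2])
```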

2.Let V be a finite-dimensional vector space and let W_1,\dots,W_k be subspaces of V such that

\displaystyle{V=W_1+\cdots+W_k\quad\text{ and }\quad \dim V=\dim W_1+\cdots+\dim W_k.}

Prove that V=W_1\oplus\cdots\oplus W_k.
Solution: Let \mathfrak B_i be an ordered basis for W_i; then the sequence \mathfrak B=(\mathfrak B_1,\dots,\mathfrak B_k) contains \dim V vectors and spans V (because V=W_1+\cdots+W_k). It follows that \mathfrak B is a basis for V, so the vectors in \mathfrak B are linearly independent. Now if \alpha_1+\cdots+\alpha_k=0 with \alpha_i\in W_i, then writing each \alpha_i as a linear combination of the vectors of \mathfrak B_i expresses 0 as a linear combination of \mathfrak B, so all the coefficients are 0 and hence \alpha_i=0 for all i. Thus V=W_1\oplus\cdots\oplus W_k.

3.Find a projection E which projects R^2 onto the subspace spanned by (1,-1) along the subspace spanned by (1,2).
Solution: Writing (x_1,x_2)=a(1,-1)+b(1,2) and solving gives a=\frac{1}{3}(2x_1-x_2), b=\frac{1}{3}(x_1+x_2), so E(x_1,x_2)=a(1,-1), i.e.

\displaystyle{E(x_1,x_2)=\left(\dfrac{2}{3}x_1-\dfrac{1}{3}x_2,\ -\dfrac{2}{3}x_1+\dfrac{1}{3}x_2\right)}
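
As a quick numeric sanity check (an editorial numpy sketch, not part of the original solution):

```python
import numpy as np

# The matrix of E in the standard basis, read off from the formula above.
E = np.array([[ 2/3, -1/3],
              [-2/3,  1/3]])

assert np.allclose(E @ E, E)              # E^2 = E, so E is a projection
assert np.allclose(E @ [1, -1], [1, -1])  # fixes the range span{(1,-1)}
assert np.allclose(E @ [1, 2], [0, 0])    # annihilates span{(1,2)}
print("E projects onto span{(1,-1)} along span{(1,2)}")
```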

4.If E_1 and E_2 are projections onto independent subspaces, then E_1+E_2 is a projection. True or false?
Solution: False. The hypothesis only constrains the ranges W_1=\text{range }E_1 and W_2=\text{range }E_2 to be independent; it says nothing about the subspaces along which E_1 and E_2 project. Expanding,

\displaystyle{(E_1+E_2)^2=E_1^2+E_1E_2+E_2E_1+E_2^2=E_1+E_2+(E_1E_2+E_2E_1)}

so E_1+E_2 is a projection if and only if E_1E_2+E_2E_1=0, which need not hold. For a counterexample in R^2, let E_1 be the projection onto W_1=\text{span}\{(1,0)\} along \text{span}\{(0,1)\}, and let E_2 be the projection onto W_2=\text{span}\{(0,1)\} along \text{span}\{(1,1)\} (writing (x,y)=\alpha(0,1)+\beta(1,1) gives \beta=x, so E_2(x,y)=(0,y-x)). Then W_1 and W_2 are independent, but

\displaystyle{E_1=\begin{bmatrix}1&0\\0&0\end{bmatrix},\quad E_2=\begin{bmatrix}0&0\\-1&1\end{bmatrix},\quad (E_1+E_2)^2=\begin{bmatrix}1&0\\-2&1\end{bmatrix}\neq \begin{bmatrix}1&0\\-1&1\end{bmatrix}=E_1+E_2}

(If one adds the hypothesis E_1E_2=E_2E_1=0, i.e. each projection annihilates the range of the other, then the expansion above shows that E_1+E_2 is indeed a projection.)
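
A quick check of the counterexample (an editorial numpy sketch):

```python
import numpy as np

# E1 projects onto span{(1,0)} along span{(0,1)};
# E2 projects onto span{(0,1)} along span{(1,1)}.
E1 = np.array([[1, 0],
               [0, 0]])
E2 = np.array([[ 0, 0],
               [-1, 1]])

assert np.array_equal(E1 @ E1, E1) and np.array_equal(E2 @ E2, E2)  # both projections
S = E1 + E2
print(np.array_equal(S @ S, S))  # False: E1 + E2 is not a projection
```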

5.If E is a projection and f is a polynomial, then f(E)=aI+bE. What are a and b in terms of the coefficients of f?
Solution: Let f=c_0+c_1x+\cdots+c_nx^n. For a projection E we have E^k=E for all k\geq 1 (by induction from E^2=E), so f(E)=c_0I+(c_1+\cdots+c_n)E, i.e. a=c_0 and b=\sum_{i=1}^nc_i.
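
A numeric spot check (editorial; the polynomial below is an arbitrary choice, and E is the projection from Exercise 3):

```python
import numpy as np

E = np.array([[ 2/3, -1/3],
              [-2/3,  1/3]])
c = [2, 3, 5, 7]  # coefficients c0, c1, c2, c3 of f

fE = sum(ci * np.linalg.matrix_power(E, i) for i, ci in enumerate(c))
assert np.allclose(fE, c[0] * np.eye(2) + sum(c[1:]) * E)
print("f(E) = c0*I + (c1 + ... + cn)*E")
```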

6.True or false? If a diagonalizable operator has only the characteristic values 0 and 1, it is a projection.
Solution: True. Let E be a diagonalizable operator which has only the characteristic values 0 and 1; then there is an invertible P such that

\displaystyle{P^{-1}EP=\begin{bmatrix}1&\\&\ddots&\\&&1\\&&&0\\&&&&\ddots&\\&&&&&0\end{bmatrix}}

Notice that this diagonal matrix is idempotent, thus

\displaystyle{(P^{-1}EP)^2=P^{-1}EPP^{-1}EP=P^{-1}EP\implies P^{-1}E^2P=P^{-1}EP\implies E^2=E}

7.Prove that if E is the projection on R along N, then (I-E) is the projection on N along R.
Solution: If E is the projection on R along N, we know V=R\oplus N. Consider I-E: any \alpha can be written \alpha=\beta+\gamma with \beta\in N,\gamma\in R, and then E\beta=0, E\gamma=\gamma, thus

\displaystyle{\begin{aligned}(I-E)\alpha&=(I-E)\beta+(I-E)\gamma=\beta\\(I-E)^2\alpha &=(I-E)\beta=\beta=(I-E)\alpha\end{aligned}}

thus (I-E)^2=I-E and so I-E is a projection. Also we have

\displaystyle{\begin{aligned}\beta \in N&\Leftrightarrow E\beta=0\Leftrightarrow(I-E)\beta=\beta\Leftrightarrow\beta\in\text{range }(I-E)\\\gamma\in R&\Leftrightarrow E\gamma=\gamma\Leftrightarrow(I-E)\gamma=0\Leftrightarrow\gamma\in\text{null }(I-E)\end{aligned}}

thus I-E is a projection on N along R.

8.Let E_1,\dots,E_k be linear operators on the space V such that E_1+\cdots +E_k=I.
( a ) Prove that if E_iE_j=0 for i\neq j, then E_i^2=E_i for each i.
( b ) In the case k=2, prove the converse of (a). That is, if E_1+E_2=I and E_1^2=E_1,E_2^2=E_2, then E_1E_2=0.
Solution:
( a ) We have E_i=E_iI=E_i(E_1+\cdots +E_k)=\sum_{j=1}^kE_iE_j=E_iE_i=E_i^2, since E_iE_j=0 whenever j\neq i.
( b ) We have E_1^2=E_1=E_1I=E_1(E_1+E_2)=E_1^2+E_1E_2, thus E_1E_2=0.

9.Let V be a real vector space and E an idempotent linear operator on V, i.e., a projection. Prove that (I+E) is invertible. Find (I+E)^{-1}.
Solution: We have

\displaystyle{\begin{aligned}(I+E)\left(I-\dfrac{1}{2}E\right)&=I+\dfrac{1}{2}E-\dfrac{1}{2}E^2=I\\\left(I-\dfrac{1}{2}E\right)(I+E)&=I+\dfrac{1}{2}E-\dfrac{1}{2}E^2=I\end{aligned}}

thus (I+E)^{-1}=I-\dfrac{1}{2}E.
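
More generally (an editorial remark going slightly beyond what the exercise asks), the same computation inverts I+cE for any scalar c\neq -1:

\displaystyle{(I+cE)\left(I-\frac{c}{1+c}E\right)=I+cE-\frac{c}{1+c}E-\frac{c^2}{1+c}E^2=I+\left(c-\frac{c+c^2}{1+c}\right)E=I}

Taking c=1 recovers (I+E)^{-1}=I-\dfrac{1}{2}E.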

10.Let F be a subfield of the complex numbers (or, a field of characteristic zero). Let V be a finite-dimensional vector space over F. Suppose that E_1,\dots,E_k are projections of V and that E_1+\cdots+E_k=I. Prove that E_iE_j=0 for i\neq j.
Solution: Each E_i satisfies E_i^2=E_i, so the minimal polynomial of E_i divides x(x-1); in particular E_i is diagonalizable and its only possible characteristic values are 0 and 1. Diagonalizing, P^{-1}E_iP=\text{diag }(1,\dots,1,0,\dots,0), where the number of 1's on the diagonal is \dim\text{range }E_i, so \text{trace }E_i=\dim\text{range }E_i (since F has characteristic zero, this equality of nonnegative integers makes sense inside F). Now we have

\displaystyle{\dim V=\text{trace}(I)=\text{trace}(E_1+\cdots+E_k)=\dim \text{range }E_1+\cdots+\dim\text{range }E_k}

and E_1+\cdots+E_k=I gives V=\text{range }E_1+\cdots+\text{range }E_k, since \alpha=E_1\alpha+\cdots+E_k\alpha for every \alpha\in V. From Exercise 2 we get V=\text{range }E_1\oplus\cdots\oplus\text{range }E_k; moreover, the decomposition \alpha=E_1\alpha+\cdots+E_k\alpha shows that each E_i is exactly the projection onto \text{range }E_i associated with this direct sum, so E_iE_j=0 for i\neq j by Theorem 9.

11.Let V be a vector space, let W_1,\dots,W_k be subspaces of V, and let

\displaystyle{V_j=W_1+\cdots+W_{j-1}+W_{j+1}+\cdots+W_k.}

Suppose that V=W_1\oplus\cdots\oplus W_k. Prove that the dual space V^* has the direct-sum decomposition V^*=V^0_1\oplus \cdots\oplus V^0_k.
Solution: Let \mathfrak B_i be a basis for W_i; since V=W_1\oplus\cdots\oplus W_k, \mathfrak B=(\mathfrak B_1,\dots,\mathfrak B_k) is a basis for V. Let \mathfrak B^* be the dual basis for \mathfrak B, and let \mathfrak F_i consist of the functionals in \mathfrak B^* dual to the vectors in \mathfrak B_i, so that \mathfrak B^*=(\mathfrak F_1,\dots,\mathfrak F_k). We prove that \mathfrak F_i is a basis for V_i^0. First, each functional in \mathfrak F_i vanishes on every basis vector outside \mathfrak B_i, hence on each W_j with j\neq i, hence on their sum V_i; so \mathfrak F_i\subseteq V_i^0. Conversely, let f\in V_i^0. Writing f in the dual basis, f=\sum_{\alpha\in\mathfrak B}f(\alpha)\alpha^*, and f(\alpha)=0 for every \alpha\notin\mathfrak B_i, since those \alpha lie in V_i. Hence f is a linear combination of \mathfrak F_i, so \mathfrak F_i spans V_i^0, and being linearly independent it is a basis for V_i^0.
In particular V^*=V_1^0+\cdots+V_k^0, since \mathfrak B^*=(\mathfrak F_1,\dots,\mathfrak F_k) spans V^*. Now if 0=f_1+\cdots+f_k where each f_i\in V_i^0, then writing each f_i as a linear combination of \mathfrak F_i expresses 0 as a linear combination of the basis \mathfrak B^*, so all the coefficients are 0, which means f_i=0 for all i. Hence V^*=V_1^0\oplus\cdots\oplus V_k^0.

Linear Algebra (2ed) Hoffman & Kunze 6.5

The central problem this section settles: given a commuting family of operators, if every operator in the family is triangulable (diagonalizable), then there is a single basis with respect to which the matrices of all operators in the family are triangular (diagonal).

Exercises

1.Find an invertible real matrix P such that P^{-1}AP and P^{-1}BP are both diagonal, where A and B are the real matrices
( a ) A=\begin{bmatrix}1&2\\0&2\end{bmatrix},\qquad B=\begin{bmatrix}3&-8\\0&-1\end{bmatrix}
( b ) A=\begin{bmatrix}1&1\\1&1\end{bmatrix}, \qquad B=\begin{bmatrix}1&a\\a&1\end{bmatrix}
Solution:
( a ) We can verify that A and B commute. The characteristic values of A are 1 and 2, and \alpha_1=(1,0), \alpha_2=(2,1) span \text{null }(A-I) and \text{null }(A-2I) respectively. We have

\displaystyle{B\alpha_1=\begin{bmatrix}3&-8\\0&-1\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}3\\0\end{bmatrix},\quad B\alpha_2=\begin{bmatrix}3&-8\\0&-1\end{bmatrix}\begin{bmatrix}2\\1\end{bmatrix}=\begin{bmatrix}-2\\-1\end{bmatrix}}

so B\alpha_1=3\alpha_1 and B\alpha_2=-\alpha_2, and the matrix P=\begin{bmatrix}1&2\\0&1\end{bmatrix} gives P^{-1}AP=\begin{bmatrix}1&0\\0&2\end{bmatrix} and P^{-1}BP=\begin{bmatrix}3&0\\0&-1\end{bmatrix}.
( b ) We can verify that AB=BA=\begin{bmatrix}1+a&1+a\\1+a&1+a\end{bmatrix}. The characteristic values of A are 0 and 2, and \alpha_1=(1,-1), \alpha_2=(1,1) span \text{null }A and \text{null }(A-2I) respectively. We have

\displaystyle{B\alpha_1=\begin{bmatrix}1&a\\a&1\end{bmatrix}\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}1-a\\a-1\end{bmatrix},\quad B\alpha_2=\begin{bmatrix}1&a\\a&1\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}1+a\\1+a\end{bmatrix}}

so B\alpha_1=(1-a)\alpha_1 and B\alpha_2=(1+a)\alpha_2, and the matrix P=\begin{bmatrix}1&1\\-1&1\end{bmatrix} gives P^{-1}AP=\begin{bmatrix}0&0\\0&2\end{bmatrix} and P^{-1}BP=\begin{bmatrix}1-a&0\\0&1+a\end{bmatrix}.
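
The computations in ( a ) can be spot-checked numerically (an editorial numpy sketch):

```python
import numpy as np

# With P = [alpha1 alpha2], both matrices become diagonal.
A = np.array([[1, 2], [0, 2]])
B = np.array([[3, -8], [0, -1]])
P = np.array([[1, 2], [0, 1]])
Pinv = np.linalg.inv(P)

print(Pinv @ A @ P)  # diag(1, 2)
print(Pinv @ B @ P)  # diag(3, -1)
```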

2.Let \mathfrak F be a commuting family of 3\times 3 complex matrices. How many linearly independent matrices can \mathfrak F contain? What about the n\times n case?
Solution: All the matrices in \mathfrak F can be triangulated simultaneously, so after a single similarity \mathfrak F lies inside the space of upper-triangular matrices, which has dimension n(n+1)/2 (that is, 6 when n=3); hence \mathfrak F contains at most n(n+1)/2 linearly independent matrices. This bound is not attained, because upper-triangular matrices need not commute with one another: E^{1,2}E^{2,3}=E^{1,3} while E^{2,3}E^{1,2}=0, where E^{p,q} denotes the matrix whose single nonzero entry is a 1 in row p, column q (cf. Exercise 13 of Section 6.4). In fact, the span of a commuting family still commutes, and together with I it generates a commutative subalgebra of the n\times n matrices; a theorem of Schur says that such a subalgebra has dimension at most \lfloor n^2/4\rfloor+1. Hence \mathfrak F can contain at most \lfloor n^2/4\rfloor+1 linearly independent matrices, i.e. 3 in the 3\times 3 case. The bound is attained: for n=3 take \{I,E^{1,3},E^{2,3}\}, and in general take the matrices \begin{bmatrix}aI&B\\0&aI\end{bmatrix} with B an arbitrary \lceil n/2\rceil\times\lfloor n/2\rfloor block.
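
A small check of the 3\times 3 example (an editorial numpy sketch):

```python
import numpy as np
from itertools import combinations

# {I, E13, E23} is a commuting family of 3 linearly independent 3x3
# matrices, attaining Schur's bound floor(9/4) + 1 = 3.
I3  = np.eye(3)
E13 = np.zeros((3, 3)); E13[0, 2] = 1
E23 = np.zeros((3, 3)); E23[1, 2] = 1

family = [I3, E13, E23]
assert all(np.array_equal(X @ Y, Y @ X) for X, Y in combinations(family, 2))
# Linear independence: flatten the matrices into rows and check the rank.
assert np.linalg.matrix_rank(np.array([M.ravel() for M in family])) == 3
print("3 linearly independent commuting 3x3 matrices")
```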

3.Let T be a linear operator on an n-dimensional space, and suppose that T has n distinct characteristic values. Prove that any linear operator which commutes with T is a polynomial in T.
Solution: Suppose T has n distinct characteristic values c_1,\dots,c_n; then there are nonzero vectors \alpha_1,\dots,\alpha_n such that T\alpha_i=c_i\alpha_i, and \{\alpha_1,\dots,\alpha_n\} is a basis for the space, since characteristic vectors belonging to distinct characteristic values are linearly independent. Let U be any linear operator which commutes with T, then

\displaystyle{TU\alpha_i=UT\alpha_i=c_iU\alpha_i,\quad i=1,\dots,n}

which shows that U\alpha_i lies in \text{null}(T-c_iI); since T has n distinct characteristic values on an n-dimensional space, this null space is spanned by \alpha_i alone, so U\alpha_i=k_i\alpha_i for some k_i, and the matrix of U in the basis \{\alpha_1,\dots,\alpha_n\} is \text{diag }(k_1,\dots,k_n). Now since c_1,\dots,c_n are distinct, the system of equations

\displaystyle{x_0+x_1c_i+\cdots+x_{n-1}c_i^{n-1}=k_i,\quad i=1,\dots,n}

has a unique solution (a_0,\dots,a_{n-1}): its coefficient matrix is the Vandermonde matrix of the distinct scalars c_1,\dots,c_n, which is invertible. This means if we let p(x)=a_0+a_1x+\cdots+a_{n-1}x^{n-1}, then

\displaystyle{p(T)\alpha_i=\left(\sum_{j=0}^{n-1}a_jT^j\right)\alpha_i=\sum_{j=0}^{n-1}a_jT^j\alpha_i=\sum_{j=0}^{n-1}a_jc_i^j\alpha_i=k_i\alpha_i,\quad i=1,\dots,n}

thus U=p(T).
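
The construction can be illustrated numerically (an editorial numpy sketch; the values c_i and k_i below are arbitrary choices, not from the text):

```python
import numpy as np

# In the basis of characteristic vectors, T = diag(c) and any commuting
# U = diag(k); solve the Vandermonde system for the coefficients of p.
c = np.array([1.0, 2.0, 3.0])   # distinct characteristic values of T
k = np.array([5.0, -1.0, 4.0])  # U alpha_i = k_i alpha_i
a = np.linalg.solve(np.vander(c, increasing=True), k)  # a_0, a_1, a_2

T, U = np.diag(c), np.diag(k)
pT = sum(a_j * np.linalg.matrix_power(T, j) for j, a_j in enumerate(a))
assert np.allclose(pT, U)  # U = p(T)
print("coefficients of p:", a)
```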

4.Let A,B,C,D be n\times n complex matrices which commute. Let E be the 2n\times 2n matrix E=\begin{bmatrix}A&B\\C&D\end{bmatrix}. Prove that \det E=\det (AD-BC).
Solution: Since A,B,C,D commute, there is an invertible n\times n matrix P such that P^{-1}AP,P^{-1}BP,P^{-1}CP,P^{-1}DP are all upper triangular; denote them A',B',C',D' respectively. Let E'=\begin{bmatrix}A'&B'\\C'&D'\end{bmatrix}; we prove \det E'=\det(A'D'-B'C') by induction on n.
If n=1 the conclusion is obvious. Now suppose n\geq 2 and that the identity holds for upper-triangular blocks of size n-1 or less, and write

A'=\begin{bmatrix}A'_{11}&A''\\0&A_{n-1}\end{bmatrix},A''=[A'_{12},\cdots,A'_{1n}],A_{n-1}=\begin{bmatrix}A_{22}'&\cdots&A_{2n}'\\&\ddots&\vdots\\&&A'_{nn}\end{bmatrix}

and B',C',D' similarly, from block matrix multiplication we have

\displaystyle{\begin{aligned}A'D'&=\begin{bmatrix}A'_{11}&A''\\0&A_{n-1}\end{bmatrix}\begin{bmatrix}D'_{11}&D''\\0&D_{n-1}\end{bmatrix}=\begin{bmatrix}A'_{11}D'_{11}&A'_{11}D''+A''D_{n-1}\\0&A_{n-1}D_{n-1}\end{bmatrix}\\B'C'&=\begin{bmatrix}B'_{11}C'_{11}&B'_{11}C''+B''C_{n-1}\\0&B_{n-1}C_{n-1}\end{bmatrix}\end{aligned}}

thus \det(A'D'-B'C')=(A'_{11}D'_{11}-B'_{11}C'_{11})\det(A_{n-1}D_{n-1}-B_{n-1}C_{n-1}), since A'D'-B'C' has the same block-triangular pattern. Also, expanding \det E' along its first column (whose only nonzero entries are A'_{11} and C'_{11}, the blocks being upper triangular), then expanding each resulting minor along the column inherited from column n+1 (leaving only D'_{11}, respectively B'_{11}), and finally applying the induction hypothesis to the remaining determinant, which has the same form with the first row and column of each block deleted:

\displaystyle{\begin{aligned}\det E'&=\begin{vmatrix}A_{11}'&\cdots&A_{1n}'&B'_{11}&\cdots&B'_{1n}\\&\ddots&\vdots&&\ddots&\vdots\\&&A'_{nn}&&&B'_{nn}\\C_{11}'&\cdots&C_{1n}'&D'_{11}&\cdots&D'_{1n}\\&\ddots&\vdots&&\ddots&\vdots\\&&C_{nn}'&&&D'_{nn}\end{vmatrix}\\&=(A'_{11}(-1)^{n+n}D'_{11}+(-1)^{n+2}C'_{11}(-1)^{1+n}B'_{11})\begin{vmatrix}A_{22}'&\cdots&A_{2n}'&B'_{22}&\cdots&B'_{2n}\\&\ddots&\vdots&&\ddots&\vdots\\&&A'_{nn}&&&B'_{nn}\\C_{22}'&\cdots&C_{2n}'&D'_{22}&\cdots&D'_{2n}\\&\ddots&\vdots&&\ddots&\vdots\\&&C_{nn}'&&&D'_{nn}\end{vmatrix}\\&=(A'_{11}D'_{11}-B'_{11}C'_{11})\det(A_{n-1}D_{n-1}-B_{n-1}C_{n-1})\end{aligned}}

so \det E'=\det(A'D'-B'C').
Now we further have

\displaystyle{\begin{bmatrix}P^{-1}&\\&P^{-1}\end{bmatrix}E\begin{bmatrix}P&\\&P\end{bmatrix}=E'}

so \det E'=\det E, and

\displaystyle{\begin{aligned}\det (A'D'-B'C')&=\det (P^{-1}APP^{-1}DP-P^{-1}BPP^{-1}CP)\\&=\det(P^{-1}(AD-BC)P)=\det(AD-BC)\end{aligned}}

thus the conclusion is true.
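
A numeric spot check (editorial): polynomials in one fixed matrix always commute, which gives convenient test data.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
# A, B, C, D are polynomials in M, hence pairwise commuting.
A = M @ M + 2 * M + np.eye(4)
B = 3 * M - np.eye(4)
C = M @ M @ M
D = M + 5 * np.eye(4)

E = np.block([[A, B], [C, D]])
print(np.isclose(np.linalg.det(E), np.linalg.det(A @ D - B @ C)))  # True
```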

5.Let F be a field, n a positive integer, and let V be the space of n\times n matrices over F. If A is a fixed n\times n matrix over F, let T_A be the linear operator on V defined by T_A(B)=AB-BA. Consider the family of linear operators T_A obtained by letting A vary over all diagonal matrices. Prove that the operators in that family are simultaneously diagonalizable.
Solution: From Exercise 13(b) of Section 6.4 we know that T_A is diagonalizable if A is, so we only have to show that T_A and T_D commute for diagonal matrices A and D, then by Theorem 8 we get the result.
Let A and D be diagonal matrices; then AD=DA, since diagonal matrices multiply entry by entry and multiplication in F is commutative. Now

\displaystyle{\begin{aligned}T_AT_D(B)&=T_A(DB-BD)=A(DB-BD)-(DB-BD)A=ADB-ABD-DBA+BDA\\T_DT_A(B)&=T_D(AB-BA)=D(AB-BA)-(AB-BA)D=DAB-DBA-ABD+BAD\end{aligned}}

As AD=DA means ADB=DAB and BAD=BDA, we have T_AT_D(B)=T_DT_A(B).

Linear Algebra (2ed) Hoffman & Kunze 6.4

Besides invariant subspaces, this section discusses quite a lot of supplementary material. For example, EXAMPLE 8 shows that the range and the null space of any operator commuting with T are invariant under T. For an invariant subspace W, the restriction operator T_W can be defined; by examining the block form of the matrix of T adapted to W, one obtains the result that the characteristic and minimal polynomials of T_W divide the characteristic and minimal polynomials of T respectively. EXAMPLE 10 is essentially a discussion of Theorem 2 on another level.
The next part is the core of the section. First the conductor is defined: the set S(\alpha;W) of all polynomials which, evaluated at T, send a given vector \alpha into W is an ideal, and its monic generator is called the T-conductor of \alpha into W. Every T-conductor divides the minimal polynomial for T, because the minimal polynomial sends \alpha into \{0\}. The properties of conductors yield the Lemma, and then Theorem 5: T has a triangular matrix in some ordered basis if and only if the minimal polynomial for T can be completely factored, i.e. is a product of linear polynomials. A corollary is that over an algebraically closed field every matrix is similar to a triangular matrix. Theorem 6 says that T is diagonalizable if and only if the minimal polynomial for T is a product of distinct linear polynomials, i.e. has no repeated roots. This gives a practical test for diagonalizability: after obtaining the characteristic polynomial, check directly whether the product of its distinct linear factors, each taken to the first power, yields the zero operator when evaluated at T.
Theorem 5 also provides a new proof of the Cayley-Hamilton theorem (over an algebraically closed field): any operator T has an upper-triangular matrix A in some ordered basis \{\alpha_1,\cdots,\alpha_n\}, so the characteristic polynomial of T is f=(x-A_{11})\cdots(x-A_{nn}); since (T-A_{ii}I) sends \alpha_i into \text{span }\{\alpha_1,\cdots,\alpha_{i-1}\}, it follows that f(T)=0.

Exercises

1.Let T be the linear operator on R^2, the matrix of which in the standard ordered basis is A=\begin{bmatrix}1&-1\\2&2\end{bmatrix}.
( a ) Prove that the only subspaces of R^2 invariant under T are R^2 and the zero subspace.
( b ) If U is the linear operator on C^2, the matrix of which in the standard ordered basis is A, show that U has 1-dimensional invariant subspaces.
Solution:
( a ) The characteristic polynomial of T is

\displaystyle{\det (xI-A)=\begin{vmatrix}x-1&1\\-2&x-2\end{vmatrix}=x^2-3x+4}

so T has no characteristic value in R (the discriminant is 9-16=-7<0). Since \dim R^2=2, any invariant subspace other than R^2 and \{0\} would be one-dimensional, spanned by some \alpha\neq 0 with T\alpha=c\alpha; no such vector exists, so R^2 and the zero subspace are the only invariant subspaces.
( b ) The characteristic polynomial of U is

\displaystyle{\det (xI-A)=\begin{vmatrix}x-1&1\\-2&x-2\end{vmatrix}=\left(x-\frac{3}{2}+\frac{\sqrt{7}}{2}i\right)\left(x-\frac{3}{2}-\frac{\sqrt{7}}{2}i\right)}

Thus U has characteristic values \dfrac{3}{2}\pm\dfrac{\sqrt{7}}{2}i. For c=\dfrac{3}{2}-\dfrac{\sqrt{7}}{2}i, solving (A-cI)X=0 gives the characteristic vector (2,-1+\sqrt{7}i). The subspace of C^2 spanned by this vector is 1-dimensional and invariant under U.
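
A quick check of this characteristic vector (an editorial numpy sketch):

```python
import numpy as np

A = np.array([[1, -1],
              [2,  2]], dtype=complex)
c = (3 - np.sqrt(7) * 1j) / 2
v = np.array([2, -1 + np.sqrt(7) * 1j])

print(np.allclose(A @ v, c * v))  # True: v spans a U-invariant subspace
```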

2.Let W be an invariant subspace for T. Prove that the minimal polynomial for the restriction operator T_W divides the minimal polynomial for T, without referring to matrices.
Solution: Let p be the minimal polynomial for T, so p(T)=0. Since W is invariant under T, we have T^k\alpha=T_W^k\alpha\in W for every \alpha\in W and every k\geq 0, hence p(T_W)\alpha=p(T)\alpha=0. Thus p(T_W)=0, which means the minimal polynomial for T_W divides p.

3.Let c be a characteristic value of T and let W be the space of characteristic vectors associated with the characteristic value c. What is the restriction operator T_W?
Solution: It is the operator that multiplies every vector in W by the scalar c, that is, T_W=cI, since T\alpha=c\alpha for every \alpha\in W.

4.Let

\displaystyle{A=\begin{bmatrix}0&1&0\\2&-2&2\\2&-3&2\end{bmatrix}.}

Is A similar over the field of real numbers to a triangular matrix? If so, find such a triangular matrix.
Solution: We shall compute the minimal polynomial for A. First the characteristic polynomial for A is

\displaystyle{\begin{aligned}\det (xI-A)&=\begin{vmatrix}x&-1&0\\-2&x+2&-2\\-2&3&x-2\end{vmatrix}=x\begin{vmatrix}x+2&-2\\3&x-2\end{vmatrix}+\begin{vmatrix}-2&-2\\-2&x-2\end{vmatrix}\\&=x(x^2-4+6)-2x=x^3\end{aligned}}

Since the characteristic polynomial x^3 is a product of linear factors over R, so is the minimal polynomial (which divides it), and by Theorem 5 A is similar over R to a triangular matrix; in fact A^2\neq 0, so the minimal polynomial is x^3 itself. The solution space of AX=0 is spanned by \alpha_1=(1,0,-1). Choosing (1,1,1), we see that \alpha_2=A(1,1,1)^T=(1,2,1) satisfies A\alpha_2=2\alpha_1, and \alpha_3=(1,1,1) satisfies A\alpha_3=\alpha_2. Thus we have

\displaystyle{A[\alpha_1,\alpha_2,\alpha_3]=[0,2\alpha_1,\alpha_2]=[\alpha_1,\alpha_2,\alpha_3]\begin{bmatrix}0&2&0\\0&0&1\\0&0&0\end{bmatrix}}

so with P=[\alpha_1,\alpha_2,\alpha_3] we get P^{-1}AP=\begin{bmatrix}0&2&0\\0&0&1\\0&0&0\end{bmatrix}, the required triangular matrix.
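
Numerically (an editorial check):

```python
import numpy as np

A = np.array([[0,  1, 0],
              [2, -2, 2],
              [2, -3, 2]])
P = np.array([[ 1, 1, 1],    # columns alpha1, alpha2, alpha3
              [ 0, 2, 1],
              [-1, 1, 1]])

print(np.linalg.inv(P) @ A @ P)  # [[0, 2, 0], [0, 0, 1], [0, 0, 0]]
```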

5.Every matrix A such that A^2=A is similar to a diagonal matrix.
Solution: If A=0 or A=I then A is already diagonal. Otherwise A\neq 0 and A\neq I; from A(A-I)=0 the minimal polynomial of A divides x(x-1), and it is neither x (which would force A=0) nor x-1 (which would force A=I), so it equals x(x-1), a product of distinct linear factors, and the conclusion follows from Theorem 6.

6.Let T be a diagonalizable linear operator on the n-dimensional vector space V, and let W be a subspace which is invariant under T. Prove that the restriction operator T_W is diagonalizable.
Solution: If T is diagonalizable, then the minimal polynomial for T has the form p=(x-c_1)\cdots(x-c_k) with c_1,\dots,c_k distinct. Since the minimal polynomial for T_W divides the minimal polynomial for T (Exercise 2), it must have the form

\displaystyle{(x-c_{j_1})\cdots(x-c_{j_r}),\quad j_1,\cdots,j_r\text{ distinct indices in }\{1,\cdots,k\}}

and the conclusion follows from Theorem 6.

7.Let T be a linear operator on a finite-dimensional vector space over the field of complex numbers. Prove that T is diagonalizable if and only if T is annihilated by some polynomial over C which has distinct roots.
Solution: If T is diagonalizable, then the minimal polynomial for T is a product of distinct linear factors, i.e. a polynomial with distinct roots which annihilates T. Conversely, if T is annihilated by some polynomial p over C which has distinct roots, then the minimal polynomial for T divides p, so it is also a product of distinct linear factors, and the conclusion follows from Theorem 6.

8.Let T be a linear operator on V. If every subspace of V is invariant under T, then T is a scalar multiple of the identity operator.
Solution: Let \{\alpha_1,\cdots,\alpha_n\} be a basis for V; the space spanned by \alpha_i is a subspace of V, thus invariant under T, which means T\alpha_i=k_i\alpha_i for i=1,\cdots,n. Now assume k_i\neq k_j for some i,j, and consider the subspace W spanned by \alpha_i+\alpha_j: if T(\alpha_i+\alpha_j)=k_i\alpha_i+k_j\alpha_j were equal to c(\alpha_i+\alpha_j) for some scalar c, then by linear independence c=k_i=k_j, contradicting k_i\neq k_j; so W is not invariant under T, a contradiction. Hence k_1=\cdots=k_n=k and T=kI.

9.Let T be the indefinite integral operator (Tf)(x)=\int_0^xf(t)dt on the space of continuous functions on the interval [0,1]. Is the space of polynomial functions invariant under T? The space of differentiable functions? The space of functions which vanish at x=\frac{1}{2}?
Solution: The space of polynomial functions is invariant under T, since if p=\sum_{i=0}^nc_ix^i, then T(p)=\sum_{i=1}^{n+1}\dfrac{c_{i-1}}{i}x^i is still a polynomial function.
The space of differentiable functions is invariant under T, since if f is differentiable, then f is continuous, thus T(f) is differentiable and [(Tf)(x)]'=f(x) by the fundamental theorem of calculus.
The space of functions which vanish at x=\frac{1}{2} is not invariant under T. Consider f(x)=x-\frac{1}{2}, we have (Tf)(x)=\frac{1}{2}(x^2-x) and (Tf)(\frac{1}{2})=-\frac{1}{8}\neq 0.

10.Let A be a 3\times 3 matrix with real entries. Prove that, if A is not similar over R to a triangular matrix, then A is similar over C to a diagonal matrix.
Solution: By Theorem 5, if A is not similar over R to a triangular matrix, then the minimal polynomial of A is not a product of linear polynomials over R, so it has an irreducible quadratic factor x^2+ax+b; since it divides the characteristic polynomial, which has degree 3, the minimal polynomial of A must be of the form

\displaystyle{x^2+ax+b\text{ or }(x^2+ax+b)(x-c),\quad a^2<4b}

Over C we have x^2+ax+b=(x-\lambda)(x-\bar\lambda) with \lambda\neq\bar\lambda (because a^2<4b), and the real number c is different from both; so over C the minimal polynomial of A is a product of distinct linear factors, which by Theorem 6 means A is similar over C to a diagonal matrix.

11.True or false? If the triangular matrix A is similar to a diagonal matrix, then A is already diagonal.
Solution: False. Let A=\begin{bmatrix}1&1\\0&2\end{bmatrix} and P=\begin{bmatrix}1&1\\0&1\end{bmatrix}; then P^{-1}=\begin{bmatrix}1&-1\\0&1\end{bmatrix} and P^{-1}AP=\begin{bmatrix}1&0\\0&2\end{bmatrix}, so the triangular matrix A is similar to a diagonal matrix but is not itself diagonal.

12.Let T be a linear operator on a finite-dimensional vector space over an algebraically closed field F. Let f be a polynomial over F. Prove that c is a characteristic value of f(T) if and only if c=f(t), where t is a characteristic value of T.
Solution: Since F is algebraically closed, the minimal polynomial for T is a product of linear factors, so by Theorem 5 there is an ordered basis \mathfrak B of the vector space under which T has an upper triangular matrix, and then the matrix of f(T) is upper triangular as well:

\displaystyle{[T]_{\mathfrak B}=\begin{bmatrix}a_1&\cdots&*\\&\ddots&\vdots\\&&a_n\end{bmatrix}\implies [f(T)]_{\mathfrak B}=\begin{bmatrix}f(a_1)&\cdots&*\\&\ddots&\vdots\\&&f(a_n)\end{bmatrix}}

thus we have \det (xI-T)=\prod_{i=1}^n(x-a_i) and \det (xI-f(T))=\prod_{i=1}^n(x-f(a_i)). Hence c is a characteristic value of f(T) if and only if c=f(a_i) for some i, and the a_i are exactly the characteristic values of T, since \det(a_iI-T)=0.

13.Let V be the space of n\times n matrices over F. Let A be a fixed n\times n matrix over F. Let T and U be the linear operators on V defined by T(B)=AB,U(B)=AB-BA.
( a ) True or false? If A is diagonalizable (over F), then T is diagonalizable.
( b ) True or false? If A is diagonalizable, then U is diagonalizable.
Solution:
( a ) True. If A is diagonalizable, then the minimal polynomial of A has the form p=(x-c_1)\cdots(x-c_k) with c_1,\dots,c_k distinct; now for any matrix B we have

\displaystyle{\begin{aligned}p(T)(B)&=(T-c_1I)\cdots(T-c_kI)(B)\\&=(A-c_1I)\cdots(A-c_kI)(B)\\&=p(A)(B)=0\end{aligned}}

thus the minimal polynomial of T divides p, which means the minimal polynomial of T is also a product of distinct linear factors, and T is diagonalizable.
( b ) True. If A is diagonalizable, then there is an invertible P such that P^{-1}AP=D=\text{diag }(d_1,\dots,d_n). Let E^{p,q} be the matrix whose only nonzero entry is a 1 in row p, column q; then \{E^{p,q}:p,q=1,\dots,n\} is a basis for V. Since (DE^{p,q}-E^{p,q}D)_{ij}=\sum_{k=1}^nD_{ik}E^{p,q}_{kj}-\sum_{k=1}^nE^{p,q}_{ik}D_{kj}, the only possibly nonzero entry of DE^{p,q}-E^{p,q}D is (DE^{p,q}-E^{p,q}D)_{pq}=d_p-d_q, which means DE^{p,q}-E^{p,q}D=(d_p-d_q)E^{p,q}. Let F^{p,q}=PE^{p,q}P^{-1}; then \{F^{p,q}:p,q=1,\dots,n\} is also a basis for V, since P is invertible. Now we have

\displaystyle{\begin{aligned}U(F^{p,q})&=AF^{p,q}-F^{p,q}A=APE^{p,q}P^{-1}-PE^{p,q}P^{-1}A\\&=(PP^{-1})APE^{p,q}P^{-1}-PE^{p,q}P^{-1}A(PP^{-1})\\&=P(P^{-1}AP)E^{p,q}P^{-1}-PE^{p,q}(P^{-1}AP)P^{-1}\\&=P[DE^{p,q}-E^{p,q}D]P^{-1}\\&=(d_p-d_q)PE^{p,q}P^{-1}=(d_p-d_q)F^{p,q}\end{aligned}}

This means each F^{p,q} is a characteristic vector of U with characteristic value d_p-d_q, so V has a basis consisting of characteristic vectors of U, and U is diagonalizable.
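
A numeric check of part ( b ) (an editorial sketch with an arbitrary diagonalizable A):

```python
import numpy as np

# A = P D P^{-1}; each F^{p,q} = P E^{p,q} P^{-1} should be a
# characteristic vector of U with value d_p - d_q.
P = np.array([[1.0, 1.0], [0.0, 1.0]])
D = np.diag([3.0, -2.0])
A = P @ D @ np.linalg.inv(P)

for p in range(2):
    for q in range(2):
        E = np.zeros((2, 2)); E[p, q] = 1
        F = P @ E @ np.linalg.inv(P)
        U_F = A @ F - F @ A
        assert np.allclose(U_F, (D[p, p] - D[q, q]) * F)
print("U(F^{p,q}) = (d_p - d_q) F^{p,q} for all p, q")
```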