Linear Algebra (2ed) Hoffman & Kunze 5.4

This section covers the determinant of the transpose, determinants of certain block matrices, computing determinants via the classical adjoint, using determinants to decide whether a matrix is invertible and to compute the inverse (Theorem 4), and finally Cramer's rule. At the very end the author earnestly urges the reader not to dwell on computing determinants but on their theoretical meaning and proofs: focus on how a determinant behaves rather than how to compute it. (And then the exercises open with two computations...)

Exercises

1.Use the classical adjoint formula to compute the inverse of each of the following 3\times 3 real matrices.

\displaystyle{\begin{bmatrix}-2&3&2\\6&0&3\\4&1&-1\end{bmatrix},\qquad \begin{bmatrix}\cos \theta&0&-\sin \theta\\0&1&0\\ \sin\theta&0&\cos \theta\end{bmatrix}}

Solution: Let the first matrix be A and the second be B.
We have \det A=72 and

\displaystyle{\text{adj }A=\begin{bmatrix}-3&5&9\\18&-6&18\\6&14&-18\end{bmatrix}\implies A^{-1}=\frac{1}{72}\begin{bmatrix}-3&5&9\\18&-6&18\\6&14&-18\end{bmatrix}}

We have \det B=1 and

\displaystyle{\text{adj }B=\begin{bmatrix}\cos\theta&0&\sin\theta\\0&1&0\\-\sin\theta&0&\cos\theta\end{bmatrix}\implies B^{-1}=\begin{bmatrix}\cos\theta&0&\sin\theta\\0&1&0\\-\sin\theta&0&\cos\theta\end{bmatrix}}
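As a numerical sanity check on the computations above, here is a small Python sketch (not part of the text; the helper names det and adj are ours) that builds the classical adjoint from cofactors and verifies A^{-1}=\frac{1}{72}\text{adj }A:

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    # Leibniz expansion: sum over permutations, with sign from the inversion count
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def adj(M):
    # classical adjoint: transpose of the matrix of cofactors
    n = len(M)
    def minor(r, c):
        return [[M[i][j] for j in range(n) if j != c] for i in range(n) if i != r]
    return [[(-1) ** (r + c) * det(minor(r, c)) for r in range(n)] for c in range(n)]

A = [[-2, 3, 2], [6, 0, 3], [4, 1, -1]]
d = det(A)                                        # 72
A_inv = [[Fraction(x, d) for x in row] for row in adj(A)]
check = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]                       # should be the identity
```

Exact Fraction arithmetic avoids any floating-point round-off in the check.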

2.Use Cramer’s rule to solve each of the following systems of linear equations over the field of rational numbers.
( a ) \begin{aligned}x&+y+z=11\\2x&-6y-z=0\\3x&+4y+2z=0\end{aligned}. ( b ) \begin{aligned}3x&-2y=7\\3y&-2z=6\\3z&-2x=-1\end{aligned}

Solution:
( a ) The coefficient matrix A of the system is

\displaystyle{A=\begin{bmatrix}1&1&1\\2&-6&-1\\3&4&2\end{bmatrix},\qquad \det A=11}

Also we have

\displaystyle{B_1=\begin{bmatrix}11&1&1\\0&-6&-1\\0&4&2\end{bmatrix},B_2=\begin{bmatrix}1&11&1\\2&0&-1\\3&0&2\end{bmatrix},B_3=\begin{bmatrix}1&1&11\\2&-6&0\\3&4&0\end{bmatrix}}

A simple calculation shows \det B_1=-88,\det B_2=-77,\det B_3=11\times 26, thus the solution is (x,y,z)=(-8,-7,26).

( b ) The coefficient matrix A of the system is

\displaystyle{A=\begin{bmatrix}3&-2&0\\0&3&-2\\-2&0&3\end{bmatrix},\qquad \det A=19}

Also we have

\displaystyle{B_1=\begin{bmatrix}7&-2&0\\6&3&-2\\-1&0&3\end{bmatrix},B_2=\begin{bmatrix}3&7&0\\0&6&-2\\-2&-1&3\end{bmatrix},B_3=\begin{bmatrix}3&-2&7\\0&3&6\\-2&0&-1\end{bmatrix}}

A simple calculation shows \det B_1=95,\det B_2=76,\det B_3=57, thus the solution is (x,y,z)=(5,4,3).
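Cramer's rule itself is mechanical enough to script; a hypothetical Python sketch (det and cramer are our own helper names) reproduces both solutions:

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    # Leibniz expansion with sign from the inversion count
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def cramer(A, b):
    # x_j = det(B_j)/det(A), where B_j is A with column j replaced by b
    n = len(A)
    d = det(A)
    return [Fraction(det([[b[i] if k == j else A[i][k] for k in range(n)]
                          for i in range(n)]), d) for j in range(n)]

sol_a = cramer([[1, 1, 1], [2, -6, -1], [3, 4, 2]], [11, 0, 0])   # (-8, -7, 26)
sol_b = cramer([[3, -2, 0], [0, 3, -2], [-2, 0, 3]], [7, 6, -1])  # (5, 4, 3)
```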

3.An n\times n matrix A over a field F is skew-symmetric if A^t=-A. If A is a skew-symmetric n\times n matrix with complex entries and n is odd, prove that \det A=0.

Solution: We have \det A^t=\det A and \det A^t=\det (-A)=(-1)^n\det A; as n is odd, this gives \det A=-\det A, so 2\det A=0 and thus \det A=0.

4.An n\times n matrix A over a field F is called orthogonal if AA^t=I. If A is orthogonal, show that \det A=\pm 1. Give an example of an orthogonal matrix for which \det A=-1.

Solution: We have \det (AA^t)=(\det A)(\det A^t)=(\det A)^2=\det I=1, which gives the result. One example with \det A=-1 is A=\begin{bmatrix}1&0\\0&-1\end{bmatrix}.

5.An n\times n matrix A over the field of complex numbers is said to be unitary if AA^*=I (A^* denotes the conjugate transpose of A). If A is unitary, show that |\det A|=1.

Solution: For any matrix A over the field of complex numbers, if A' is the conjugate of A, then \det A'=\overline{\det A}, since complex conjugation respects addition and multiplication. Thus if A^* is the conjugate transpose of A, we have A^*=(A')^t and \det A^*=\det (A')^t=\det A'=\overline{\det A}. Thus from AA^*=I we have

\displaystyle{1=\det I=(\det A)(\det A^*)=(\det A)\overline{\det A}=|\det A|^2}

thus |\det A|=1.

6.Let T and U be linear operators on the finite dimensional vector space V. Prove
( a ) \det (TU)=(\det T)(\det U);
( b ) T is invertible if and only if \det T\neq 0.

Solution: If A is the matrix of T with some basis \mathfrak B and B is the matrix of U with the same basis, then AB is the matrix of TU with \mathfrak B, thus

\det (TU)=\det (AB)=(\det A)(\det B)=(\det T)(\det U)

which solves (a). To prove (b), first suppose T is invertible; then there exists U such that TU=UT=I, so (\det T)(\det U)=\det I=1, since the identity operator Iv=v has matrix I in any basis, and this forces \det T\neq 0. Conversely, if \det T\neq 0 then \det A\neq 0, so the matrix A is invertible; the operator U whose matrix in \mathfrak B is A^{-1} satisfies TU=UT=I, thus T is invertible.

7.Let A be an n\times n matrix over K, a commutative ring with identity. Suppose A has the block form

\displaystyle{A=\begin{bmatrix}A_1&0&\cdots&0\\0&A_2&\cdots&0\\{\vdots}&{\vdots}&&{\vdots}\\0&0&\cdots&A_k\end{bmatrix}}

where A_j is an r_j\times r_j matrix. Prove

\displaystyle{\det A=(\det A_1)(\det A_2)\cdots(\det A_k)}

Solution: We already know that \det \begin{bmatrix}A&B\\0&C\end{bmatrix}=(\det A)(\det C); applying this repeatedly (induction on k) gives the result.

8.Let V be the vector space of n\times n matrices over the field F. Let B be a fixed element of V and let T_B be the linear operator on V defined by T_B(A)=AB-BA. Show that \det T_B=0.

Solution: Since T_B(I)=IB-BI=0 and I\neq 0, the operator T_B is singular, thus not invertible; by Exercise 6(b), \det T_B=0.

9.Let A be an n\times n matrix over a field, A\neq 0. If r is any positive integer between 1 and n, an r\times r submatrix of A is any r\times r matrix obtained by deleting (n-r) rows and (n-r) columns of A. The determinant rank of A is the largest positive integer r such that some r\times r submatrix of A has a non-zero determinant. Prove that the determinant rank of A is equal to the row rank of A (=column rank A).

Solution: Suppose the row rank of A is r. Write A=(\alpha_1,\cdots,\alpha_n) where the \alpha_i are the row vectors of A; then any basis of the subspace spanned by \{\alpha_1,\cdots,\alpha_n\} has length r. Suppose \alpha_{k_1},\cdots,\alpha_{k_r} form one such basis; being linearly independent, they form an r\times n matrix A'=(\alpha_{k_1},\cdots,\alpha_{k_r}) of rank r. Writing A'=[\beta_1,\cdots,\beta_n] in terms of its column vectors, we can select r linearly independent columns \beta_{m_1},\cdots,\beta_{m_r}. The matrix [\beta_{m_1},\cdots,\beta_{m_r}] is an r\times r submatrix of A, and it is invertible since it has rank r, thus its determinant is not zero.
Now assume A has an s\times s submatrix B with a non-zero determinant and s>r. The s rows of B are linearly independent, so the corresponding s rows of A are linearly independent, which means the row rank of A is at least s, a contradiction. Thus any s\times s submatrix of A with non-zero determinant satisfies s\leq r, and the conclusion follows.

10.Let A be an n\times n matrix over the field F. Prove that there are at most n distinct scalars c in F such that \det (cI-A)=0.

Solution: Consider xI-A as an n\times n matrix with polynomial entries; then \det (xI-A) is a monic polynomial over F of degree n, since among the products (xI-A)(1,\sigma 1)\cdots (xI-A)(n,\sigma n) the only one contributing a term of degree \geq n is the one with \sigma i=i for all i. Since a polynomial of degree n has at most n distinct roots in F, we get the result.

11.Let A and B be n\times n matrices over the field F. Show that if A is invertible there are at most n scalars c in F for which the matrix cA+B is not invertible.

Solution: If cA+B is not invertible, then \det (cA+B)=0. Since A is invertible, we have \det A\neq 0 and

\displaystyle{\det(cA+B)=\det(cA+AA^{-1}B)=(\det A)[\det(cI+A^{-1}B)]}

so \det (cA+B)=0 if and only if \det(cI+A^{-1}B)=\det(cI-(-A^{-1}B))=0. Since -A^{-1}B is an n\times n matrix over the field F, Exercise 10 shows there are at most n such scalars c.

12.If V is the vector space of n\times n matrices over F and B is a fixed n\times n matrix over F, let L_B and R_B be the linear operators on V defined by L_B(A)=BA and R_B(A)=AB. Show that \det L_B=\det R_B=(\det B)^n.

Solution: Let E^{p,q} be the n\times n matrix whose only non-zero entry is a 1 in row p, column q; then \{E^{p,q}:p,q=1,\dots,n\} is a basis for V. Since

L_B(E^{p,q})=BE^{p,q} is the matrix whose qth column is (B_{1p},\dots,B_{np})^t, the pth column of B, with all other columns zero, while R_B(E^{p,q})=E^{p,q}B is the matrix whose pth row is (B_{q1},\dots,B_{qn}), the qth row of B, with all other rows zero,

the matrix of L_B under the basis \{E^{11},\dots,E^{n1},\dots,E^{1n},\dots,E^{nn}\} is

\displaystyle{\begin{bmatrix}B_{11}&\cdots&B_{1n}&&&&\\&\cdots&&&&&\\B_{n1}&\cdots&B_{nn}&&&&\\&&&\ddots&&&&\\&&&&B_{11}&\cdots&B_{1n}\\&&&&&\cdots&\\&&&&B_{n1}&\cdots&B_{nn}\end{bmatrix}}

The matrix of R_B under the basis \{E^{11},\dots,E^{1n},\dots,E^{n1},\dots,E^{nn}\} is

\displaystyle{\begin{bmatrix}B_{11}&\cdots&B_{n1}&&&&\\&\cdots&&&&&\\B_{1n}&\cdots&B_{nn}&&&&\\&&&\ddots&&&&\\&&&&B_{11}&\cdots&B_{n1}\\&&&&&\cdots&\\&&&&B_{1n}&\cdots&B_{nn}\end{bmatrix}}

and the conclusion follows from Exercise 7 and \det B^t=\det B.
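Numerically, one can assemble the n^2\times n^2 matrices of L_B and R_B in the basis \{E^{p,q}\} and confirm \det L_B=\det R_B=(\det B)^n; a Python sketch with an arbitrary 2\times 2 choice of B (op_matrix, matmul, det are our own helpers):

```python
from itertools import permutations

def det(M):
    # Leibniz expansion with sign from the inversion count
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def op_matrix(T, n):
    # coordinates of T(E^{pq}) in the ordered basis {E^{pq}}, ordered by column index q, then p
    idx = lambda p, q: q * n + p
    M = [[0] * (n * n) for _ in range(n * n)]
    for p in range(n):
        for q in range(n):
            E = [[1 if (i, j) == (p, q) else 0 for j in range(n)] for i in range(n)]
            TE = T(E)
            for i in range(n):
                for j in range(n):
                    M[idx(i, j)][idx(p, q)] = TE[i][j]
    return M

n = 2
B = [[1, 2], [3, 4]]                      # det B = -2
L = op_matrix(lambda A: matmul(B, A), n)  # matrix of L_B
R = op_matrix(lambda A: matmul(A, B), n)  # matrix of R_B
```

Since the determinant of an operator does not depend on which ordered basis is used, det(R) still comes out to (\det B)^n even though the ordering chosen here block-diagonalizes only the matrix of L_B.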

13.Let V be the vector space of all n\times n matrices over the field of complex numbers, and let B be a fixed n\times n matrix over C. Define a linear operator M_B on V by M_B(A)=BAB^*, where B^*=\overline{B^t}. Show that \det M_B=|\det B|^{2n}. Now let H be the set of all Hermitian matrices in V, A being Hermitian if A=A^*. Then H is a vector space over the field of *real* numbers. Show that the function T_B defined by T_B(A)=BAB^* is a linear operator on the real vector space H, and then show that \det T_B=|\det B|^{2n}.

Solution: M_B=L_BR_{B^*}, thus by Exercise 6(a) we have \det M_B=(\det L_B)(\det R_{B^*}), and from Exercise 12 we have \det L_B=(\det B)^n and \det R_{B^*}=(\det B^*)^n, thus \det M_B=[(\det B)(\det B^*)]^n. Notice that \det B^*=\det \overline {B^t}=\overline{\det B^t}=\overline{\det B}, thus \det M_B=[(\det B)(\overline{\det B})]^n=(|\det B|^2)^n=|\det B|^{2n}. Now for T_B, if c\in R, then

\displaystyle{T_B(cA+D)=B(cA+D)B^*=cBAB^*+BDB^*=cT_B(A)+T_B(D)}

If we let \{E^{p,q}:p,q=1,\dots,n\} be the set of n\times n matrices defined by:
For i=1,\dots,n, E^{i,i}_{ii}=1 and E^{i,i}_{jk}=0 if one of j,k\neq i;
For i< j, E^{i,j}_{ij}=1=E^{i,j}_{ji} and E^{i,j}_{kl}=0 otherwise; for i>j, E^{i,j}_{ij}=i,E^{i,j}_{ji}=-i (here i denotes the imaginary unit) and E^{i,j}_{kl}=0 otherwise.
Then each E^{i,j} is Hermitian, and the set \{E^{p,q}:p,q=1,\dots,n\} is both a basis for V over C and a basis for H over R. Since M_B maps H into H and agrees with T_B on H, the coordinates of M_B(E^{p,q}) in this basis are real, so the matrix of M_B in this basis equals the matrix of T_B in this basis, thus \det T_B=\det M_B=|\det B|^{2n}.

14.Let A,B,C,D be commuting n\times n matrices over the field F. Show that the determinant of the 2n\times 2n matrix \begin{bmatrix}A&B\\C&D\end{bmatrix} is \det(AD-BC).

Solution: Let K=\det \begin{bmatrix}A&B\\C&D\end{bmatrix}, then (-1)^nK=\det \begin{bmatrix}C&D\\A&B\end{bmatrix}=\det \begin{bmatrix}C&A\\D&B\end{bmatrix}, which means (-1)^{2n}K=\det \begin{bmatrix}D&B\\C&A\end{bmatrix}=K, and then (-1)^nK=\det \begin{bmatrix}D&B\\-C&-A\end{bmatrix}, thus using the fact that A,B,C,D are commuting, we have

\displaystyle{\begin{aligned}K(-1)^nK&=\det \begin{bmatrix}A&B\\C&D\end{bmatrix}\det \begin{bmatrix}D&B\\-C&-A\end{bmatrix}=\det \left(\begin{bmatrix}A&B\\C&D\end{bmatrix}\begin{bmatrix}D&B\\-C&-A\end{bmatrix}\right)\\&=\det\begin{bmatrix}AD-BC&AB-BA\\CD-DC&CB-DA\end{bmatrix}=\det \begin{bmatrix}AD-BC&0\\0&BC-AD\end{bmatrix}\\&=(-1)^n\det \begin{bmatrix}AD-BC&0\\0&AD-BC\end{bmatrix}\end{aligned}}

thus K^2=[\det (AD-BC)]^2, which means K=\pm \det (AD-BC). Now Let A=D=I_n,B=C=0, we see that \begin{bmatrix}A&B\\C&D\end{bmatrix}=I_{2n}, so K=1 and AD-BC=I_n, thus K=\det (AD-BC).
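The identity can be spot-checked numerically: any blocks that are polynomials in a single fixed matrix commute with each other, so they are a valid test case. A Python sketch with arbitrary sample values (all helper names ours):

```python
from itertools import permutations

def det(M):
    # Leibniz expansion with sign from the inversion count
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matsub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def block(A, B, C, D):
    # assemble the 2n x 2n matrix [[A, B], [C, D]]
    return [ra + rb for ra, rb in zip(A, B)] + [rc + rd for rc, rd in zip(C, D)]

M = [[1, 2], [3, 4]]
I = [[1, 0], [0, 1]]
A, B = I, M
C, D = matmul(M, M), matsub(matmul(M, M), I)   # all polynomials in M, so A,B,C,D commute
lhs = det(block(A, B, C, D))
rhs = det(matsub(matmul(A, D), matmul(B, C)))  # det(AD - BC)
```

With these sample values both sides evaluate to 103.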

Linear Algebra (2ed) Hoffman & Kunze 5.3

This section introduces permutations, i.e. rearrangements of 1,\dots,n, and their relation to determinants, and then proves the uniqueness of the determinant. This treatment is better than building permutations into the definition of the determinant, because many properties of permutations are more conveniently proved from the already-established facts that the determinant is alternating and takes the value 1 at the identity matrix. Theorem 2 proves the uniqueness of the determinant by giving an explicit formula in terms of permutations, and Theorem 3 shows that the determinant is multiplicative, from which many related properties follow.

Exercises

1.If K is a commutative ring with identity and A is the matrix over K given by

\displaystyle{A=\begin{bmatrix}0&a&b\\-a&0&c\\-b&-c&0\end{bmatrix}}

show that \det A=0.

Solution: In the expansion of \det A, a term is non-zero only if \sigma avoids the zero diagonal entries. If \sigma1=2, then \sigma2 can only be 3 and \sigma3=1; if \sigma1=3, then \sigma2 can only be 1 and \sigma3=2. Since \text{sgn}\{231\}=1 and \text{sgn}\{312\}=1, we have \det A=ac(-b)+b(-a)(-c)=-abc+abc=0.

2.Prove that the determinant of the Vandermonde matrix

\displaystyle{\begin{bmatrix}1&a&a^2\\1&b&b^2\\1&c&c^2\end{bmatrix}}

is (b-a)(c-a)(c-b).

Solution: Use the definition we have

\displaystyle{\begin{aligned}&\quad \det \begin{bmatrix}1&a&a^2\\1&b&b^2\\1&c&c^2\end{bmatrix}\\&=\text{sgn}\{123\}1bc^2+\text{sgn}\{132\}1b^2c+\text{sgn}\{213\}a1c^2+\text{sgn}\{231\}ab^21\\&\quad+\text{sgn}\{312\}a^21c+\text{sgn}\{321\}a^2b1\\&=bc^2-b^2c-ac^2+ab^2+a^2c-a^2b=bc(c-b)+a^2(c-b)-a(b+c)(c-b)\\&=[bc+a^2-ab-ac](c-b)=(b-a)(c-a)(c-b)\end{aligned}}
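A quick check of the factorization at sample values (a short Python sketch; the numbers are arbitrary and det is our own helper):

```python
from itertools import permutations

def det(M):
    # Leibniz expansion with sign from the inversion count
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

a, b, c = 2, 5, 7
V = [[1, a, a * a], [1, b, b * b], [1, c, c * c]]
lhs = det(V)                         # determinant of the Vandermonde matrix
rhs = (b - a) * (c - a) * (c - b)    # 3 * 5 * 2 = 30
```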

3.List explicitly the six permutations of degree 3, state which are odd and which are even, and use this to give the complete formula (5-15) for the determinant of a 3\times 3 matrix.

Solution: the six permutations of degree 3 are 123,132,213,231,312,321, and we have \text{sgn}\{123\}=\text{sgn}\{231\}=\text{sgn}\{312\}=1, \text{sgn}\{132\}=\text{sgn}\{213\}=\text{sgn}\{321\}=-1, thus if A is a 3\times 3 matrix, we have

\displaystyle{\begin{aligned}\det A&=A(1,1)A(2,2)A(3,3)-A(1,1)A(2,3)A(3,2)\\&\quad-A(1,2)A(2,1)A(3,3)+A(1,2)A(2,3)A(3,1)\\&\quad+A(1,3)A(2,1)A(3,2)-A(1,3)A(2,2)A(3,1)\end{aligned}}
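The parity classification and the six-term expansion can be cross-checked against the permutation-sum definition; a Python sketch (det3 is our name for the explicit formula, and the sample matrix is arbitrary):

```python
from itertools import permutations

def sgn(p):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(M):
    # the general permutation-sum definition of the determinant
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        term = sgn(p)
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def det3(A):
    # the explicit six-term expansion for 3x3 matrices
    return (A[0][0] * A[1][1] * A[2][2] - A[0][0] * A[1][2] * A[2][1]
            - A[0][1] * A[1][0] * A[2][2] + A[0][1] * A[1][2] * A[2][0]
            + A[0][2] * A[1][0] * A[2][1] - A[0][2] * A[1][1] * A[2][0])

# the three even and three odd permutations of degree 3 (0-based tuples)
evens = [p for p in permutations(range(3)) if sgn(p) == 1]
odds = [p for p in permutations(range(3)) if sgn(p) == -1]

A = [[2, 5, 1], [0, 3, -2], [4, 1, 7]]
```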

4.Let \sigma and \tau be the permutations of degree 4 defined by \sigma1=2,\sigma2=3,\sigma3=4,\sigma4=1,\tau1=3,\tau2=1,\tau3=2,\tau4=4.
( a ) Is \sigma odd or even? Is \tau odd or even?
( b ) Find \sigma\tau and \tau\sigma.

Solution:
( a ) Getting from 1234 to 2341 requires 3 transpositions, thus \sigma is odd; getting from 1234 to 3124 requires 2 transpositions, thus \tau is even.
( b ) We have \sigma\tau1=4,\sigma\tau2=2,\sigma\tau3=3,\sigma\tau4=1 and \tau\sigma1=1,\tau\sigma2=2,\tau\sigma3=4,\tau\sigma4=3.

5.If A is an invertible n\times n matrix over a field, show that \det A\neq 0.

Solution: We have AA^{-1}=I, and thus use Theorem 3 we have

\displaystyle{1=\det I=\det (AA^{-1})=(\det A)(\det A^{-1})\implies \det A=\frac{1}{[\det A^{-1}]}\neq 0}

6.Let A be a 2\times 2 matrix over a field. Prove that \det (I+A)=1+\det A if and only if \text{trace}(A)=0.

Solution: Let A=\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}, then

\det (I+A)=(1+A_{11})(1+A_{22})-A_{12}A_{21}=1+\det A+\text{trace}(A)

and the conclusion follows.

7.An n\times n matrix is called triangular if A_{ij}=0 whenever i>j or if A_{ij}=0 whenever i<j. Prove that the determinant of a triangular matrix is the product A_{11}A_{22}\dots A_{nn} of its diagonal entries.

Solution: If A is triangular, then the only possible permutation \sigma of 1,\dots,n which makes A(1,\sigma1)\cdots A(n,\sigma n) not zero is \sigma i=i for 1,\dots,n, and the conclusion follows.

8.Let A be a 3\times 3 matrix over the field of complex numbers. We form the matrix xI-A with polynomial entries, the i,j entry of this matrix being the polynomial \delta_{ij}x-A_{ij}. If f=\det(xI-A), show that f is a monic polynomial of degree 3. If we write

\displaystyle{f=(x-c_1)(x-c_2)(x-c_3)}

with complex entries c_1,c_2,c_3, prove that c_1+c_2+c_3=\text{trace}(A) and c_1c_2c_3=\det A.

Solution: In the expansion of \det(xI-A), the permutation \{123\} contributes the product (x-A_{11})(x-A_{22})(x-A_{33}), and every other permutation contributes a term of degree at most 1, so f is a monic polynomial of degree 3.
As f=(x-c_1)(x-c_2)(x-c_3)=x^3-(c_1+c_2+c_3)x^2+(*)x-c_1c_2c_3, in which (*) represents some expression in c_1,c_2,c_3. In the expansion of \det (xI-A), the x^2 term can only come from (x-A_{11})(x-A_{22})(x-A_{33}), in which the coefficient of x^2 is -(A_{11}+A_{22}+A_{33}); a comparison shows c_1+c_2+c_3=\text{trace}(A). If we put x=0, then f(0)=\det(-A); on the other hand, f(0)=-c_1c_2c_3, thus \det(-A)=(-1)^3\det A=-c_1c_2c_3, which gives \det A=c_1c_2c_3.

9.Let n be a positive integer and F a field. If \sigma is a permutation of degree n, prove that the function

\displaystyle{T(x_1,\dots,x_n)=(x_{\sigma1},\dots,x_{\sigma n})}

is an invertible linear operator on F^n.

Solution: Let \sigma^{-1} be the inverse of \sigma, so that \sigma^{-1}\sigma(i)=\sigma\sigma^{-1}(i)=i for i=1,\dots,n. Define U on F^n by

\displaystyle{U(x_1,\dots,x_n)=(x_{\sigma^{-1}1},\dots,x_{\sigma^{-1} n})}

then we have

U(T(x_1,\dots,x_n))=U(x_{\sigma1},\dots,x_{\sigma n})=(x_1,\dots,x_n)\\T(U((x_1,\dots,x_n))=T(x_{\sigma^{-1}1},\dots,x_{\sigma^{-1} n})=(x_1,\dots,x_n)

thus UT=TU=I and T is invertible.

10.Let F be a field, n a positive integer, and S the set of n\times n matrices over F. Let V be the vector space of all functions from S into F. Let W be the set of alternating n-linear functions on S. Prove that W is a subspace of V. What is the dimension of W?

Solution: Let D_1,D_2\in W and consider cD_1+D_2; since both D_1,D_2 are alternating and n-linear, it is easy to verify that cD_1+D_2 is alternating and n-linear. Thus W is a subspace of V. From (5-14) in the text we know that for any alternating n-linear function D we have D(A)=(\det A)D(I) for all A, so every D\in W is the scalar multiple D(I)\cdot\det; since \det itself is a non-zero element of W, \dim W=1.

11.Let T be a linear operator on F^n. Define

\displaystyle{D_T(\alpha_1,\dots,\alpha_n)=\det(T\alpha_1,\dots,T\alpha_n)}

( a ) Show that D_T is an alternating n-linear function.
( b ) If c=\det(T{\epsilon}_1,\dots,T{\epsilon}_n), show that for any n vectors \alpha_1,\dots,\alpha_n we have

\displaystyle{\det (T\alpha_1,\dots,T\alpha_n)=c\det(\alpha_1,\dots,\alpha_n)}

( c ) If \mathfrak B is any ordered basis for F^n and A is the matrix of T in the ordered basis \mathfrak B, show that \det A=c.
( d ) What do you think is a reasonable name for the scalar c?

Solution:
( a ) Since T is linear we have

\displaystyle{\begin{aligned}&\quad D_T(\alpha_1,\dots,c\alpha_i+\alpha_j,\dots,\alpha_n)\\&=\det(T\alpha_1,\dots,T(c\alpha_i+\alpha_j),\dots,T\alpha_n)\\&=\det(T\alpha_1,\dots,cT\alpha_i+T\alpha_j,\dots,T\alpha_n)\\&=c\det(T\alpha_1,\dots,T\alpha_i,\dots,T\alpha_n)+\det(T\alpha_1,\dots,T\alpha_j,\dots,T\alpha_n)\\&=cD_T(\alpha_1,\dots,\alpha_i,\dots,\alpha_n)+D_T(\alpha_1,\dots,\alpha_j,\dots,\alpha_n)\end{aligned}}

Thus D_T is n-linear. If \alpha_i=\alpha_{i+1}, then T\alpha_i=T\alpha_{i+1}, thus D_T(\alpha_1,\dots,\alpha_n)=0, and D_T is alternating by Lemma in 5.2.

( b ) From (a) and formula (5-14) we can have for any (\alpha_1,\dots,\alpha_n):

\displaystyle{\begin{aligned}\det (T\alpha_1,\dots,T\alpha_n)&=D_T(\alpha_1,\dots,\alpha_n)\\&=\det (\alpha_1,\dots,\alpha_n)D_T(\epsilon_1,\dots,\epsilon_n)\\&=c\det (\alpha_1,\dots,\alpha_n)\end{aligned}}

( c ) We suppose \mathfrak B=\{\alpha_1,\dots,\alpha_n\}, then T\alpha_j=\sum_{i=1}^nA_{ij}\alpha_i, so

\displaystyle{\begin{bmatrix}T\alpha_1\\ {\vdots}\\T{\alpha_n}\end{bmatrix}=\begin{bmatrix}\sum_{i=1}^nA_{i1}\alpha_i\\ {\vdots}\\ \sum_{i=1}^nA_{in}\alpha_i\end{bmatrix}=\begin{bmatrix}A_{11}&\cdots&A_{n1}\\&\cdots&\\A_{1n}&\cdots&A_{nn}\end{bmatrix}\begin{bmatrix}\alpha_1\\ {\vdots}\\{\alpha_n}\end{bmatrix}=A^t\begin{bmatrix}\alpha_1\\ {\vdots}\\{\alpha_n}\end{bmatrix}}

thus \det (T\alpha_1,\dots,T\alpha_n)=(\det A^t) \det (\alpha_1,\dots,\alpha_n). As \det A^t=\det A, we have c=\det A.
( d ) c may be called the determinant of T.

12.If \sigma is a permutation of degree n and A is an n\times n matrix over the field F with row vectors \alpha_1,\dots,\alpha_n, let \sigma (A) denote the n\times n matrix with row vectors \alpha_{\sigma1},\dots,\alpha_{\sigma n}.
( a ) Prove that \sigma(AB)=\sigma(A)B, and in particular that \sigma(A)=\sigma(I)A.
( b ) If T is the linear operator of Exercise 9, prove that the matrix of T in the standard ordered basis is \sigma(I).
( c ) Is \sigma^{-1}(I) the inverse matrix of \sigma(I)?
( d ) Is it true that \sigma(A) is similar to A?

Solution:
( a ) If we denote B=[b_1,\cdots,b_n] where b_i are column vectors of B, then

AB=\begin{bmatrix}{\alpha_{1}}b_1&\cdots&\alpha_1b_n\\&\cdots&\\{\alpha_n}b_1&\cdots&\alpha_nb_n\end{bmatrix}\\ \sigma(AB)=\begin{bmatrix}{\alpha_{\sigma1}}b_1&\cdots&\alpha_{\sigma1}b_n\\&\cdots&\\{\alpha_{\sigma n}}b_1&\cdots&\alpha_{\sigma n}b_n\end{bmatrix}=\begin{bmatrix}\alpha_{\sigma1}\\ {\vdots}\\{\alpha_{\sigma n}}\end{bmatrix}[b_1,\cdots,b_n]=\sigma(A)B

( b ) We have T(x_1,\dots,x_n)=(x_{\sigma1},\dots,x_{\sigma n}), thus T\epsilon_j=\epsilon_{\sigma^{-1}j}, so the jth column of the matrix of T in the standard ordered basis is \epsilon_{\sigma^{-1}j}. Equivalently, the (i,j) entry of this matrix is 1 exactly when i=\sigma^{-1}j, i.e. j=\sigma i, so the ith row of the matrix is \epsilon_{\sigma i}, which means the matrix is \sigma(I).

( c ) Yes, since if we write \sigma(I)\sigma^{-1}(I)=A, then
\displaystyle{A_{ij}=\sum_{k=1}^n(\epsilon_{\sigma i})_k(\epsilon_{\sigma^{-1} k})_j} To have (\epsilon_{\sigma i})_k=1 we need k=\sigma i, and to have (\epsilon_{\sigma^{-1} k})_j=1 we need j=\sigma^{-1} k, i.e. k=\sigma j; thus A_{ij}=1 if i=j and 0 otherwise, which means A=I.

( d ) No. For the case n=2, let A=I and \sigma 1=2,\sigma 2=1, then \sigma(A)=\begin{bmatrix}0&1\\1&0\end{bmatrix}, assume we always have \sigma(A) be similar to A, then there exists P invertible such that \sigma(A)=P^{-1}AP=P^{-1}IP=I, this is a contradiction.

13.Prove that the sign function on permutations is unique in the following sense. If f is any function which assigns to each permutation of degree n an integer, and if f(\sigma\tau)=f(\sigma)f(\tau), then f is identically 0, or f is identically 1, or f is the sign function.

Solution: If f=0 there is nothing to prove.
Suppose f\neq 0, then there is a permutation \sigma of degree n such that f(\sigma)\neq 0. Let \epsilon be the identity permutation, namely \epsilon i=i for i=1,\dots,n; then \sigma\epsilon=\sigma and thus f(\sigma)=f(\sigma\epsilon)=f(\sigma)f(\epsilon), so f(\epsilon)=1 since f(\sigma) is a non-zero integer. Now for any permutation \sigma of degree n we have f(\sigma)f(\sigma^{-1})=f(\epsilon)=1, which shows f(\sigma) can only be 1 or -1.
Notice that when n=1 the only permutation is \epsilon, and when n=2 the only permutations are \epsilon and the transposition \sigma with \sigma 1=2,\sigma2=1; if f(\sigma)=1 then f is identically 1, and if f(\sigma)=-1 then f is the sign function. Now suppose n\geq 2, and let \tau be the transposition with \tau1=2,\tau2=1 and \tau i=i for i=3,\dots,n. Consider any transposition \sigma, i.e. a permutation which interchanges two elements i\neq j and fixes the rest; if \tau' is any permutation with \tau' i=1,\tau' j=2, we shall have

\displaystyle{\sigma=\tau'^{-1}\tau\tau'}

thus f(\sigma)=f(\tau'^{-1})f(\tau)f(\tau')=f(\tau), so every transposition has the same value f(\tau) under f. Since every permutation \sigma of degree n can be written as a composition of transpositions, say \sigma=\tau_m\cdots \tau_1, where m is always even or odd according as \sigma is even or odd, and f(\tau_i)=f(\tau) for each i, we have f(\sigma)=[f(\tau)]^m. If f(\tau)=1 then f is identically 1, and if f(\tau)=-1 then f is the sign function.
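The two facts driving this proof, that the sign function is multiplicative and that all transpositions get the same value under it, can be verified exhaustively for small degree; a Python sketch for degree 4 (helper names ours):

```python
from itertools import permutations

def sgn(p):
    # the sign function: parity of the inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def compose(s, t):
    # (st)(i) = s(t(i)); permutations are tuples of 0-based images
    return tuple(s[t[i]] for i in range(len(t)))

perms = list(permutations(range(4)))
# sgn is multiplicative on all pairs of degree-4 permutations
multiplicative = all(sgn(compose(s, t)) == sgn(s) * sgn(t) for s in perms for t in perms)
# every transposition (a permutation moving exactly two points) has sign -1
transpositions = [p for p in perms if sum(1 for i in range(4) if p[i] != i) == 2]
```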

Linear Algebra (2ed) Hoffman & Kunze 5.2

This section begins the study of determinants. A determinant is a function on matrices over a commutative ring that is n-linear, alternating, and satisfies D(I)=1. The book's treatment defines n-linear and alternating explicitly and, after defining the adjoint matrix, gives in Theorem 1 an explicit formula for computing an n\times n determinant. This section does not prove the uniqueness of the determinant, but it does give concrete formulas for the 2\times 2 and 3\times 3 cases.

Exercises

1.Each of the following expressions defines a function D on the set of 3\times 3 matrices over the field of real numbers. In which of these cases is D a 3-linear function?
( a ) D(A)=A_{11}+A_{22}+A_{33};
( b ) D(A)=(A_{11})^2+3A_{11}A_{22};
( c ) D(A)=A_{11}A_{22}A_{33};
( d ) D(A)=A_{13}A_{22}A_{32}+5A_{12}A_{22}A_{32};
( e ) D(A)=0;
( f ) D(A)=1.

Solution: Cases ( c ), ( d ) and ( e ) are 3-linear. For case ( a ), D is not linear in the rows: as a function of the first row (the other rows held fixed), D(A)=A_{11}+(A_{22}+A_{33}) is affine but not linear; for example, scaling the first row by c scales A_{11} but leaves A_{22}+A_{33} unchanged. For case ( b ), D is not linear in the first row (because of (A_{11})^2) nor in the third row, and for case ( f ), D is not linear in any row.

2.Verify directly that the three functions E_1,E_2,E_3 defined by (5-6),(5-7) and (5-8) are identical.
Solution: A direct calculation shows the result.

3.Let K be a commutative ring with identity. If A is a 2\times 2 matrix over K, the classical adjoint of A is the 2\times 2 matrix \text{adj } A defined by

\displaystyle{\text{adj }A=\begin{bmatrix}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{bmatrix}}


If \det denotes the unique determinant function on 2\times 2 matrices over K, show that
( a ) (\text{adj }A)A=A(\text{adj }A)=(\det A)I;
( b ) \det (\text{adj }A)=\det (A);
( c ) \text{adj }(A^t)=(\text{adj }A)^t. (A^t denotes the transpose of A.)

Solution:
( a ) We have

(\text{adj }A)A=\begin{bmatrix}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{bmatrix}\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}=\begin{bmatrix}A_{11}A_{22}-A_{12}A_{21}&0\\0&A_{11}A_{22}-A_{12}A_{21}\end{bmatrix} \\ A(\text{adj }A)=\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}\begin{bmatrix}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{bmatrix}=\begin{bmatrix}A_{11}A_{22}-A_{12}A_{21}&0\\0&A_{11}A_{22}-A_{12}A_{21}\end{bmatrix}

and the conclusion follows.
( b ) It is easy to see that

\displaystyle{\det (\text{adj }A)=A_{22}A_{11}-(-A_{12})(-A_{21})=A_{11}A_{22}-A_{12}A_{21}=\det (A)}

( c ) A direct verification shows the result.

4.Let A be a 2\times 2 matrix over a field F. Show that A is invertible if and only if \det A\neq 0. When A is invertible, give a formula for A^{-1}.

Solution: If \det A\neq 0, Exercise 3(a) shows that A is invertible, since F is a field. On the other hand, suppose A is invertible and assume \det A=A_{11}A_{22}-A_{12}A_{21}=0. Since A is invertible, A\neq 0, so at least one of A_{11},A_{22},A_{12},A_{21} is not 0, thus one of X=\begin{bmatrix}A_{22}\\-A_{21}\end{bmatrix} or X'=\begin{bmatrix}A_{12}\\-A_{11}\end{bmatrix} is not zero; but one can see that when \det A=0, AX=0 and AX'=0, thus A is not invertible, a contradiction.
When A is invertible, one formula of A^{-1} can be given from Exercise 3(a) to be A^{-1}=(\det A)^{-1}(\text{adj }A).

5.Let A be a 2\times 2 matrix over a field F, and suppose that A^2=0. Show for each scalar c that \det (cI-A)=c^2.

Solution: If A=0 the conclusion clearly holds. Now suppose A\neq 0; we have

\displaystyle{A^2=\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}=\begin{bmatrix}A_{11}^2+A_{12}A_{21}&A_{12}(A_{11}+A_{22})\\A_{21}(A_{11}+A_{22})&A_{12}A_{21}+A_{22}^2\end{bmatrix}}

solve the equation

\displaystyle{\begin{cases}A_{11}^2+A_{12}A_{21}=0\\A_{12}(A_{11}+A_{22})=0\\A_{21}(A_{11}+A_{22})=0\\A_{12}A_{21}+A_{22}^2=0 \end{cases}}

we see that if A_{11}+A_{22}\neq 0, then A_{12}=A_{21}=0, and then A_{11}^2=A_{22}^2=0, a contradiction to A\neq 0, thus we must have A_{11}+A_{22}=0, which means

\displaystyle{\begin{aligned}\det (cI-A)&=\det\left(\begin{bmatrix}c-A_{11}&-A_{12}\\-A_{21}&c-A_{22}\end{bmatrix}\right)\\&=c^2-(A_{11}+A_{22})c+A_{11}A_{22}-A_{12}A_{21}\\&=c^2+A_{11}A_{22}+A_{11}^2\\&=c^2+A_{11}(A_{11}+A_{22})=c^2\end{aligned}}

6.Let K be a subfield of the complex numbers and n a positive integer. Let j_1,\dots,j_n and k_1,\dots,k_n be positive integers not exceeding n. For an n\times n matrix A over K define

\displaystyle{D(A)=A(j_1,k_1)A(j_2,k_2)\cdots A(j_n,k_n)}

Prove that D is n-linear if and only if the integers j_1,\dots,j_n are distinct.

Solution: If the integers j_1,\dots,j_n are distinct, then each row index j equals exactly one j_i, so the product contains exactly one factor from each row and D is linear in each row. Conversely, suppose D is n-linear and assume j_1,\dots,j_n are not distinct; then some 1\leq j\leq n does not appear among j_1,\dots,j_n, so D does not depend on the jth row. Let A be the matrix with every entry equal to 1, so D(A)=1; scaling the jth row of A by a scalar c\neq 1 leaves D(A)=1\neq c, so D is not linear in the jth row, a contradiction.

7.Let K be a commutative ring with identity. Show that the determinant function on 2\times 2 matrices A over K is alternating and 2-linear as a function of the columns of A.

Solution: For A=\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}, D(A)=A_{11}A_{22}-A_{12}A_{21}. If we interchange the two columns, letting A'=\begin{bmatrix}A_{12}&A_{11}\\A_{22}&A_{21}\end{bmatrix}, then D(A')=A_{12}A_{21}-A_{11}A_{22}=-D(A), and D(A)=0 when the two columns are equal. Also we have

D\left(\begin{bmatrix}aA_{11}&A_{12}\\aA_{21}&A_{22}\end{bmatrix}\right)=(aA_{11})A_{22}-A_{12}(aA_{21})=aD(A) \\ D\left(\begin{bmatrix}A_{11}&aA_{12}\\A_{21}&aA_{22}\end{bmatrix}\right)=A_{11}(aA_{22})-A_{12}(aA_{21})=aD(A).

8.Let K be a commutative ring with identity. Define a function D on 3\times 3 matrices over K by the rule

\displaystyle{D(A)=A_{11}\det \begin{bmatrix}A_{22}&A_{23}\\A_{32}&A_{33}\end{bmatrix}-A_{12}\det \begin{bmatrix}A_{21}&A_{23}\\A_{31}&A_{33}\end{bmatrix}+A_{13}\det \begin{bmatrix}A_{21}&A_{22}\\A_{31}&A_{32}\end{bmatrix}}

Show that D is alternating and 3-linear as a function of the columns of A.

Solution: For A=\begin{bmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{bmatrix}, we first prove D is alternating; by the Lemma from the text it is enough to show D(A)=0 when two adjacent columns are equal. First suppose the first two columns are equal, i.e. A_{j1}=A_{j2} for 1\leq j\leq 3; then \det \begin{bmatrix}A_{22}&A_{23}\\A_{32}&A_{33}\end{bmatrix}=\det \begin{bmatrix}A_{21}&A_{23}\\A_{31}&A_{33}\end{bmatrix} and \det \begin{bmatrix}A_{21}&A_{22}\\A_{31}&A_{32}\end{bmatrix}=0, and since A_{11}=A_{12}, D(A)=0. If the last two columns are equal (A_{j2}=A_{j3} for 1\leq j\leq 3), the result can be proved similarly.
The fact that D is 3-linear with columns of A can be shown by a direct calculation and the conclusion from Exercise 7.

9.Let K be a commutative ring with identity and D an alternating n-linear function on n\times n matrices over K. Show that ( a ) D(A)=0, if one of the rows of A is 0. ( b ) D(B)=D(A), if B is obtained from A by adding a scalar multiple of one row of A to another.

Solution:
( a ) Suppose row j of A is all 0. Since the zero row is 0\cdot 0 (the scalar 0 times the zero row), linearity of D in the jth row gives D(A)=0\cdot D(A)=0.
( b ) If A=(\alpha_1,\dots,\alpha_n) and B=(\alpha_1,\dots,\alpha_i+c\alpha_j,\dots,\alpha_n) with j\neq i, then D(B)=D(A)+cD(\alpha_1,\dots,\alpha_j,\dots,\alpha_n)=D(A) since D is alternating.

10.Let F be a field, A a 2\times 3 matrix over F, and (c_1,c_2,c_3) the vector in F^3 defined by

\displaystyle{c_1=\left|\begin{matrix}A_{12}&A_{13}\\A_{22}&A_{23}\end{matrix}\right|, \quad c_2=\left|\begin{matrix}A_{13}&A_{11}\\A_{23}&A_{21}\end{matrix}\right|, \quad c_3=\left|\begin{matrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{matrix}\right|.}

Show that
( a ) \text{rank }A=2 if and only if (c_1,c_2,c_3)\neq 0;
( b ) if A has rank 2, then (c_1,c_2,c_3) is a basis for the solution space of the system of equations AX=0.

Solution:
( a ) If \text{rank }A=2, then two columns of A are linearly independent, thus at least one of c_1,c_2,c_3 is not zero. Conversely, assume \text{rank }A<2, then either \text{rank }A=0, which means A=0, or \text{rank }A=1, which means one column can be a basis for the column space of A, thus other columns would be a scalar multiple of this basis, both cases leads to c_i=0,i=1,2,3.
( b ) By results of Chapter 2, the solution space of AX=0 has dimension 1, since (c_1,c_2,c_3)\neq 0 by (a), it is enough to show that (c_1,c_2,c_3) is a solution for AX=0. A direct verification shows that A_{11}c_1+A_{12}c_2+A_{13}c_3=0 and A_{21}c_1+A_{22}c_2+A_{23}c_3=0.
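The final verification in ( b ) is easy to script; a Python sketch with a hypothetical rank-2 matrix A (det2 and c_vector are our own helpers):

```python
def det2(a, b, c, d):
    # determinant of [[a, b], [c, d]]
    return a * d - b * c

def c_vector(A):
    # (c1, c2, c3) built from the three 2x2 minors of the 2x3 matrix A
    c1 = det2(A[0][1], A[0][2], A[1][1], A[1][2])
    c2 = det2(A[0][2], A[0][0], A[1][2], A[1][0])
    c3 = det2(A[0][0], A[0][1], A[1][0], A[1][1])
    return (c1, c2, c3)

A = [[1, 2, 3], [4, 5, 6]]   # rank 2
c = c_vector(A)              # nonzero, by part (a)
# both rows of A are orthogonal to c, i.e. AX = 0 for X = (c1, c2, c3)
rows_times_c = [sum(A[i][j] * c[j] for j in range(3)) for i in range(2)]
```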

11.Let K be a commutative ring with identity, and let D be an alternating 2-linear function on 2\times 2 matrices over K. Show that D(A)=(\det A)D(I) for all A. Now use this result to show that \det (AB)=(\det A)(\det B) for any 2\times 2 matrices A and B over K.

Solution: Write \epsilon_1=(1,0) and \epsilon_2=(0,1), so that the first row of A is A_{11}\epsilon_1+A_{12}\epsilon_2 and the second row is A_{21}\epsilon_1+A_{22}\epsilon_2. Since D is 2-linear,

\displaystyle{\begin{aligned}D(A)&=A_{11}A_{21}D(\epsilon_1,\epsilon_1)+A_{11}A_{22}D(\epsilon_1,\epsilon_2)\\&\quad+A_{12}A_{21}D(\epsilon_2,\epsilon_1)+A_{12}A_{22}D(\epsilon_2,\epsilon_2)\end{aligned}}

Since D is alternating, D(\epsilon_1,\epsilon_1)=D(\epsilon_2,\epsilon_2)=0 and D(\epsilon_2,\epsilon_1)=-D(\epsilon_1,\epsilon_2)=-D(I), thus

\displaystyle{D(A)=(A_{11}A_{22}-A_{12}A_{21})D(I)=(\det{A})D(I)}
Now fix any 2\times 2 matrix B over K and define d_B(A)=\det(AB). If we write A=\begin{bmatrix}a_1\\a_2\end{bmatrix} in terms of its rows, then AB=\begin{bmatrix}a_1B\\a_2B\end{bmatrix}. If a_1=a_2, then a_1B=a_2B, so AB has two equal rows and d_B(A)=\det(AB)=0; hence d_B is alternating. Since (ca_1+a_1')B=c(a_1B)+a_1'B and \det is linear in each row,

\displaystyle{d_B\left(\begin{bmatrix}ca_1+a_1'\\a_2\end{bmatrix}\right)=\det\left(\begin{bmatrix}c(a_1B)+a_1'B\\a_2B\end{bmatrix}\right)=c\,d_B\left(\begin{bmatrix}a_1\\a_2\end{bmatrix}\right)+d_B\left(\begin{bmatrix}a_1'\\a_2\end{bmatrix}\right)}

and similarly for the second row, so d_B is 2-linear. By the first part, d_B(A)=\det(A)\,d_B(I) for all A, that is, \det(AB)=(\det A)(\det B).
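The identity \det(AB)=(\det A)(\det B) holds over any commutative ring with identity. As a sketch, the following checks it exhaustively for all 2\times 2 matrices over Z/4, a ring with zero divisors, where no cancellation argument is available.

```python
# Exhaustive check of det(AB) = (det A)(det B) for all 2x2 matrices over Z/4.
from itertools import product

n = 4  # work in the ring Z/4

def det(M):
    # M is a flat tuple (a, b, c, d) representing [[a, b], [c, d]]
    return (M[0] * M[3] - M[1] * M[2]) % n

def mul(A, B):
    return ((A[0] * B[0] + A[1] * B[2]) % n, (A[0] * B[1] + A[1] * B[3]) % n,
            (A[2] * B[0] + A[3] * B[2]) % n, (A[2] * B[1] + A[3] * B[3]) % n)

mats = list(product(range(n), repeat=4))  # all 4^4 = 256 matrices
print(all(det(mul(A, B)) == (det(A) * det(B)) % n
          for A, B in product(mats, repeat=2)))  # -> True
```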

12.Let F be a field and D a function on n\times n matrices over F (with values in F). Suppose D(AB)=D(A)D(B) for all A,B. Show that either D(A)=0 for all A, or D(I)=1. In the latter case show that D(A)\neq 0 whenever A is invertible.

Solution: We have D(A)=D(AI)=D(A)D(I) for all A. If D(A)=0 for all A, the first alternative holds. Otherwise let B be a matrix with D(B)\neq 0; then D(B)=D(B)D(I), and since F is a field we may multiply both sides by [D(B)]^{-1} to get D(I)=1.
If D(I)=1 and A is invertible, then AA^{-1}=I gives 1=D(I)=D(AA^{-1})=D(A)D(A^{-1}), which forces D(A)\neq 0.

13.Let R be the field of real numbers, and let D be a function on 2\times 2 matrices over R, with values in R, such that D(AB)=D(A)D(B) for all A,B. Suppose also that

\displaystyle{D\left(\begin{bmatrix}0&1\\1&0\end{bmatrix}\right)\neq D\left(\begin{bmatrix}1&0\\0&1\end{bmatrix}\right)}

Prove the following.
( a ) D(0)=0;
( b ) D(A)=0 if A^2=0;
( c ) D(B)=-D(A) if B is obtained by interchanging the rows (or columns) of A.
( d ) D(A)=0 if one row (or one column) of A is 0;
( e ) D(A)=0 whenever A is singular.

Solution:
( a ) Since A0=0, we have D(0)=D(A0)=D(A)D(0) for every A. If D(0)\neq 0, we could cancel it (R is a field) to get D(A)=1 for all A, contradicting the hypothesis that D\left(\begin{bmatrix}0&1\\1&0\end{bmatrix}\right)\neq D\left(\begin{bmatrix}1&0\\0&1\end{bmatrix}\right). Thus D(0)=0.

( b ) We have 0=D(A^2)=D(A)D(A)=[D(A)]^2, thus D(A)=0.

( c ) Since D takes two distinct values, D is not identically zero, so Exercise 12 gives D(I)=1; by hypothesis, then, D\left(\begin{bmatrix}0&1\\1&0\end{bmatrix}\right)\neq 1. We also have

\displaystyle{D\left(\begin{bmatrix}0&1\\1&0\end{bmatrix}\begin{bmatrix}0&1\\1&0\end{bmatrix}\right)=D(I)=1\implies \left[D\left(\begin{bmatrix}0&1\\1&0\end{bmatrix}\right)\right]^2=1}

Over R this means D\left(\begin{bmatrix}0&1\\1&0\end{bmatrix}\right)=\pm 1, and since the value 1 is excluded, D\left(\begin{bmatrix}0&1\\1&0\end{bmatrix}\right)=-1.
Now if B is obtained from A by interchanging the rows of A, then B=\begin{bmatrix}0&1\\1&0\end{bmatrix}A; if by interchanging the columns, then B=A\begin{bmatrix}0&1\\1&0\end{bmatrix}. In either case D(B)=(-1)D(A)=-D(A).

( d ) Let A=\begin{bmatrix}0&0\\a&b\end{bmatrix}, then A^2=\begin{bmatrix}0&0\\ab&b^2\end{bmatrix}, and

\displaystyle{\begin{bmatrix}0&0\\0&b\end{bmatrix}\begin{bmatrix}0&0\\a&b\end{bmatrix}=\begin{bmatrix}0&0\\ab&b^2\end{bmatrix}=A^2\implies D(A^2)=D\left(\begin{bmatrix}0&0\\0&b\end{bmatrix}\right)D(A)}

Since D\left(\begin{bmatrix}0&0\\0&b\end{bmatrix}\right)=-D\left(\begin{bmatrix}0&b\\0&0\end{bmatrix}\right) by (c), and \begin{bmatrix}0&b\\0&0\end{bmatrix}^2=\begin{bmatrix}0&0\\0&0\end{bmatrix} so that (b) applies, we get D\left(\begin{bmatrix}0&0\\0&b\end{bmatrix}\right)=0 and hence D(A^2)=0. Since D(A^2)=[D(A)]^2, this gives D(A)=0.
The case A=\begin{bmatrix}a&b\\0&0\end{bmatrix} follows by a row interchange and (c), and the cases of a zero column are proved similarly using column interchanges.
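The factorization behind (d) can be spot-checked numerically; the values of a and b below are illustrative samples.

```python
# Check of the matrix identity used in (d): for A = [[0, 0], [a, b]],
# the product [[0, 0], [0, b]] * A equals A^2.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 3, 5  # sample values
A = [[0, 0], [a, b]]
print(mat_mul([[0, 0], [0, b]], A) == mat_mul(A, A))  # -> True
```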

( e ) If A is singular, then its row-reduced echelon form A' has a zero second row, so D(A')=0 by (d). Writing A'=P_k\cdots P_1A, where each P_i is an elementary matrix, we get A=P_1^{-1}\cdots P_k^{-1}A', and thus D(A)=D(P_1^{-1})\cdots D(P_k^{-1})D(A')=0.

14.Let A be a 2\times 2 matrix over a field F. Then the set of all matrices of the form f(A), where f is a polynomial over F, is a commutative ring K with identity. If B is a 2\times 2 matrix over K, the determinant of B is then a 2\times 2 matrix over F, of the form f(A). Suppose I is the 2\times 2 identity matrix over F and that B is the 2\times 2 matrix over K

\displaystyle{B=\begin{bmatrix}A-A_{11}I&-A_{12}I\\-A_{21}I&A-A_{22}I\end{bmatrix}.}

Show that \det B=f(A), where f=x^2-(A_{11}+A_{22})x+\det A, and also that f(A)=0.

Solution: We have

\displaystyle{\begin{aligned}\det B&=(A-A_{11}I)(A-A_{22}I)-(A_{12}I)(A_{21}I)\\&=A^2-(A_{11}+A_{22})A+(A_{11}A_{22}-A_{12}A_{21})I\\&=f(A)\end{aligned}}

To show that f(A)=0, notice that

\displaystyle{\begin{aligned}&\qquad A^2-(A_{11}+A_{22})A\\&=\begin{bmatrix}A_{11}^2+A_{12}A_{21}&A_{12}(A_{11}+A_{22})\\A_{21}(A_{11}+A_{22})&A_{12}A_{21}+A_{22}^2\end{bmatrix}-(A_{11}+A_{22})\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}\\&=\begin{bmatrix}A_{12}A_{21}-A_{11}A_{22}&0\\0&A_{12}A_{21}-A_{11}A_{22}\end{bmatrix}\end{aligned}}

thus f(A)=A^2-(A_{11}+A_{22})A+(A_{11}A_{22}-A_{12}A_{21})I=0.
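Exercise 14 is the 2\times 2 case of the Cayley-Hamilton theorem. A quick numerical check of f(A)=0 on a sample matrix (chosen here for illustration):

```python
# Verify f(A) = A^2 - (A11 + A22)A + (det A)I = 0 for a sample 2x2 matrix.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1],
     [4, 2]]  # sample matrix
trace = A[0][0] + A[1][1]
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]

A2 = mat_mul(A, A)
fA = [[A2[i][j] - trace * A[i][j] + detA * (1 if i == j else 0)
       for j in range(2)] for i in range(2)]
print(fA)  # -> [[0, 0], [0, 0]]
```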