Linear Algebra (2ed) Hoffman&Kunze 2.4

First give V an ordered basis \mathscr B={\alpha_1,\dots,\alpha_n}, where the order matters (i.e. the basis is a sequence). Then for each \alpha\in V there is a unique n-tuple (x_1,\dots,x_n) such that \alpha=\sum_{i=1}^nx_i\alpha_i, and x_i is called the ith coordinate of \alpha relative to the ordered basis \mathscr B={\alpha_1,\dots,\alpha_n}. In summary: every ordered basis of V determines a one-to-one correspondence \alpha\to(x_1,\dots,x_n), and this correspondence carries addition and scalar multiplication over to the corresponding operations in F^n.
We write the (x_1,\dots,x_n) obtained from \alpha\to(x_1,\dots,x_n) as [\alpha]_{\mathscr B}; this notation is convenient when discussing changes of basis, which is the core of this section: change of basis. Suppose V has two ordered bases \mathscr B={\alpha_1,\dots,\alpha_n} and \mathscr B'={\alpha_1',\dots,\alpha_n'}. Each \alpha_j' is a vector in V, so there are unique scalars P_{ij} such that \alpha_j'=\sum_{i=1}^nP_{ij}\alpha_i, 1\leq j\leq n. If we write [\alpha]_{\mathscr B'}=(x_1',\dots,x_n'), then we obtain

\alpha=\sum_{j=1}^nx_j'\alpha_j'=\sum_{j=1}^nx_j'\sum_{i=1}^nP_{ij}\alpha_i=\sum_{j=1}^n\sum_{i=1}^n(P_{ij}x_j')\alpha_i=\sum_{i=1}^n\left(\sum_{j=1}^nP_{ij}x_j'\right)\alpha_i

By the uniqueness of [\alpha]_{\mathscr B}, we get \sum_{j=1}^nP_{ij}x_j'=x_i. Writing P=(P_{ij}), this says [\alpha]_{\mathscr B}=P[\alpha]_{\mathscr B'}, or equivalently [\alpha]_{\mathscr B'}=P^{-1}[\alpha]_{\mathscr B}. The invertibility of P comes from [\alpha]_{\mathscr B'}=0\Leftrightarrow [\alpha]_{\mathscr B}=0, so by Theorem 7 of Chapter 1, P is invertible. This discussion is the content of Theorem 7 of this chapter; note that the columns of P satisfy P_j=[\alpha_j']_{\mathscr B}, j=1,\dots,n. Read in the other direction, the same discussion gives Theorem 8: if we first assume P is an invertible n\times n matrix, then for each ordered basis \mathscr B of V there is a unique ordered basis \mathscr B' such that [\alpha]_{\mathscr B}=P[\alpha]_{\mathscr B'} and [\alpha]_{\mathscr B'}=P^{-1}[\alpha]_{\mathscr B} hold for every \alpha in V. Example 18 is an example with the standard basis. Example 19 is in fact the rotation transformation in the plane. Example 20 suggests a general pattern: if the matrix P is invertible, then P_j=[\alpha_j']_{\mathscr B}, j=1,\dots,n; in particular, when \mathscr B is the standard basis, P_j is just the vector \alpha_j' of the new ordered basis \mathscr B'. To find the coordinates of any vector in the new basis \mathscr B', use [\alpha]_{\mathscr B'}=P^{-1}[\alpha]_{\mathscr B}; in particular the coordinates of X=(x_1,\cdots,x_n) in \mathscr B' are P^{-1}X.
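The relation [\alpha]_{\mathscr B}=P[\alpha]_{\mathscr B'} is easy to check numerically. Below is a small sketch in plain Python using the rotated basis in the spirit of Example 19 (the helper mat_vec and the variable names are my own, not from the book):

```python
import math

# B is the standard ordered basis of R^2; B' is the rotated basis of
# Example 19: alpha_1' = (cos t, sin t), alpha_2' = (-sin t, cos t).
t = 0.3
# Columns of P are the B-coordinates of the new basis vectors: P_j = [alpha_j']_B.
P = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def mat_vec(M, v):
    # multiply a 2x2 matrix by a column vector
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# For a rotation, P^{-1} = P^T.
P_inv = [[P[0][0], P[1][0]],
         [P[0][1], P[1][1]]]

alpha = [1.0, 2.0]                  # [alpha]_B, coordinates in the standard basis
coords_new = mat_vec(P_inv, alpha)  # [alpha]_{B'} = P^{-1} [alpha]_B

# Reconstruct alpha from the new coordinates: alpha = sum_j x_j' alpha_j'
recon = [coords_new[0]*math.cos(t) + coords_new[1]*(-math.sin(t)),
         coords_new[0]*math.sin(t) + coords_new[1]*math.cos(t)]
assert all(abs(r - a) < 1e-12 for r, a in zip(recon, alpha))
```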

Exercises

1. Show that the vectors

\alpha_1=(1,1,0,0),\quad \alpha_2=(0,0,1,1) \\ \alpha_3=(1,0,0,4),\quad \alpha_4=(0,0,0,2)

form a basis for R^4. Find the coordinates of each of the standard basis vectors in the ordered basis \{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}.

Solution: We have

\begin{aligned}&\begin{bmatrix}1&0&1&0&a\\1&0&0&0&b\\0&1&0&0&c\\0&1&4&2&d\end{bmatrix}\rightarrow\begin{bmatrix}1&0&1&0&a\\0&0&-1&0&b-a\\0&1&0&0&c\\0&0&4&2&d-c\end{bmatrix}\\ \rightarrow &\begin{bmatrix}1&0&1&0&a\\0&0&1&0&a-b\\0&1&0&0&c\\0&0&2&1&\frac{d-c}{2}\end{bmatrix} \rightarrow\begin{bmatrix}1&0&0&0&b\\0&1&0&0&c\\0&0&1&0&a-b\\0&0&0&1&\frac{d-c}{2}+2b-2a\end{bmatrix}\end{aligned}

Thus the four vectors are linearly independent; since \dim R^4=4, they form a basis for R^4.
From the augmented matrix above, we have

\begin{aligned}\epsilon_1&=(1,0,0,0)=\alpha_3-2\alpha_4 \\ \epsilon_2&=(0,1,0,0)=\alpha_1-\alpha_3+2\alpha_4 \\ \epsilon_3&=(0,0,1,0)=\alpha_2-1/2 \alpha_4 \\ \epsilon_4&=(0,0,0,1)=1/2 \alpha_4\end{aligned}

2. Find the coordinate matrix of the vector (1,0,1) in the basis of C^3 consisting of the vectors (2i,1,0),(2,-1,1),(0,1+i,1-i), in that order.

Solution:

\begin{aligned}\begin{bmatrix}2i&2&0&1\\1&-1&1+i&0\\0&1&1-i&1\end{bmatrix}&\rightarrow\begin{bmatrix}1&-1&1+i&0\\0&1&1-i&1\\0&2+2i&2-2i&1 \end{bmatrix}\rightarrow\begin{bmatrix}1&-1&1+i&0\\0&1&1-i&1\\0&0&-2-2i&-1-2i \end{bmatrix}\\&\rightarrow\begin{bmatrix}1&0&2&1\\0&1&1-i&1\\0&0&1&(3+i)/4\end{bmatrix}\rightarrow\begin{bmatrix}1&0&0&-(1+i)/2\\0&1&0&i/2\\0&0&1&(3+i)/4\end{bmatrix}\end{aligned}

thus the coordinate matrix is \begin{bmatrix}-(1+i)/2\\i/2\\(3+i)/4 \end{bmatrix}.
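Python's built-in complex type is enough to confirm this answer (a sketch, with my own variable names):

```python
# basis vectors of C^3 in the given order, and the coordinates found above
basis = [(2j, 1, 0), (2, -1, 1), (0, 1+1j, 1-1j)]
coords = [(-1-1j)/2, 1j/2, (3+1j)/4]

target = (1, 0, 1)
recon = [sum(c*v[i] for c, v in zip(coords, basis)) for i in range(3)]
assert all(abs(r - t) < 1e-12 for r, t in zip(recon, target))
```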

3. Let {\mathscr B}={\alpha_1,\alpha_2,\alpha_3} be the ordered basis for R^3 consisting of

\alpha_1=(1,0,-1),\quad \alpha_2=(1,1,1),\quad \alpha_3=(1,0,0)

What are the coordinates of the vector (a,b,c) in the ordered basis {\mathscr B}?

Solution:

\begin{aligned}\begin{bmatrix}1&1&1&a\\0&1&0&b\\-1&1&0&c\end{bmatrix}&\rightarrow\begin{bmatrix}1&1&1&a\\0&1&0&b\\0&2&1&a+c\end{bmatrix}\\&\rightarrow\begin{bmatrix}1&1&1&a\\0&1&0&b\\0&0&1&a+c-2b\end{bmatrix}\\&\rightarrow\begin{bmatrix}1&0&0&b-c\\0&1&0&b\\0&0&1&a+c-2b\end{bmatrix}\end{aligned}

thus the coordinate matrix is \begin{bmatrix}b-c\\b\\a+c-2b\end{bmatrix}.
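The formula can be spot-checked with exact rational arithmetic (plain Python sketch, names my own):

```python
from fractions import Fraction as F

alphas = [(1, 0, -1), (1, 1, 1), (1, 0, 0)]

def coords(a, b, c):
    # coordinates read off from the reduction above
    return [F(b - c), F(b), F(a + c - 2*b)]

# rebuild (a, b, c) from its coordinates for a few sample vectors
for a, b, c in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (3, -2, 5)]:
    cs = coords(a, b, c)
    recon = tuple(sum(x*v[i] for x, v in zip(cs, alphas)) for i in range(3))
    assert recon == (a, b, c)
```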

4. Let W be the subspace of C^3 spanned by \alpha_1=(1,0,i) and \alpha_2=(1+i,1,-1).
( a ) Show that \alpha_1 and \alpha_2 form a basis for W.
( b ) Show that the vectors \beta_1=(1,1,0) and \beta_2=(1,i,1+i) are in W and form another basis for W.
( c ) What are the coordinates of \alpha_1 and \alpha_2 in the ordered basis {\beta_1,\beta_2} for W?

Solution:
( a ) It’s obvious that \alpha_1 and \alpha_2 span W, and they’re linearly independent since neither is a scalar multiple of the other, thus they form a basis for W.
( b ) We have \beta_1=-i\alpha_1+\alpha_2 and \beta_2=(2-i) \alpha_1+i\alpha_2, thus \beta_1,\beta_2\in W, since they are linearly independent and we already know \dim W=2, \beta_1,\beta_2 form a basis for W.
( c ) We have

P=\begin{bmatrix}-i&2-i\\1&i\end{bmatrix}, and thus P^{-1}=\frac{1}{2}\begin{bmatrix}1-i&3+i\\1+i&i-1\end{bmatrix}, let \mathscr B={\alpha_1,\alpha_2},\mathscr B'={\beta_1,\beta_2}, then
[\alpha_1]_{\mathscr B}=\begin{bmatrix}1\\0\end{bmatrix},\quad [\alpha_2]_{\mathscr B}=\begin{bmatrix}0\\1\end{bmatrix}

thus

[\alpha_1]_{\mathscr B'}=\dfrac{1}{2} \begin{bmatrix}1-i&3+i\\1+i&i-1\end{bmatrix} \begin{bmatrix}1\\0\end{bmatrix} =\begin{bmatrix}(1-i)/2\\(1+i)/2 \end{bmatrix}

[\alpha_2]_{\mathscr B'}= \dfrac{1}{2} \begin{bmatrix}1-i&3+i\\1+i&i-1\end{bmatrix} \begin{bmatrix}0\\1\end{bmatrix}  =\begin{bmatrix}(3+i)/2 \\ (i-1)/2\end{bmatrix}
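Both coordinate vectors can be checked by rebuilding \alpha_1,\alpha_2 from \beta_1,\beta_2 with built-in complex numbers (a quick sketch; combo is my own helper):

```python
# Check [alpha_i]_{B'} by rebuilding alpha_1, alpha_2 from beta_1, beta_2.
a1, a2 = (1, 0, 1j), (1+1j, 1, -1)
b1, b2 = (1, 1, 0), (1, 1j, 1+1j)

def combo(c1, v1, c2, v2):
    # componentwise c1*v1 + c2*v2
    return tuple(c1*x + c2*y for x, y in zip(v1, v2))

assert combo((1-1j)/2, b1, (1+1j)/2, b2) == a1
assert combo((3+1j)/2, b1, (-1+1j)/2, b2) == a2
```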

5. Let \alpha=(x_1,x_2) and \beta=(y_1,y_2) be the vectors in R^2 such that

x_1y_1+x_2y_2=0,\quad x_1^2+x_2^2=y_1^2+y_2^2=1

Prove that {\mathscr B}=\{\alpha,\beta\} is a basis for R^2. Find the coordinates of the vectors (a,b) in the ordered basis {\mathscr B}=\{\alpha,\beta\}. (The conditions on \alpha and \beta say, geometrically, that \alpha and \beta are perpendicular and each has length 1.)

Solution: To show \mathscr B=\{\alpha,\beta\} is a basis for R^2, it’s enough to show they’re linearly independent. Assume they are linearly dependent; then \alpha=k\beta or \beta=k\alpha. First let \alpha=k\beta, so x_1=ky_1,x_2=ky_2, and thus x_1 y_1+x_2 y_2=ky_1^2+ky_2^2=0. Since y_1^2+y_2^2=1, we get k=0, so x_1=x_2=0, but this contradicts x_1^2+x_2^2=1. If \beta=k\alpha, we similarly reach a contradiction.
Let \gamma=(a,b), and let \mathscr B'=\{\epsilon_1,\epsilon_2 \} be the standard basis in R^2, then [\gamma]_{\mathscr B' }=\begin{bmatrix}a\\b\end{bmatrix}, and it’s easy to see \alpha=x_1 \epsilon_1+x_2 \epsilon_2 and \beta=y_1 \epsilon_1+y_2 \epsilon_2, thus P=\begin{bmatrix}x_1&y_1\\x_2&y_2 \end{bmatrix} and [\gamma]_{\mathscr B}=P^{-1} [\gamma]_{\mathscr B'}. When y_2\neq 0 we have (in the first step below, R_1 is replaced by x_1R_1+x_2R_2, using x_1^2+x_2^2=1 and x_1y_1+x_2y_2=0)

\begin{bmatrix}x_1&y_1&1&0\\x_2&y_2&0&1\end{bmatrix}\rightarrow\begin{bmatrix}1&0&x_1&x_2\\x_2&y_2&0&1\end{bmatrix}\rightarrow\begin{bmatrix}1&0&x_1&x_2\\0&y_2&-x_1 x_2&x_1^2 \end{bmatrix}\rightarrow\begin{bmatrix}1&0&x_1&x_2\\0&1&\frac{-x_1 x_2}{y_2}&\frac{x_1^2}{y_2}\end{bmatrix}

If y_2=0, then y_1\neq 0, and x_1y_1+x_2y_2=0 forces x_1=0; it follows that x_2\neq 0. In this case we have

\begin{bmatrix}x_1&y_1&1&0\\x_2&y_2&0&1\end{bmatrix}\rightarrow\begin{bmatrix}x_2&0&0&1\\0&y_1&1&0\end{bmatrix}\rightarrow\begin{bmatrix}1&0&0&1/x_2\\0&1&1/y_1&0\end{bmatrix}

thus P^{-1}=\begin{bmatrix}x_1&x_2\\-(x_1 x_2)/y_2 &(x_1^2)/y_2 \end{bmatrix},y_2\neq 0 and P^{-1}=\begin{bmatrix}0&1/x_2 \\1/y_1 &0\end{bmatrix},y_2=0=x_1, and

[\gamma]_B=\begin{bmatrix}x_1&x_2\\-(x_1 x_2)/y_2 &(x_1^2)/y_2\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}x_1 a+x_2 b\\ \dfrac{x_1}{y_2} (x_1 b-x_2 a)\end{bmatrix},\quad y_2\neq 0

[\gamma]_B=\begin{bmatrix}0&1/x_2\\1/y_1&0\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=\begin{bmatrix}b/x_2 \\a/y_1 \end{bmatrix},\quad x_1=y_2=0
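The y_2\neq 0 formula can be sanity-checked on a concrete orthonormal pair, e.g. a rotation of the standard basis (a plain Python sketch; the variable names are mine):

```python
import math

# Take alpha = (cos t, sin t), beta = (-sin t, cos t): the hypotheses hold.
t = 0.7
x1, x2 = math.cos(t), math.sin(t)
y1, y2 = -math.sin(t), math.cos(t)
assert abs(x1*y1 + x2*y2) < 1e-12 and abs(x1*x1 + x2*x2 - 1) < 1e-12

a, b = 2.0, -3.0
# formula derived above (y_2 != 0 case)
c1 = x1*a + x2*b
c2 = (x1/y2) * (x1*b - x2*a)
# c1*alpha + c2*beta should rebuild (a, b)
assert abs(c1*x1 + c2*y1 - a) < 1e-12
assert abs(c1*x2 + c2*y2 - b) < 1e-12
```

Geometrically this is no surprise: the conditions say the columns of P are orthonormal, so P^{-1}=P^T and the coordinates are just the dot products of \gamma with \alpha and \beta.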

6. Let V be the vector space over the complex numbers of all functions from R into C, i.e., the space of all complex-valued functions on the real line. Let f_1(x)=1,f_2(x)=e^{ix},f_3(x)=e^{-ix}.
( a ) Prove that f_1,f_2,f_3 are linearly independent.
( b ) Let g_1(x)=1,g_2(x)=\cos x,g_3(x)=\sin x. Find an invertible 3\times 3 matrix P such that g_j=\sum_{i=1}^3P_{ij}f_i.

Solution:
( a ) Suppose c_1 f_1+c_2 f_2+c_3 f_3=0, i.e. c_1+c_2e^{ix}+c_3e^{-ix}=0 for every x\in R. Evaluating at x=0, x=\pi and x=\pi/2 gives

c_1+c_2+c_3=0,\quad c_1-c_2-c_3=0,\quad c_1+ic_2-ic_3=0

The first two equations give c_1=0 and c_2+c_3=0; the third then gives c_2=c_3, hence c_2=c_3=0 as well. (Note the scalars c_i may be complex, so we cannot simply split \cos x and \sin x off as real and imaginary parts.)
( b ) We have

g_1=f_1,\quad g_2=\dfrac{1}{2} (f_2+f_3 ),\quad g_3=\dfrac{1}{2i}(f_2-f_3)

thus the invertible matrix P is P=\begin{bmatrix}1&0&0\\0&\frac{1}{2}&\frac{1}{2i}\\0&\frac{1}{2}&-\frac{1}{2i} \end{bmatrix}
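The relation g_j=\sum_{i=1}^3P_{ij}f_i can be checked at a few sample points (a plain Python sketch using the column convention above; names are mine):

```python
import cmath
import math

fs = [lambda x: 1, lambda x: cmath.exp(1j*x), lambda x: cmath.exp(-1j*x)]
gs = [lambda x: 1, lambda x: math.cos(x), lambda x: math.sin(x)]

# P[i][j] is P_{ij} in g_j = sum_i P_{ij} f_i (the columns give the g's)
P = [[1, 0,   0        ],
     [0, 0.5, 1/(2*1j) ],
     [0, 0.5, -1/(2*1j)]]

for j in range(3):
    for x in [0.0, 0.4, 1.3, -2.2]:
        lhs = gs[j](x)
        rhs = sum(P[i][j] * fs[i](x) for i in range(3))
        assert abs(lhs - rhs) < 1e-12
```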

7. Let V be the (real) vector space of all polynomial functions from R into R of degree 2 or less, i.e., the space of all functions f of the form f(x)=c_0+c_1x+c_2x^2. Let t be a fixed real number and define

g_1(x)=1,\quad g_2(x)=x+t,\quad g_3(x)=(x+t)^2

Prove that {\mathscr B}={g_1,g_2,g_3} is a basis for V. If

f(x)=c_0+c_1x+c_2x^2

what are the coordinates of f in this ordered basis {\mathscr B}?

Solution: Suppose a_1 g_1+a_2 g_2+a_3 g_3=0, then

a_1+a_2 (x+t)+a_3 (x+t)^2=0,\quad \forall x\in R

i.e.

a_3 x^2+(2a_3 t+a_2 )x+(a_3 t^2+a_2 t+a_1 )=0,\quad \forall x\in R

If a_1,a_2,a_3 were not all zero, the left side would be a nonzero polynomial of degree at most 2 with real coefficients, which has at most two roots in R; but it vanishes for every x\in R, a contradiction. Thus we must have a_1=a_2=a_3=0, so {g_1,g_2,g_3 } is linearly independent.
Let f=c_0+c_1 x+c_2 x^2, then we can write

\begin{aligned}f&=c_2 (x+t)^2+(c_1-2c_2 t)(x+t)+c_0-c_1 t+c_2 t^2 \\&=c_2 g_3+(c_1-2c_2 t) g_2+(c_0-c_1 t+c_2 t^2 ) g_1\end{aligned}

(the three coefficients are exactly f(-t), f'(-t) and f''(-t)/2, i.e. the Taylor expansion of f about -t), thus {g_1,g_2,g_3} spans V, and is a basis of V.
From the proofs above we can see

[f]_{\mathscr B}=\begin{bmatrix}c_0-c_1 t+c_2 t^2\\c_1-2c_2 t\\c_2 \end{bmatrix}
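The Taylor-style coefficients can be verified exactly with rational arithmetic (a sketch; the sample t and c_i are arbitrary):

```python
from fractions import Fraction as F

t = F(3, 2)                     # any fixed t works
c0, c1, c2 = F(4), F(-1), F(5)  # sample polynomial f = c0 + c1 x + c2 x^2

# coordinates of f in {1, x+t, (x+t)^2}: Taylor expansion of f about -t
d0 = c0 - c1*t + c2*t*t         # f(-t)
d1 = c1 - 2*c2*t                # f'(-t)
d2 = c2                         # f''(-t)/2

for x in [F(0), F(1), F(-2), F(7, 3)]:
    f = c0 + c1*x + c2*x*x
    g = d0 + d1*(x + t) + d2*(x + t)**2
    assert f == g
```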

Linear Algebra (2ed) Hoffman&Kunze 2.3

Dimension is a key invariant of a vector space. The section first defines linearly dependent and linearly independent. A basis is a set satisfying both conditions at once: linearly independent and spanning; if a basis is finite, the space is finite-dimensional. Example 13 gives the pattern of the standard basis. Example 14 shows that the column vectors of an invertible matrix form a basis of F^{n\times 1}. Example 15 shows how to obtain a basis of the solution space of a homogeneous system. Example 16 again uses the space of polynomial functions, this time to give an example of an infinite basis; the derivation and the ideas in this example are worth studying.
Theorem 4 is an important theorem: if V is spanned by a finite set of m vectors, then every linearly independent set of vectors in V is finite and contains no more than m vectors. The proof shows directly that any set with more than m vectors is linearly dependent, using the theory of systems of equations and Theorem 6 of Chapter 1. Corollary 1 shows that dimension is well-defined (any two bases have the same number of elements). Corollary 2 shows why dimension matters: it is the dividing line between linear dependence/independence and the ability to span. A set with more vectors than the dimension cannot be linearly independent; a set with fewer cannot span.
Before Theorem 5 there is a useful lemma: if S is linearly independent and \beta is not in the span of S, then S\cup \{\beta\} is linearly independent. The proof is by contradiction. Theorem 5 says that a subspace of a finite-dimensional space is finite-dimensional, and that any linearly independent set in the subspace can be extended to a basis of the subspace. The reverse direction also holds: any set spanning the space can be shrunk to a basis. Corollary 1 says a proper subspace has dimension strictly smaller than the space itself; Corollary 2 says any linearly independent set can be extended to a basis of the space; Corollary 3 says that if the row vectors of an n\times n matrix A are linearly independent, then A is invertible, mainly because the space spanned by the rows then has dimension n.
Theorem 6 is the familiar dimension formula: for finite-dimensional subspaces W_1,W_2, we have \dim W_1+\dim W_2=\dim (W_1\cap W_2)+\dim (W_1+W_2).
In the last part, the authors discuss how to distinguish sets from sequences. The two key points are whether repetition is allowed and whether order matters, with the emphasis on the latter, since linear independence already guarantees a set contains no repeated (identical) vectors; order is the new ingredient to watch when coordinates are introduced in the next section.

Exercises

1. Prove that if two vectors are linearly dependent, one of them is a scalar multiple of the other.

Solution: Let a,b be linearly dependent, then there are scalars c_1,c_2, not all zero, such that c_1 a+c_2 b=0. If c_1\neq 0, then we have a=-(c_2 /c_1 )b; otherwise c_2\neq 0, so b=-(c_1/c_2 )a.

2. Are the vectors

\alpha_1=(1,1,2,4),\quad \alpha_2=(2,-1,-5,2)\\ \alpha_3=(1,-1,-4,0), \quad\alpha_4=(2,1,1,6)

linearly independent in R^4?

Solution: Since

\begin{bmatrix}1&1&2&4\\2&-1&-5&2\\1&-1&-4&0\\2&1&1&6\end{bmatrix}\rightarrow\begin{bmatrix}1&1&2&4\\0&-3&-9&-6\\0&-2&-6&-4\\0&-1&-3&-2\end{bmatrix}\rightarrow\begin{bmatrix}1&1&2&4\\0&1&3&2\\0&1&3&2\\0&1&3&2\end{bmatrix}

We have \dfrac{1}{3}(\alpha _2-2\alpha _1 )=\dfrac{1}{2}(\alpha _3-\alpha _1 ), or

3(\alpha _3-\alpha _1 )=2(\alpha _2-2\alpha _1 ) \Rightarrow \alpha _1-2\alpha _2+3\alpha _3+0\alpha _4=0

Thus the vectors are linearly dependent.
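The dependence relation \alpha_1-2\alpha_2+3\alpha_3+0\alpha_4=0 is easy to confirm directly (a quick check, names mine):

```python
alphas = [(1, 1, 2, 4), (2, -1, -5, 2), (1, -1, -4, 0), (2, 1, 1, 6)]
coeffs = [1, -2, 3, 0]  # alpha1 - 2*alpha2 + 3*alpha3 + 0*alpha4

combo = tuple(sum(c*v[i] for c, v in zip(coeffs, alphas)) for i in range(4))
assert combo == (0, 0, 0, 0)
```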

3. Find a basis for the subspace of R^4 spanned by the four vectors of Exercise 2.

Solution: The vectors \alpha _1,\alpha _2 form a basis: the reduction in Exercise 2 shows the spanned subspace has dimension 2, and \alpha _1,\alpha _2 are linearly independent since neither is a scalar multiple of the other.

4. Show that the vectors

\alpha_1=(1,0,-1),\quad \alpha_2=(1,2,1),\quad \alpha_3=(0,-3,2)

form a basis for R^3. Express each of the standard basis vectors as linear combinations of \alpha_1,\alpha_2,\alpha_3.

Solution: We have

\begin{bmatrix}1&0&-1\\1&2&1\\0&-3&2\end{bmatrix}\rightarrow\begin{bmatrix}1&0&-1\\0&2&2\\0&-3&2\end{bmatrix}\rightarrow\begin{bmatrix}1&0&-1\\0&1&1\\0&0&5\end{bmatrix}\rightarrow\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}

thus the vectors \alpha _1,\alpha _2,\alpha _3 are linearly independent, and since \dim R^3=3 they form a basis for R^3. Solving x_1 \alpha _1+x_2 \alpha _2+x_3 \alpha _3=(a,b,c), we get

\begin{aligned}\begin{bmatrix}1&1&0&a\\0&2&-3&b\\-1&1&2&c\end{bmatrix}&\rightarrow\begin{bmatrix}1&1&0&a\\0&2&-3&b\\0&2&2&a+c\end{bmatrix}\rightarrow\begin{bmatrix}1&1&0&a\\0&2&-3&b\\0&0&5&a+c-b\end{bmatrix}\\&\rightarrow\begin{bmatrix}1&1&0&a\\0&2&0&\frac{2}{5} b+\frac{3}{5}(a+c)\\0&0&1&\frac{1}{5} (a+c-b) \end{bmatrix}\rightarrow\begin{bmatrix}1&0&0&\frac{7}{10} a-\frac{1}{5} b-\frac{3}{10} c\\0&1&0&\frac{1}{5} b+\frac{3}{10}(a+c)\\0&0&1&\frac{1}{5} (a+c-b) \end{bmatrix}\end{aligned}

thus

\varepsilon _1=(1,0,0)=\dfrac{7}{10} \alpha _1+\dfrac{3}{10} \alpha _2+\dfrac{1}{5} \alpha _3 \\ \varepsilon _2=(0,1,0)=-\dfrac{1}{5} \alpha _1+\dfrac{1}{5} \alpha _2-\dfrac{1}{5} \alpha _3 \\ \varepsilon _3=(0,0,1)=-\dfrac{3}{10} \alpha _1+\dfrac{3}{10} \alpha _2+\dfrac{1}{5} \alpha _3
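These expressions can be verified exactly with rational arithmetic (a sketch; the coefficient table C is mine):

```python
from fractions import Fraction as F

a1, a2, a3 = (1, 0, -1), (1, 2, 1), (0, -3, 2)
# coefficients read off above, one row per standard basis vector
C = [[F(7, 10),  F(3, 10), F(1, 5) ],
     [F(-1, 5),  F(1, 5),  F(-1, 5)],
     [F(-3, 10), F(3, 10), F(1, 5) ]]

for k, row in enumerate(C):
    combo = tuple(row[0]*a1[i] + row[1]*a2[i] + row[2]*a3[i] for i in range(3))
    expected = tuple(1 if i == k else 0 for i in range(3))
    assert combo == expected
```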

5. Find three vectors in R^3 which are linearly dependent, and are such that any two of them are linearly independent.

Solution: One possible answer is (1,0,0),(0,1,0),(1,1,0).

6. Let V be the vector space of all 2\times 2 matrices over the field F. Prove that V has dimension 4 by exhibiting a basis for V which has four elements.

Solution: A basis for V can be

A_1=\begin{bmatrix}1&0\\0&0\end{bmatrix},\quad A_2=\begin{bmatrix}0&1\\0&0\end{bmatrix},\quad A_3=\begin{bmatrix}0&0\\1&0\end{bmatrix},\quad A_4=\begin{bmatrix}0&0\\0&1\end{bmatrix}

It’s obvious that A_1,A_2,A_3,A_4 are linearly independent, and for any matrix A=\begin{bmatrix}a&b\\c&d\end{bmatrix}\in V, we have A=aA_1+bA_2+cA_3+dA_4, so they span V and form a basis for V.

7. Let V be the vector space of Exercise 6. Let W_1 be the set of matrices of the form \begin{bmatrix}x&-x\\y&z\end{bmatrix} and let W_2 be the set of matrices of the form \begin{bmatrix}a&b\\-a&c\end{bmatrix}
( a ) Prove that W_1 and W_2 are subspaces of V.
( b ) Find the dimensions of W_1,W_2,W_1+W_2,W_1\cap W_2.

Solution:
( a ) If A,B\in W_1, then A,B are of the form A=\begin{bmatrix}x&-x\\y&z\end{bmatrix},B=\begin{bmatrix}x'&-x'\\y'&z' \end{bmatrix}, thus

cA+B=c\begin{bmatrix}x&-x\\y&z\end{bmatrix}+\begin{bmatrix}x'&-x'\\y'&z' \end{bmatrix}=\begin{bmatrix}cx+x'&-cx-x'\\cy+y'&cz+z' \end{bmatrix}\in W_1

so W_1 is a subspace, similarly W_2 is a subspace.
( b ) \dim W_1=\dim W_2=3; W_1\cap W_2 consists of all matrices of the form \begin{bmatrix}x&-x\\-x&z\end{bmatrix}, so \dim (W_1\cap W_2)=2, and therefore \dim (W_1+W_2)=3+3-2=4.

8. Again let V be the space of 2\times 2 matrices over F. Find a basis {A_1,A_2,A_3,A_4} for V such that A_j^2=A_j for each j.

Solution: One such basis is

A_1=\begin{bmatrix}1&0\\0&0\end{bmatrix},\quad A_2=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad A_3=\begin{bmatrix}1&1\\0&0\end{bmatrix},\quad A_4=\begin{bmatrix}0&0\\1&1\end{bmatrix}

One checks directly that A_j^2=A_j for each j, and aA_1+bA_2+cA_3+dA_4=\begin{bmatrix}a+b+c&c\\d&b+d\end{bmatrix} vanishes only when a=b=c=d=0, so the four matrices are linearly independent and, since \dim V=4, form a basis for V.
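Both the idempotence and the spanning claim are easy to check mechanically (a plain Python sketch; matmul is my own helper):

```python
def matmul(A, B):
    # product of two 2x2 matrices given as nested lists
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

As = [[[1, 0], [0, 0]], [[1, 0], [0, 1]], [[1, 1], [0, 0]], [[0, 0], [1, 1]]]

# each A_j is idempotent
for A in As:
    assert matmul(A, A) == A

# they span: a*A1 + b*A2 + c*A3 + d*A4 = [[a+b+c, c], [d, b+d]],
# so for an arbitrary [[p,q],[r,s]] we can solve for a, b, c, d
p, q, r, s = 5, -2, 3, 7
c, d = q, r
b = s - d
a = p - b - c
M = [[a+b+c, c], [d, b+d]]
assert M == [[p, q], [r, s]]
```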

9. Let V be a vector space over a subfield F of the complex numbers. Suppose \alpha,\beta,\gamma are linearly independent vectors in V. Prove that (\alpha+\beta),(\beta+\gamma),(\gamma+\alpha) are linearly independent.

Solution: Suppose we have a(\alpha +\beta )+b(\beta +\gamma )+c(\alpha +\gamma )=0; this is equivalent to (a+c)\alpha +(a+b)\beta +(b+c)\gamma =0. Since \alpha ,\beta ,\gamma are linearly independent, we have a+c=a+b=b+c=0; solving this system gives a=b=c=0, and the conclusion follows.

10. Let V be a vector space over the field F. Suppose there are a finite number of vectors \alpha_1,\dots,\alpha_r in V which span V. Prove that V is finite-dimensional.

Solution: If {\alpha_1,\dots,\alpha_r} are linearly independent, then they form a basis for V and the conclusion follows. Now suppose {\alpha_1,\dots,\alpha_r } are linearly dependent, then there are c_1,\dots,c_r not all zero s.t.

c_1 \alpha_1+\cdots+c_r \alpha_r=0

Choose some c_i\neq 0 (such an i must exist); then \alpha_i=-c_i^{-1} \sum_{j\neq i}c_j\alpha _j, and it follows that the set {\alpha_1,\dots,\alpha_{i-1},\alpha_{i+1},\dots,\alpha_r } still spans V. Now check whether this smaller set is linearly dependent: if it is, repeat the step; if not, stop. Since each step removes one vector, the process terminates, and we end with a linearly independent spanning set, i.e. a basis for V. Thus \dim V\leq r, and we are done.

11. Let V be the set of all 2\times 2 matrices A with complex entries which satisfy A_{11}+A_{22}=0.
( a ) Show that V is a vector space over the field of real numbers, with the usual operations of matrix addition and multiplication of a matrix by a scalar.
( b ) Find a basis for this vector space.
( c ) Let W be the set of all matrices A in V such that A_{21}=-\overline{A_{12}} (the bar denotes complex conjugation). Prove that W is a subspace of V and find a basis for W.

Solution:
( a ) We can describe V=\left\{\begin{bmatrix}c&a\\b&-c\end{bmatrix}:a,b,c\in C\right\}. Since the set of all 2\times 2 complex matrices is a vector space over R under the usual operations, it suffices to show V is closed under them. Let A,B\in V and r\in R, then we can write A=\begin{bmatrix}c_1&a_1\\b_1&-c_1\end{bmatrix},B=\begin{bmatrix}c_2&a_2\\b_2&-c_2\end{bmatrix}, the entries in C, thus

rA+B=\begin{bmatrix}rc_1+c_2&ra_1+a_2\\rb_1+b_2&-rc_1-c_2\end{bmatrix}

since (rc_1+c_2 )+(-rc_1-c_2 )=0, we see rA+B\in V, so V is a vector space over R.
( b ) A basis for V can be

\begin{bmatrix}1&0\\0&-1\end{bmatrix},\begin{bmatrix}i&0\\0&-i\end{bmatrix},\begin{bmatrix}0&1\\0&0\end{bmatrix},\begin{bmatrix}0&0\\1&0\end{bmatrix},\begin{bmatrix}0&i\\0&0\end{bmatrix},\begin{bmatrix}0&0\\i&0\end{bmatrix}

( c ) Note W\subset V, so a matrix in W satisfies both A_{11}+A_{22}=0 and A_{21}=-\overline{A_{12}}; thus we can describe W=\left\{\begin{bmatrix}z&w\\-\overline{w}&-z\end{bmatrix}:z,w\in C\right\}. To show W is a subspace of V, let A,B\in W and r\in R, then we can write A=\begin{bmatrix}z_1&w_1\\-\overline{w_1}&-z_1\end{bmatrix},B=\begin{bmatrix}z_2&w_2\\-\overline{w_2}&-z_2\end{bmatrix}, thus

rA+B=\begin{bmatrix}rz_1+z_2&rw_1+w_2\\-r\overline{w_1}-\overline{w_2}&-rz_1-z_2\end{bmatrix}

Since r is real, -r\overline{w_1}-\overline{w_2}=-\overline{rw_1+w_2}, and the diagonal entries still sum to zero, so rA+B\in W.
A basis for W can be

\begin{bmatrix}1&0\\0&-1\end{bmatrix},\begin{bmatrix}i&0\\0&-i\end{bmatrix},\begin{bmatrix}0&1\\-1&0\end{bmatrix},\begin{bmatrix}0&i\\i&0\end{bmatrix}

since writing z=a+bi and w=c+di with a,b,c,d\in R expresses every element of W as a real linear combination of these four, and they are clearly linearly independent over R.
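The defining conditions of W (zero trace and A_{21}=-\overline{A_{12}}) and closure under real combinations can be checked mechanically (a sketch; in_W and the tuple representation are my own):

```python
import random

# matrices represented as nested tuples; these four satisfy both conditions of W
basis = [((1, 0), (0, -1)),
         ((1j, 0), (0, -1j)),
         ((0, 1), (-1, 0)),
         ((0, 1j), (1j, 0))]

def in_W(A):
    (a11, a12), (a21, a22) = A
    return a11 + a22 == 0 and a21 == -complex(a12).conjugate()

assert all(in_W(A) for A in basis)

# a real linear combination stays in W
coeffs = [random.uniform(-1, 1) for _ in range(4)]
S = tuple(tuple(sum(c*A[i][j] for c, A in zip(coeffs, basis)) for j in range(2))
          for i in range(2))
assert in_W(S)
```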

12. Prove that the space of all m\times n matrices over the field F has dimension mn, by exhibiting a basis for this space.

Solution: Let \varepsilon _{ij} be the matrix whose only nonzero entry is a 1 in row i, column j; then \{\varepsilon _{ij}:1\leq i\leq m,1\leq j\leq n\} is a basis for the space mentioned: the \varepsilon _{ij} are clearly linearly independent, and any A in the space equals \sum_{i,j}A_{ij}\varepsilon _{ij}.

13. Discuss Exercise 9, when V is a vector space over the field with two elements described in Exercise 5, Section 1.1.

Solution: The statement is no longer true, since if we have a(\alpha +\beta )+b(\beta +\gamma )+c(\alpha +\gamma )=0, we would get a+c=a+b=b+c=0, but in the field given now we can let a=b=c=1.

14. Let V be the set of real numbers. Regard V as a vector space over the field of rational numbers, with the usual operations. Prove that this vector space is not finite-dimensional.

Solution: Suppose V were finite-dimensional over Q, say \dim V=n with basis \{\alpha_1,\dots,\alpha_n\}. Then every real number would be written uniquely as q_1\alpha_1+\cdots+q_n\alpha_n with q_i\in Q, so the map (q_1,\dots,q_n)\mapsto q_1\alpha_1+\cdots+q_n\alpha_n would be a bijection from Q^n onto R. But Q^n is countable while R is uncountable, a contradiction. Hence V is not finite-dimensional.

Linear Algebra (2ed) Hoffman&Kunze 2.2

A subspace is not a difficult concept in itself: it is just a subset that is also a space. The check can be reduced to Theorem 1: W is a subspace iff \alpha,\beta \in W,c\in F\Rightarrow c\alpha+\beta\in W; in many other books this theorem serves as the definition of a subspace. Example 6 gives some examples, including the zero subspace and the spaces of symmetric and Hermitian (self-adjoint) matrices. Example 7 shows that the solution set of a homogeneous system is a subspace, and along the way generalizes a distributive law for matrices with a scalar multiple, A(dB+C)=d(AB)+AC, whenever the matrix products are defined.
Starting with Theorem 2, a special class of subspaces is viewed in a new way: the subspace spanned by S. Theorem 2 first shows that the intersection of any collection of subspaces of V is again a subspace. Consequently, among the subspaces containing S there is a smallest one (keep intersecting subspaces containing S), and this is how the subspace spanned by S is defined: the intersection of all subspaces containing S. The advantage of this definition is that it allows S to be infinite; when S contains only finitely many vectors, the subspace spanned by S takes the form everyone is familiar with. Theorem 3 then states that the subspace spanned by S is exactly the set of all linear combinations of vectors in S; again S is not required to be finite, which is the benefit of this way of defining the subspace.
Next comes a new definition, the sum of sets or of subspaces, which in essence is the set of all sums obtained by picking one vector from each set and adding them. If W_1,W_2,\dots,W_k are subspaces of V, then W=W_1+W_2+\dots+W_k is also a subspace (provable from the definition), and W is the subspace spanned by W_1\cup\dots \cup W_k, because, by an argument similar to that of Theorem 3, any subspace containing W_1\cup\dots \cup W_k contains W.
Finally there are several examples, of which Examples 10 and 11 are the most striking. Example 10 introduces the concept of the row space, the subspace spanned by the row vectors of a matrix, laying the groundwork for the rank of a matrix later. Example 11 is an example of an infinite-dimensional subspace, and prepares for the polynomials introduced in Chapter 4.

Exercises

1. Which of the following sets of vectors \alpha =(a_1,\dots,a_n) in R^n are subspaces of R^n(n\geq3)?
( a ) all \alpha such that a_1\geq0;
( b ) all \alpha such that a_1+3a_2=a_3;
( c ) all \alpha such that a_2=a_1^2;
( d ) all \alpha such that a_1a_2=0;
( e ) all \alpha such that a_2 is rational.

Solution:
( a ) No, since (1,0,\dots,0) satisfies, but -(1,0,\dots,0)=(-1,0,\dots,0) doesn’t.
( b ) Yes: for \alpha=(a_1,\dots,a_n ),\beta=(b_1,\dots,b_n) with a_1+3a_2=a_3 and b_1+3b_2=b_3, and for any c\in R, the vector c\alpha+\beta=(ca_1+b_1,\dots,ca_n+b_n) has the property
(ca_1+b_1 )+3(ca_2+b_2 )=c(a_1+3a_2 )+b_1+3b_2=ca_3+b_3
( c ) No, since (1,1,\dots,0) satisfies, but 2(1,1,\dots,0)=(2,2,\dots,0) doesn’t.
( d ) No, since (1,0,\dots,0) and (0,1,\dots,0) satisfies, but (1,1,\dots,0)=(1,0,\dots,0)+(0,1,\dots,0) doesn’t.
( e ) No, since (0,1,\dots,0) satisfies, but \sqrt{2} (0,1,\dots,0)=(0,\sqrt{2},\dots,0) doesn’t.

2. Let V be the (real) vector space of all functions f from R into R. Which of the following sets of functions are subspaces of V?
( a ) all f such that f(x^2)=f(x)^2;
( b ) all f such that f(0)=f(1);
( c ) all f such that f(3)=1+f(-5);
( d ) all f such that f(-1)=0;
( e ) all f which are continuous.

Solution:
( a ) No, consider f(x)=1 if x\geq 0 and f(x)=-1 if x<0, then f(x^2 )=1=f(x)^2, but 2f(x) doesn’t satisfy (2f)(x^2 )=(2f(x))^2 since (2f)(x^2 )=2, while (2f(x))^2=4.
( b ) Yes, if f,g are functions s.t. f(0)=f(1),g(0)=g(1), then
(cf+g)(0)=(cf)(0)+g(0)=cf(0)+g(0)=cf(1)+g(1)=(cf+g)(1),\quad \forall c\in R
( c ) No, if f satisfies f(3)=1+f(-5), then 2f doesn’t, since
(2f)(3)=2f(3)=2+2f(-5)=2+(2f)(-5)\neq 1+(2f)(-5)
( d ) Yes, if f,g are functions s.t. f(-1)=g(-1)=0, then
(cf+g)(-1)=cf(-1)+g(-1)=0+0=0,\quad \forall c\in R
( e ) Yes, since cf+g is continuous for any continuous functions f,g.

3. Is the vector (3,-1,0,-1) in the subspace of R^4 spanned by the vectors (2,-1,3,2),(-1,1,1,-3) and (1,1,9,-5)?

Solution: It’s equivalent to determine whether the system of equations

\begin{cases}2x_1-x_2+x_3=3\\-x_1+x_2+x_3=-1\\3x_1+x_2+9x_3=0\\2x_1-3x_2-5x_3=-1\end{cases}

has solutions or not. Since

A'=\begin{bmatrix}2&-1&1&3\\-1&1&1&-1\\3&1&9&0\\2&-3&-5&-1\end{bmatrix}\rightarrow\begin{bmatrix}-1&1&1&-1\\0&1&3&1\\0&4&12&-3\\0&-1&-3&-3\end{bmatrix}\rightarrow\begin{bmatrix}1&-1&-1&1\\0&1&3&1\\0&4&12&-3\\0&0&0&-2\end{bmatrix}

the last row reads 0=-2, so this system of equations has no solution, thus the answer is no.

4. Let W be the set of all (x_1,x_2,x_3,x_4,x_5) in R^5 which satisfy

\begin{aligned}2x_1-x_2+\frac{4}{3}x_3-x_4&=0\\x_1+\frac{2}{3}x_3-x_5&=0\\9x_1-3x_2+6x_3-3x_4-3x_5&=0\end{aligned}

Find a finite set of vectors which spans W.

Solution: We have

\begin{bmatrix}2&-1&4/3&-1&0\\1&0&2/3&0&-1\\9&-3&6&-3&-3\end{bmatrix}\rightarrow\begin{bmatrix}1&0&2/3&0&-1\\0&-1&0&-1&2\\3&-1&2&-1&-1\end{bmatrix}\\ \rightarrow\begin{bmatrix}1&0&2/3&0&-1\\0&1&0&1&-2\\0&-1&0&-1&2\end{bmatrix}\rightarrow\begin{bmatrix}1&0&2/3&0&-1\\0&1&0&1&-2\\0&0&0&0&0\end{bmatrix}

The solutions to the system can be written as x_1=x_5-\frac{2}{3} x_3,\ x_2=2x_5-x_4, with x_3,x_4,x_5 free. Taking (x_3,x_4,x_5)=(3,0,0),(0,1,0),(0,0,1) in turn, a finite set of vectors which spans W is (-2,0,3,0,0),(0,-1,0,1,0),(1,2,0,0,1).
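Each of the three spanning vectors can be checked against the original equations with exact arithmetic (a quick sketch; satisfies is my own helper):

```python
from fractions import Fraction as F

def satisfies(v):
    # check v against the three equations of the system
    x1, x2, x3, x4, x5 = (F(x) for x in v)
    return (2*x1 - x2 + F(4, 3)*x3 - x4 == 0
            and x1 + F(2, 3)*x3 - x5 == 0
            and 9*x1 - 3*x2 + 6*x3 - 3*x4 - 3*x5 == 0)

spanning = [(-2, 0, 3, 0, 0), (0, -1, 0, 1, 0), (1, 2, 0, 0, 1)]
assert all(satisfies(v) for v in spanning)
```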

5. Let F be a field and let n be a positive integer (n\geq2). Let V be the vector space of all n\times n matrices over F. Which of the following sets of matrices A in V are subspaces of V?
( a ) all invertible A;
( b ) all non-invertible A;
( c ) all A such that AB=BA, where B is some fixed matrix in V;
( d ) all A such that A^2=A.

Solution:
( a ) No, since A=\begin{bmatrix}1&0\\0&1\end{bmatrix},B=\begin{bmatrix}1&1\\0&1\end{bmatrix} are invertible matrices, but B-A=\begin{bmatrix}0&1\\0&0\end{bmatrix} is not.
( b ) No, since A=\begin{bmatrix}1&0\\0&0\end{bmatrix},B=\begin{bmatrix}0&0\\0&1\end{bmatrix} are not invertible, but A+B=\begin{bmatrix}1&0\\0&1\end{bmatrix} is.
( c ) Yes, since for A,C belongs to the set, we have

(dA+C)B=d(AB)+CB=d(BA)+BC=B(dA+C),\quad \forall d\in F

( d ) No, since if A=\begin{bmatrix}1&0\\0&1\end{bmatrix}, then A^2=A, but (3A)^2=9I\neq 3I=3A.

6. (a) Prove that the only subspaces of R^1 are R^1 and the zero subspace.
(b) Prove that a subspace of R^2 is R^2, or the zero subspace, or consists of all scalar multiples of some fixed vector in R^2. (The last type of subspace is, intuitively, a straight line through the origin.)
(c) Can you describe the subspaces of R^3?

Solution:
( a ) It’s apparent that R^1 and the zero space are subspaces of R^1. Assume there’s a subspace W\subset R^1 different from R^1 and the zero space; then there is a\in W with a\neq 0, and some b\notin W. Let k=b/a, then b=ka\in W since W is closed under scalar multiplication, a contradiction.
( b ) It’s apparent that R^2 and the zero space are subspaces of R^2, and if W=\{kv:k\in R\} for some fixed v\in R^2, it’s easy to prove W is a subspace of R^2. Assume there’s a subspace U\subset R^2 not of the three types above; then U contains a nonzero vector u, and there is some v\notin U. Since U is not the set of scalar multiples of u, we can find u'\in U which is not a scalar multiple of u; then u,u' are linearly independent and hence span R^2, so there are x_1,x_2\in R with x_1 u+x_2 u'=v, which means v\in U, a contradiction.
( c ) The subspaces of R^3 are R^3, the zero subspace, the lines through the origin \{kv:k\in R\} for a fixed nonzero v\in R^3, and the planes through the origin \{ku+lv:k,l\in R\} for fixed linearly independent u,v\in R^3.

7. Let W_1 and W_2 be subspaces of a vector space V such that the set-theoretic union of W_1 and W_2 is also a subspace. Prove that one of the spaces W_i is contained in the other.

Solution: Let W=W_1\cup W_2; then W is a subspace of V. Assume \exists a\in W_1\backslash W_2,b\in W_2\backslash W_1, then a,b\in W, thus a+b\in W, which means (a+b\in W_1 )\vee (a+b\in W_2 ), but this means

(b=(a+b)-a\in W_1 )\vee (a=(a+b)-b\in W_2 )

and either leads to a contradiction, thus either W_1\backslash W_2=\emptyset or W_2\backslash W_1=\emptyset, the conclusion follows.

8. Let V be the vector space of all functions from R into R; let V_e be the subset of even functions, f(-x)=f(x); let V_o be the subset of odd functions, f(-x)=-f(x).
( a ) Prove that V_e and V_o are subspaces of V.
( b ) Prove that V_e+V_o=V.
( c ) Prove that V_e\cap V_o=\{0\}.

Solution:
( a ) First let f,g\in V_e, then f and g are even functions, and for c\in R we have

(cf+g)(-x)=cf(-x)+g(-x)=cf(x)+g(x)=(cf+g)(x)

thus cf+g\in V_e and V_e is a subspace of V. Similarly, if f,g\in V_o, then

(cf+g)(-x)=cf(-x)+g(-x)=-cf(x)-g(x)=-(cf+g)(x)

thus cf+g\in V_o and V_o is a subspace of V.
( b ) For any h\in V, define

f(x)=\dfrac{h(x)+h(-x)}{2},\quad g(x)=\dfrac{h(x)-h(-x)}{2}

then f(-x)=f(x),g(-x)=-g(x), so f\in V_e,g\in V_o, and f+g=h.
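The decomposition h=f+g can be checked on a concrete function (a sketch; the test function h is an arbitrary choice of mine):

```python
import math

def h(x):
    return math.exp(x) + x**3          # an arbitrary test function

def f(x):                              # even part of h
    return (h(x) + h(-x)) / 2

def g(x):                              # odd part of h
    return (h(x) - h(-x)) / 2

for x in [0.0, 0.5, -1.3, 2.0]:
    assert abs(f(-x) - f(x)) < 1e-12   # f is even
    assert abs(g(-x) + g(x)) < 1e-12   # g is odd
    assert abs(f(x) + g(x) - h(x)) < 1e-12
```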
( c ) If h\in V_e\cap V_o, then h(-x)=h(x)=-h(x), thus h(x)=0,\forall x\in R.

9. Let W_1 and W_2 be subspaces of a vector space V such that W_1+W_2=V and W_1\cap W_2=\{0\}. Prove that for each vector \alpha in V there are unique vectors \alpha_1 in W_1 and \alpha_2 in W_2 such that \alpha = \alpha_1+\alpha_2.

Solution: Let \alpha\in V, since V=W_1+W_2, we can find \alpha_1\in W_1,\alpha_2\in W_2 s.t. \alpha=\alpha_1+\alpha_2.
To prove uniqueness, suppose there’s \beta_1\in W_1,\beta_2\in W_2 s.t. \alpha=\beta_1+\beta_2, then

\alpha_1+\alpha_2=\beta_1+\beta_2  \Rightarrow \alpha_1-\beta_1=\beta_2-\alpha_2

since \alpha_1-\beta_1\in W_1 and \beta_2-\alpha_2\in W_2, we know that \alpha_1-\beta_1\in W_1\cap W_2,\beta_2-\alpha_2\in W_1\cap W_2, thus \alpha_1-\beta_1=\beta_2-\alpha_2=0, which means \alpha_1=\beta_1,\beta_2=\alpha_2.