Linear Algebra (2ed) Hoffman & Kunze 8.3

The first portion of this section treats linear functionals on an inner product space and their relation to the inner product. The basic result is that any linear functional f on a finite-dimensional inner product space is ‘inner product with a fixed vector in the space,’ i.e., that such an f has the form f(\alpha)=(\alpha|\beta) for some fixed \beta\in V. We use this result to prove the existence of the ‘adjoint’ of a linear operator T on V, this being a linear operator T^* such that (T\alpha|\beta)=(\alpha|T^*\beta) for all \alpha,\beta\in V. Through the use of an orthonormal basis, the adjoint operation on linear operators (passing from T to T^*) is identified with the operation of forming the conjugate transpose of a matrix. We also explore briefly the analogy between the adjoint operation and conjugation on complex numbers.

Exercises

1.Let V be the space C^2, with the standard inner product. Let T be the linear operator defined by T\epsilon_1=(1,-2),T\epsilon_2=(i,-1). If \alpha=(x_1,x_2), find T^*\alpha.
Solution: The matrix of T in the standard ordered basis is \begin{bmatrix}1&i\\-2&-1\end{bmatrix}, which means the matrix of T^* in the standard ordered basis is \begin{bmatrix}1&-2\\-i&-1\end{bmatrix}, thus

\displaystyle{T^*\alpha=x_1T^*\epsilon_1+x_2T^*\epsilon_2=x_1(1,-i)+x_2(-2,-1)=(x_1-2x_2,-ix_1-x_2)}
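As a quick numerical sanity check, here is a minimal numpy sketch (np.vdot conjugates its first argument, so the standard inner product (x|y) is np.vdot(y, x)):

```python
import numpy as np

# Columns of A are T(e1) = (1,-2) and T(e2) = (i,-1).
A = np.array([[1, 1j],
              [-2, -1]])
A_star = A.conj().T  # matrix of T*: the conjugate transpose of A

ip = lambda x, y: np.vdot(y, x)  # (x|y) = sum_i x_i * conj(y_i)

rng = np.random.default_rng(0)
a = rng.standard_normal(2) + 1j * rng.standard_normal(2)
b = rng.standard_normal(2) + 1j * rng.standard_normal(2)
assert np.isclose(ip(A @ a, b), ip(a, A_star @ b))  # (Ta|b) = (a|T*b)
```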

2.Let T be the linear operator on C^2 defined by T\epsilon_1=(1+i,2),T\epsilon_2=(i,i). Using the standard inner product, find the matrix of T^* in the standard ordered basis. Does T commute with T^*?
Solution: The matrix of T in the standard ordered basis is \begin{bmatrix}1+i&i\\2&i\end{bmatrix}, which means the matrix of T^* in the standard ordered basis is \begin{bmatrix}1-i&2\\-i&-i\end{bmatrix}. T does not commute with T^* since

\displaystyle{\begin{bmatrix}1+i&i\\2&i\end{bmatrix}\begin{bmatrix}1-i&2\\-i&-i\end{bmatrix}=\begin{bmatrix}3&3+2i\\3-2i&5\end{bmatrix},\begin{bmatrix}1-i&2\\-i&-i\end{bmatrix}\begin{bmatrix}1+i&i\\2&i\end{bmatrix}=\begin{bmatrix}6&1+3i\\1-3i&2\end{bmatrix}}
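A short numpy sketch confirming the two products:

```python
import numpy as np

A = np.array([[1 + 1j, 1j],
              [2, 1j]])
As = A.conj().T
print(A @ As)   # [[3, 3+2j], [3-2j, 5]]
print(As @ A)   # [[6, 1+3j], [1-3j, 2]]
assert not np.allclose(A @ As, As @ A)  # T and T* do not commute
```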

3.Let V be C^3 with the standard inner product. Let T be the linear operator on V whose matrix in the standard ordered basis is defined by A_{jk}=i^{j+k},(i^2=-1). Find a basis for the null space of T^*.
Solution: Let \mathscr B be the standard ordered basis, then

\displaystyle{[T]_{\mathscr B}=\begin{bmatrix}-1&-i&1\\-i&1&i\\1&i&-1\end{bmatrix}\implies [T^*]_{\mathscr B}=\begin{bmatrix}-1&i&1\\i&1&-i\\1&-i&-1\end{bmatrix}}

To find a basis for the null space of T^*, we see that

\displaystyle{\begin{bmatrix}-1&i&1\\i&1&-i\\1&-i&-1\end{bmatrix}{\rightarrow}\begin{bmatrix}1&-i&-1\\0&0&0\\0&0&0\end{bmatrix}}

so the solutions satisfy x_1=ix_2+x_3, and one basis for the null space of T^* is (i,1,0),(1,0,1).
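The same computation can be done symbolically; a sympy sketch (A.H is the conjugate transpose):

```python
import sympy as sp

# A_jk = i^(j+k) with 1-based indices j, k.
A = sp.Matrix(3, 3, lambda j, k: sp.I**(j + k + 2))
null_basis = A.H.nullspace()  # basis for the null space of T*
print(null_basis)  # spans the same plane as (i,1,0) and (1,0,1)
```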

4.Let V be a finite-dimensional inner product space and T a linear operator on V. Show that the range of T^* is the orthogonal complement of the null space of T.
Solution: Let W=\text{null }T. First suppose \beta\in\text{range }T^*, then \exists \alpha\in V, s.t. T^*\alpha=\beta; for any u\in W we have (u|\beta)=(u|T^*\alpha)=(Tu|\alpha)=0, thus \beta\in W^{\perp}, and it follows that \text{range }T^*\subset W^{\perp}. Next we see that

\displaystyle{w\in \text{null }T{\Leftrightarrow}Tw=0{\Leftrightarrow}(Tw|v)=0,\forall v\in V{\Leftrightarrow}(w|T^*v)=0{\Leftrightarrow}w\in(\text{range }T^*)^{\perp}}

so \text{null }T=(\text{range }T^*)^{\perp}, which means \dim (\text{range }T^*)^{\perp}=\dim \text{null }T. Using the facts that V=\text{null }T\oplus(\text{null }T)^{\perp} and V=\text{range }T^*\oplus(\text{range }T^*)^{\perp}, we get \dim (\text{null }T)^{\perp}=\dim \text{range }T^*; combined with \text{range }T^*\subset W^{\perp}=(\text{null }T)^{\perp}, this gives \text{range }T^*=(\text{null }T)^{\perp}.

5.Let V be a finite-dimensional inner product space and T a linear operator on V. If T is invertible, show that T^* is invertible and (T^*)^{-1}=(T^{-1})^*.
Solution: We have I^*=I, so that

\displaystyle{I=(TT^{-1})^*=(T^{-1})^*T^{*},\qquad I=(T^{-1}T)^*=T^{*}(T^{-1})^*}

so (T^{-1})^* is a two-sided inverse of T^*, i.e. T^* is invertible and (T^*)^{-1}=(T^{-1})^*.

6.Let V be an inner product space and \beta,\gamma fixed vectors in V. Show that T\alpha=(\alpha|\beta)\gamma defines a linear operator on V. Show that T has an adjoint, and describe T^* explicitly. Now suppose V is C^n with standard inner product, \beta=(y_1,\dots,y_n) and \gamma=(x_1,\dots,x_n). What is the j,k entry of the matrix of T in the standard ordered basis? What is the rank of this matrix?
Solution: We let a,b\in V and c a scalar, then
\displaystyle{T(ca+b)=(ca+b|\beta)\gamma=[c(a|\beta)+(b|\beta)]\gamma=c(a|\beta)\gamma+(b|\beta)\gamma=cTa+Tb}
To see T has an adjoint, let U\alpha=(\alpha|\gamma)\beta, then

\displaystyle{(T\alpha|\delta)=((\alpha|\beta)\gamma|\delta)=(\alpha|\beta)(\gamma|\delta)=(\gamma|\delta)(\alpha|\beta)=(\alpha|(\delta|\gamma)\beta)=(\alpha|U\delta)}

which means T^*\alpha=(\alpha|\gamma)\beta.
In the case V=C^n, if A is the matrix of T in the standard ordered basis, then

\displaystyle{T\epsilon_k=(\epsilon_k|\beta)\gamma=\overline{y_k}(x_1,\dots,x_n)}

So the j,k-th entry of A is x_j\overline{y_k}. The rank of A is 1, provided \beta and \gamma are both nonzero (if either is zero, then T=0 and the rank is 0).
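In matrix terms A=x\,\overline{y}^{\,t} is a rank-one outer product; a numpy sketch with arbitrary sample values for \beta and \gamma (hypothetical, just for the check):

```python
import numpy as np

y = np.array([1 + 1j, 2, -1j])   # coordinates of beta (sample values)
x = np.array([3, 1j, 1 - 1j])    # coordinates of gamma (sample values)
A = np.outer(x, y.conj())        # A_jk = x_j * conj(y_k)
assert np.linalg.matrix_rank(A) == 1  # rank 1 since beta, gamma are nonzero
```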

7.Show that the product of two self-adjoint operators is self-adjoint if and only if the two operators commute.
Solution: Let T,U be two self-adjoint operators. If TU=UT, then (TU)^*=U^*T^*=UT=TU, so TU is self-adjoint. Conversely, if TU is self-adjoint, we have (TU)^*=TU, and then TU=(TU)^*=U^*T^*=UT.

8.Let V be the vector space of the polynomials over R of degree less than or equal to 3, with the inner product (f|g)=\int_0^1f(t)g(t)dt. If t is a real number, find the polynomial g_t in V such that (f|g_t)=f(t) for all f\in V.
Solution: Let g_t=a+bx+cx^2+dx^3, in which a,b,c,d\in R. Since 1,x,x^2,x^3 is a basis for V, it suffices to require (1|g_t)=1,(x|g_t)=t,(x^2|g_t)=t^2,(x^3|g_t)=t^3.
Solving the system of equations

a+\dfrac{1}{2}b+\dfrac{1}{3}c+\dfrac{1}{4}d=1 \\ \dfrac{1}{2}a+\dfrac{1}{3}b+\dfrac{1}{4}c+\dfrac{1}{5}d=t \\ \dfrac{1}{3}a+\dfrac{1}{4}b+\dfrac{1}{5}c+\dfrac{1}{6}d=t^2 \\ \dfrac{1}{4}a+\dfrac{1}{5}b+\dfrac{1}{6}c+\dfrac{1}{7}d=t^3

we can get a solution for (a,b,c,d) with parameter t, and g_t follows.
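The elimination can be delegated to sympy; a sketch (the coefficient matrix is the 4\times 4 Hilbert matrix of moments \int_0^1t^{j+k}dt):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
G = sp.Matrix(4, 4, lambda j, k: sp.Rational(1, j + k + 1))  # (x^j|x^k) = 1/(j+k+1)
rhs = sp.Matrix([1, t, t**2, t**3])
coeffs = G.LUsolve(rhs)                 # unique solution since G is invertible
g_t = sum(coeffs[i] * x**i for i in range(4))
print(sp.expand(g_t))                   # the polynomial g_t with (f|g_t) = f(t)
```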

9.Let V be the inner product space of Exercise 8, and let D be the differentiation operator on V. Find D^*.
Solution: Integration by parts gives (Df|g)+(f|Dg)=\int_0^1(fg)'dt=f(1)g(1)-f(0)g(0), and since (f|D^*g)=(Df|g), the adjoint D^* must satisfy

\displaystyle{(f|Dg+D^*g)=f(1)g(1)-f(0)g(0),\quad \forall f,g\in V}

We let T=D+D^* and f_i=x^i,i=0,1,2,3 be a basis for V; then (f|Tg)=f(1)g(1)-f(0)g(0) for all f\in V if and only if (f_i|Tg)=f_i(1)g(1)-f_i(0)g(0) for i=0,1,2,3, which means

\displaystyle{\int_0^1Tg=g(1)-g(0),\int_0^1xTg=g(1),\int_0^1x^2Tg=g(1),\int_0^1x^3Tg=g(1)}

Using the result of Exercise 8 we can get an expression for Tg, and then the relation D^*=T-D gives the result.
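Equivalently, one can compute the matrix of D^* directly in the basis \{1,x,x^2,x^3\}: with Gram matrix G (real and symmetric), the condition (Df|g)=(f|D^*g) translates to [D^*]^tG=G[D], so [D^*]=G^{-1}[D]^tG. A sympy sketch:

```python
import sympy as sp

# Gram matrix of 1, x, x^2, x^3 under (f|g) = integral_0^1 f*g.
G = sp.Matrix(4, 4, lambda j, k: sp.Rational(1, j + k + 1))

# Matrix of D in this basis: D(1)=0, D(x)=1, D(x^2)=2x, D(x^3)=3x^2.
D = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 3],
               [0, 0, 0, 0]])

D_star = G.inv() * D.T * G   # [D*] = G^{-1} [D]^t G for a real inner product
print(D_star)
```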

10.Let V be the space of n\times n matrices over the complex numbers, with the inner product (A,B)=\text{tr}(AB^*). Let P be a fixed invertible matrix in V, and let T_P be the linear operator on V defined by T_P(A)=P^{-1}AP. Find the adjoint of T_P.
Solution: The adjoint of T_P shall satisfy (T_P(A),B)=(A,T_P^*(B)), since (T_P(A),B)=\text{tr}(P^{-1}APB^*)=\text{tr}(APB^*P^{-1}), we see that T_P^*(B)=(PB^*P^{-1})^*=(P^{-1})^*BP^*.

11.Let V be a finite-dimensional inner product space, and let E be an idempotent linear operator on V, i.e., E^2=E. Prove that E is self-adjoint if and only if EE^*=E^*E.
Solution: If E is self-adjoint, then EE^*=E^2=E^*E. Conversely, if we have EE^*=E^*E, then as

\displaystyle{(E+E^*-I)(E-E^*)=E^2-EE^*+E^*E-(E^*)^2-E+E^*=0}

we see that (E+E^*-I)(E-E^*)v=0 for all v\in V. Now suppose there is u\neq 0 such that (E+E^*-I)u=0, then we have E(E+E^*-I)u=EE^*u=0, or (EE^*u|u)=(E^*u|E^*u)=0, which shows E^*u=0, similarly we can show Eu=0, but then (E+E^*-I)u=-u\neq 0, a contradiction, thus the null space of the operator (E+E^*-I) is the zero space, which means (E-E^*)v=0 for all v\in V, or E=E^*.

12.Let V be a finite-dimensional complex inner product space, and let T be a linear operator on V. Prove that T is self-adjoint if and only if (T\alpha|\alpha) is real for every \alpha\in V.
Solution: If T^*=T, then

\displaystyle{(T\alpha|\alpha)=(\alpha|T\alpha)=\overline{(T\alpha|\alpha)},\quad \forall \alpha\in V}

so (T\alpha|\alpha) is real. Conversely, if (T\alpha|\alpha) is real for every \alpha\in V, then

\displaystyle{0=(T\alpha|\alpha)-\overline{(T\alpha|\alpha)}=(T\alpha|\alpha)-(\alpha|T\alpha)=(T\alpha|\alpha)-(T^*\alpha|\alpha)}

thus ((T-T^*)\alpha|\alpha)=0 for every \alpha\in V. Let U=T-T^*, then

\displaystyle{0=\dfrac{(U(\alpha+\beta)|\alpha+\beta)-(U(\alpha-\beta)|\alpha-\beta)}{4}=\dfrac{(U\beta|\alpha)+(U\alpha|\beta)}{2}}

and

\displaystyle{0=\dfrac{(U(\alpha+i\beta)|\alpha+i\beta)-(U(\alpha-i\beta)|\alpha-i\beta)}{4}i=\dfrac{(U\alpha|i\beta)+(U(i\beta)|\alpha)}{2}i=\dfrac{(U\alpha|\beta)-(U\beta|\alpha)}{2}}

thus we have (U\alpha|\beta)=0 for all \alpha,\beta\in V. Let \beta=U\alpha, we see U\alpha=0 for all \alpha\in V, thus U=0 and T=T^*.

Linear Algebra (2ed) Hoffman & Kunze 8.2

This section treats inner product spaces. The material is rich but classical: the Cauchy-Schwarz inequality, orthogonal and orthonormal sets, and the Gram-Schmidt orthogonalization process. The second half covers projections and the minimization property, one of the core topics of linear algebra. The important concepts here are the orthogonal projection and the orthogonal complement. Theorem 5 shows that a subspace and its orthogonal complement give a direct-sum decomposition of the whole space; its corollaries show that if E is the projection onto a subspace then I-E is the projection onto its orthogonal complement, and give Bessel's inequality: the length of the projection of any vector onto the span of an orthogonal set is at most the length of the vector itself. The inequality is obvious in a finite-dimensional space, but Bessel's inequality is not restricted to the finite-dimensional case.

Exercises

1.Consider R^4 with the standard inner product. Let W be the subspace of R^4 consisting of all vectors which are orthogonal to both \alpha=(1,0,-1,1) and \beta=(2,3,-1,2). Find a basis for W.
Solution: If x=(x_1,x_2,x_3,x_4)\in W, then it shall satisfy (x|\alpha)=0 and (x|\beta)=0, thus from

\displaystyle{\begin{bmatrix}1&0&-1&1\\2&3&-1&2\end{bmatrix}\rightarrow\begin{bmatrix}1&0&-1&1\\0&3&1&0\end{bmatrix}}

we can find a basis for W to be (1,0,0,-1),(0,1,-3,-3).

2.Apply the Gram-Schmidt process to the vectors \beta_1=(1,0,1),\beta_2=(1,0,-1),\beta_3=(0,3,4), to obtain an orthonormal basis for R^3 with the standard inner product.
Solution: We let \alpha_1=\beta_1, and then

\displaystyle{\alpha_2=(1,0,-1)-\frac{0}{2}\alpha_1=(1,0,-1) \\ \alpha_3=(0,3,4)-\frac{4}{2}(1,0,1)-\frac{-4}{2}(1,0,-1)=(0,3,0)}

An orthonormal basis for R^3 is then \frac{1}{\sqrt{2}}(1,0,1),\frac{1}{\sqrt{2}}(1,0,-1),(0,1,0).
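The process itself is mechanical; a minimal numpy sketch, assuming the input vectors are linearly independent:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors w.r.t. the standard inner product."""
    ortho = []
    for v in vectors:
        w = np.asarray(v, dtype=float)
        for u in ortho:
            w = w - np.dot(w, u) * u         # remove the component along u
        ortho.append(w / np.linalg.norm(w))  # fails if the inputs are dependent
    return ortho

for q in gram_schmidt([(1, 0, 1), (1, 0, -1), (0, 3, 4)]):
    print(q)  # (1,0,1)/sqrt(2), (1,0,-1)/sqrt(2), (0,1,0)
```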

3.Consider C^3, with the standard inner product. Find an orthonormal basis for the subspace spanned by \beta_1=(1,0,i) and \beta_2=(2,1,1+i).
Solution: We have

\displaystyle{\begin{aligned}\beta_2-\dfrac{(\beta_2|\beta_1)}{||\beta_1||^2}\beta_1&=(2,1,1+i)-\frac{2-i(1+i)}{2}(1,0,i)\\&=(2,1,1+i)-\frac{3-i}{2}(1,0,i)\\&=\left(\frac{1+i}{2},1,\frac{1-i}{2}\right)\end{aligned}}

thus (1,0,i),\left(\dfrac{1+i}{2},1,\dfrac{1-i}{2}\right) form an orthogonal basis for the desired subspace. The corresponding orthonormal basis is

\displaystyle{\left(\frac{1}{\sqrt{2}},0,\frac{i}{\sqrt{2}}\right),\left(\dfrac{1+i}{2\sqrt{2}},\frac{1}{\sqrt{2}},\dfrac{1-i}{2\sqrt{2}}\right)}

4.Let V be an inner product space. The distance between two vectors \alpha and \beta in V is defined by d(\alpha,\beta)=||\alpha-\beta||. Show that
( a ) d(\alpha,\beta)\geq 0;
( b ) d(\alpha,\beta)=0 if and only if \alpha=\beta;
( c ) d(\alpha,\beta)=d(\beta,\alpha);
( d ) d(\alpha,\beta)\leq d(\alpha,\gamma)+d(\gamma,\beta).
Solution:
( a ) Immediate, since d(\alpha,\beta)=||\alpha-\beta||\geq 0 by definition;
( b ) By Theorem 1(ii), ||\alpha-\beta||>0 for \alpha-\beta\neq 0;
( c ) We have

\displaystyle{\begin{aligned}||\alpha-\beta||^2&=(\alpha-\beta|\alpha-\beta)\\&=(\alpha|\alpha)-(\alpha|\beta)-(\beta|\alpha)+(\beta|\beta)\\&=(\beta-\alpha|\beta-\alpha)=||\beta-\alpha||^2\end{aligned}}

thus d(\alpha,\beta)=d(\beta,\alpha), both being the nonnegative square roots of the equal quantities ||\alpha-\beta||^2 and ||\beta-\alpha||^2.
( d ) By Theorem 1(iv), we have

\displaystyle{\begin{aligned}d(\alpha,\beta)&=||\alpha-\beta||=||\alpha-\gamma+\gamma-\beta||\\&\leq ||\alpha-\gamma||+||\gamma-\beta||=d(\alpha,\gamma)+d(\gamma,\beta)\end{aligned}}

5.Let V be an inner product space, and let \alpha,\beta be vectors in V. Show that \alpha=\beta if and only if (\alpha|\gamma)=(\beta|\gamma) for every \gamma\in V.
Solution: One direction is obvious. Conversely, if (\alpha|\gamma)=(\beta|\gamma) for every \gamma\in V, then (\alpha-\beta|\gamma)=0 for every \gamma\in V; taking \gamma=\alpha-\beta gives ||\alpha-\beta||^2=0, so d(\alpha,\beta)=||\alpha-\beta||=0, which means \alpha=\beta.

6.Let W be the subspace of R^2 spanned by the vector (3,4). Using the standard inner product, let E be the orthogonal projection of R^2 onto W. Find
( a ) a formula for E(x_1,x_2);
( b ) the matrix of E in the standard ordered basis;
( c ) W^{\perp};
( d ) an orthonormal basis in which E is represented by the matrix \begin{bmatrix}1&0\\0&0\end{bmatrix}.
Solution:
( a ) E(x_1,x_2)=\dfrac{3x_1+4x_2}{25}(3,4).
( b ) E(1,0)=\frac{3}{25}(3,4)=(\frac{9}{25},\frac{12}{25}) and E(0,1)=\frac{4}{25}(3,4)=(\frac{12}{25},\frac{16}{25}). Thus the matrix is

\displaystyle{\begin{bmatrix}\dfrac{9}{25}&\dfrac{12}{25}\\ \dfrac{12}{25}&\dfrac{16}{25}\end{bmatrix}}

( c ) W^{\perp} has dimension 1 and we have (x_1,x_2)-E(x_1,x_2)\in W^{\perp}, so let (x_1,x_2)=(1,0) we find a vector in W^{\perp} to be (\frac{16}{25},-\frac{12}{25}), which means W^{\perp} is the subspace spanned by (4,-3).
( d ) One such orthonormal basis is (\frac{3}{5}, \frac{4}{5}),(\frac{4}{5}, -\frac{3}{5}).
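For a one-dimensional W=\text{span}\{v\} with the standard inner product, the matrix of E is vv^t/||v||^2; a numpy sketch reproducing (b)-(d):

```python
import numpy as np

v = np.array([3.0, 4.0])
E = np.outer(v, v) / np.dot(v, v)   # v v^T / ||v||^2 = [[9,12],[12,16]]/25
assert np.allclose(E @ E, E)                       # E is idempotent
assert np.allclose(E @ v, v)                       # E fixes W
assert np.allclose(E @ np.array([4.0, -3.0]), 0)   # E annihilates W-perp
```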

7.Let V be the inner product space consisting of R^2 and the inner product whose quadratic form is defined by

\displaystyle{||(x_1,x_2)||^2=(x_1-x_2)^2+3x_2^2.}

Let E be the orthogonal projection of V onto the subspace W spanned by the vector (3,4). Now answer the four questions of Exercise 6.
Solution: We first compute the inner product under this case, let x=(x_1,x_2),y=(y_1,y_2), then we have

\displaystyle{\begin{aligned}(x|y)&=\frac{1}{4}||x+y||^2-\frac{1}{4}||x-y||^2\\&=\frac{1}{4}||(x_1+y_1,x_2+y_2)||^2-\frac{1}{4}||(x_1-y_1,x_2-y_2)||^2\\&=\frac{1}{4}(2x_1-2x_2)(2y_1-2y_2)+\frac{3}{4}(4x_2y_2)\\&=(x_1-x_2)(y_1-y_2)+3x_2y_2\end{aligned}}

( a ) E(x_1,x_2)=\dfrac{((x_1,x_2)|(3,4))}{1+48}(3,4)=\dfrac{13x_2-x_1}{49}(3,4);
( b ) E(1,0)=-\dfrac{1}{49}(3,4),E(0,1)=\dfrac{13}{49}(3,4), thus the matrix is

\displaystyle{\begin{bmatrix}-\dfrac{3}{49}&\dfrac{39}{49}\\-\dfrac{4}{49}&\dfrac{52}{49}\end{bmatrix}}

( c ) W^{\perp} has dimension 1 and we have (x_1,x_2)-E(x_1,x_2)\in W^{\perp}, so let (x_1,x_2)=(1,0) we find a vector in W^{\perp} to be (\frac{52}{49},\frac{4}{49}), which means W^{\perp} is the subspace spanned by (13,1).
( d ) It follows that (3,4),(13,1) form a basis for V, and one orthonormal basis is (\frac{3}{7}, \frac{4}{7}),(\frac{13}{\sqrt{147}}, \frac{1}{\sqrt{147}}).

8.Find an inner product on R^2 such that (\epsilon_1|\epsilon_2)=2.
Solution: If the quadratic form of (x_1,x_2) is defined by

\displaystyle{||(x_1,x_2)||^2=(x_1-x_2)^2+3(x_1+x_2)^2}

then the inner product can be computed as

\displaystyle{\begin{aligned}(x|y)&=\frac{1}{4}||x+y||^2-\frac{1}{4}||x-y||^2\\&=\frac{1}{4}||(x_1+y_1,x_2+y_2)||^2-\frac{1}{4}||(x_1-y_1,x_2-y_2)||^2\\&=\frac{1}{4}(x_1+y_1-x_2-y_2)^2+\frac{3}{4}(x_1+y_1+x_2+y_2)^2-\frac{1}{4}(x_1-y_1-x_2+y_2)^2-\frac{3}{4}(x_1+x_2-y_1-y_2)^2\\&=\frac{1}{4}(2x_1-2x_2)(2y_1-2y_2)+\frac{3}{4}(2x_1+2x_2)(2y_1+2y_2)\\&=(x_1-x_2)(y_1-y_2)+3(x_1+x_2)(y_1+y_2)\end{aligned}}

and (\epsilon_1|\epsilon_2)=-1+3=2.
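Equivalently, any inner product Y^tGX with G symmetric positive definite and G_{12}=2 works; the product constructed above corresponds to G=\begin{bmatrix}4&2\\2&4\end{bmatrix}. A sympy check of this claim (a sketch):

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
G = sp.Matrix([[4, 2], [2, 4]])   # symmetric, det = 12 > 0, G12 = 2
X, Y = sp.Matrix([x1, x2]), sp.Matrix([y1, y2])
form = sp.expand((Y.T * G * X)[0])
target = sp.expand((x1 - x2)*(y1 - y2) + 3*(x1 + x2)*(y1 + y2))
assert sp.simplify(form - target) == 0   # same bilinear form
```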

9.Let V be the subspace of R[x] of polynomials of degree at most 3. Equip V with the inner product

\displaystyle{(f|g)=\int_0^1f(t)g(t)dt.}

( a ) Find the orthogonal complement of the subspace of scalar polynomials.
( b ) Apply the Gram-Schmidt process to the basis \{1,x,x^2,x^3\}.
Solution:
( a ) For f(x)=c_0+c_1x+c_2x^2+c_3x^3, we have (f|c)=\int_0^1cf(t)dt=c\int_0^1f(t)dt, thus the orthogonal complement of the subspace of scalar polynomials is the subspace of polynomials with zero integral on [0,1], which means

\displaystyle{\int_0^1f(t)dt=c_0+\frac{c_1}{2}+\frac{c_2}{3}+\frac{c_3}{4}=0}

One basis for this space can be 1-4x^3,x-2x^3,3x^2-4x^3.
( b ) We let \beta_i=x^{i-1}, then \alpha_1=1, and

\displaystyle{\alpha_2=\beta_2-\frac{(\beta_2|\alpha_1)}{||\alpha_1||^2}\alpha_1=x-\int_0^1tdt=x-\frac{1}{2}}

\displaystyle{\begin{aligned}\alpha_3&=\beta_3-\dfrac{(\beta_3|\alpha_1)}{||\alpha_1||^2}\alpha_1-\dfrac{(\beta_3|\alpha_2)}{||\alpha_2||^2}\alpha_2\\&=x^2-\left(\int_0^1t^2dt\right)-\dfrac{\left(\int_0^1t^2(t-\frac{1}{2})dt\right)}{\left(\int_0^1(t-\frac{1}{2})^2dt\right)}\left(x-\dfrac{1}{2}\right)\\&=x^2-\frac{1}{3}-\left(x-\frac{1}{2}\right)=x^2-x+\dfrac{1}{6}\end{aligned}}

For \alpha_4 we need (\beta_4|\alpha_1)=\int_0^1t^3dt=\frac{1}{4}, (\beta_4|\alpha_2)=\int_0^1t^3(t-\frac{1}{2})dt=\frac{3}{40} with ||\alpha_2||^2=\frac{1}{12}, and (\beta_4|\alpha_3)=\int_0^1t^3(t^2-t+\frac{1}{6})dt=\frac{1}{120} with ||\alpha_3||^2=\int_0^1(t^2-t+\frac{1}{6})^2dt=\frac{1}{180}, so

\displaystyle{\begin{aligned}\alpha_4&=\beta_4-\dfrac{(\beta_4|\alpha_1)}{||\alpha_1||^2}\alpha_1-\dfrac{(\beta_4|\alpha_2)}{||\alpha_2||^2}\alpha_2-\dfrac{(\beta_4|\alpha_3)}{||\alpha_3||^2}\alpha_3\\&=x^3-\frac{1}{4}-\frac{3/40}{1/12}\left(x-\frac{1}{2}\right)-\frac{1/120}{1/180}\left(x^2-x+\frac{1}{6}\right)\\&=x^3-\frac{1}{4}-\frac{9}{10}\left(x-\frac{1}{2}\right)-\frac{3}{2}\left(x^2-x+\frac{1}{6}\right)\\&=x^3-\frac{3}{2}x^2+\frac{3}{5}x-\frac{1}{20}\end{aligned}}
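A sympy verification that the four polynomials are mutually orthogonal (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
ip = lambda f, g: sp.integrate(f * g, (x, 0, 1))
alphas = [sp.Integer(1),
          x - sp.Rational(1, 2),
          x**2 - x + sp.Rational(1, 6),
          x**3 - sp.Rational(3, 2)*x**2 + sp.Rational(3, 5)*x - sp.Rational(1, 20)]
for i in range(4):
    for j in range(i):
        assert ip(alphas[i], alphas[j]) == 0   # pairwise orthogonal
```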

10.Let V be the vector space of all n\times n matrices over C, with the inner product (A|B)=\text{tr}(AB^*). Find the orthogonal complement of the subspace of diagonal matrices.
Solution: Let D=\text{diag}(d_1,\dots,d_n) be any diagonal matrix, then

\displaystyle{(A|D)=\text{tr}(AD^*)=\sum_{i=1}^n(AD^*)_{ii}=\sum_{i=1}^n\sum_{j=1}^nA_{ij}\overline{D_{ij}}=\sum_{i=1}^nA_{ii}\overline{d_i}}

If A is in the orthogonal complement of the subspace of diagonal matrices, then \sum_{i=1}^nA_{ii}\overline{d_i}=0 for every choice of the d_i, thus A_{ii}=0 for i=1,\dots,n. The orthogonal complement is therefore the subspace of matrices whose diagonal entries are all zero.

11.Let V be a finite-dimensional inner product space, and let \{\alpha_1,\dots,\alpha_n\} be an orthonormal basis for V. Show that for any vectors \alpha,\beta\in V

\displaystyle{(\alpha|\beta)=\sum_{k=1}^n(\alpha|\alpha_k)\overline{(\beta|\alpha_k)}.}

Solution: We have \alpha=\sum_{i=1}^na_i\alpha_i and \beta=\sum_{i=1}^nb_i\alpha_i, also we have (\alpha|\alpha_k)=a_k and (\beta|\alpha_k)=b_k for any k=1,\dots,n. Thus

\displaystyle{(\alpha|\beta)=\left(\sum_{i=1}^na_i\alpha_i\Big|\sum_{j=1}^nb_j\alpha_j\right)=\sum_{i=1}^n\sum_{j=1}^na_i\overline{b_j}(\alpha_i|\alpha_j)=\sum_{k=1}^na_k\overline{b_k}=\sum_{k=1}^n(\alpha|\alpha_k)\overline{(\beta|\alpha_k)}}

12.Let W be a finite-dimensional subspace of an inner product space V, and let E be the orthogonal projection of V on W. Prove that (E\alpha|\beta)=(\alpha|E\beta) for all \alpha,\beta\in V.
Solution: We can write \alpha=E\alpha+(\alpha-E\alpha) and \beta=E\beta+(\beta-E\beta). Since E\alpha,E\beta\in W, we know that \alpha-E\alpha,\beta-E\beta\in W^{\perp}, so

\displaystyle{\begin{aligned}(E\alpha|\beta)&=(E\alpha|E\beta+(\beta-E\beta))=(E\alpha|E\beta)+(E\alpha|\beta-E\beta)\\&=(E\alpha|E\beta)+0=(E\alpha|E\beta)+(\alpha-E\alpha|E\beta)\\&=(\alpha|E\beta)\end{aligned}}

13.Let S be a subset of an inner product space V. Show that (S^{\perp})^{\perp} contains the subspace spanned by S. When V is finite dimensional, show that (S^{\perp})^{\perp} is the subspace spanned by S.
Solution: Let W be the subspace spanned by S, i.e., the intersection of all subspaces which contain S. If S is empty the case is trivial, so suppose S is non-empty; then W is the set of all linear combinations of vectors in S. For any w\in W we can find s_1,\dots,s_k in S such that w=\sum_{i=1}^kc_is_i for some scalars c_i. Given any u\in S^{\perp}, we know that (s_i|u)=0 for all i=1,\dots,k, which means (w|u)=\sum_{i=1}^kc_i(s_i|u)=0, so w\in (S^{\perp})^{\perp} and W\subset (S^{\perp})^{\perp}.
When V is finite dimensional, W can be spanned by finitely many vectors in S, namely s_1,\dots,s_k, now we have

\displaystyle{\begin{aligned}u\in S^{\perp}&\implies (u|s)=0,\forall s\in S \\&\implies (s_i|u)=0,i=1,\dots,k \\&\implies(w|u)=0,\forall w\in W\\&\implies u\in W^{\perp}\end{aligned}}

and since S\subset W, any u\in W^{\perp} is also in S^{\perp}; thus S^{\perp}=W^{\perp}. Since V=(S^{\perp})^{\perp}\oplus S^{\perp} and V=W\oplus W^{\perp}, comparing dimensions and using W\subset (S^{\perp})^{\perp} gives (S^{\perp})^{\perp}=W.

14.Let V be a finite-dimensional inner product space, and let \mathcal B=\{\alpha_1,\dots,\alpha_n\} be an orthonormal basis for V. Let T be a linear operator on V and A the matrix of T in the ordered basis \mathcal B. Prove that A_{ij}=(T\alpha_j|\alpha_i).
Solution: By definition we have T\alpha_j=\sum_{i=1}^nA_{ij}\alpha_i, thus (T\alpha_j|\alpha_i)=(\sum_{k=1}^nA_{kj}\alpha_k|\alpha_i)=\sum_{k=1}^nA_{kj}(\alpha_k|\alpha_i)=A_{ij}.

15.Suppose V=W_1\oplus W_2 and that f_1 and f_2 are inner products on W_1 and W_2, respectively. Show that there is a unique inner product f on V such that
( a ) W_2=W_1^{\perp};
( b ) f(\alpha,\beta)=f_k(\alpha,\beta), when \alpha,\beta\in W_k,k=1,2.
Solution: For any u,v\in V, write u=u_1+u_2 and v=v_1+v_2 with u_k,v_k\in W_k, and define f(u,v)=f_1(u_1,v_1)+f_2(u_2,v_2). We first verify that f is an inner product: with u,v decomposed as above and w=w_1+w_2, w_1\in W_1,w_2\in W_2, we have

\begin{aligned}f(u+v,w)&=f_1(u_1+v_1,w_1)+f_2(u_2+v_2,w_2)\\&=f_1(u_1,w_1)+f_1(v_1,w_1)+f_2(u_2,w_2)+f_2(v_2,w_2)\\&=f(u,w)+f(v,w)\end{aligned}\\f(cu,v)=f_1(cu_1,v_1)+f_2(cu_2,v_2)=c(f_1(u_1,v_1)+f_2(u_2,v_2))=cf(u,v) \\ f(v,u)=f_1(v_1,u_1)+f_2(v_2,u_2)=\overline{f_1(u_1,v_1)}+\overline{f_2(u_2,v_2)}=\overline{f(u,v)} \\ f(u,u)=f_1(u_1,u_1)+f_2(u_2,u_2)>0 \text{ if } u=u_1+u_2\neq 0

To prove (a), let v\in W_2; then for any u\in W_1 we have f(u,v)=f_1(u,0)+f_2(0,v)=0, and (b) is obvious. For uniqueness, any inner product f satisfying (a) and (b) must expand as f(u,v)=f(u_1,v_1)+f(u_1,v_2)+f(u_2,v_1)+f(u_2,v_2)=f_1(u_1,v_1)+f_2(u_2,v_2), since the cross terms vanish by (a); so f is determined by f_1 and f_2.

16.Let V be an inner product space and W a finite-dimensional subspace of V. There are (in general) many projections which have W as their range. One of these, the orthogonal projection on W, has the property that ||E\alpha||\leq||\alpha|| for every \alpha\in V. Prove that if E is a projection with range W, such that ||E\alpha||\leq||\alpha|| for every \alpha\in V, then E is the orthogonal projection on W.
Solution: Suppose E is not the orthogonal projection on W; then the null space of E is not W^{\perp}, and we can find u\in V such that Eu\in W and (u-Eu|Eu)\neq 0. Indeed, if some m\in W^{\perp} has Em\neq 0, take u=m, since (m-Em|Em)=-||Em||^2\neq 0; otherwise some n in the null space of E satisfies (n|w)\neq 0 for some w\in W, and u=n+w works since Eu=w and u-Eu=n. In either case u\neq 0 and u-Eu\neq 0. We define

\displaystyle{k=\frac{||u-Eu||^2}{(Eu|u-Eu)},\qquad v=u-Eu-kEu}

then we have Ev=E(u-Eu)-kE^2u=-kEu, thus ||Ev||^2=|k|^2||Eu||^2; on the other hand we have

\displaystyle{\begin{aligned}||v||^2&=(u-Eu-kEu|u-Eu-kEu)\\&=||u-Eu||^2-k(Eu|u-Eu)-\overline{k(Eu|u-Eu)}+|k|^2||Eu||^2\\&=||Ev||^2-||u-Eu||^2\end{aligned}}

which means ||Ev||^2=||v||^2+||u-Eu||^2. Since u-Eu\neq 0, we have ||u-Eu||^2>0 and thus ||v||<||Ev||, contradicting the hypothesis ||E\alpha||\leq||\alpha||. Hence E is the orthogonal projection on W.

17.Let V be the real inner product space consisting of the space of real-valued continuous functions on the interval, -1\leq t\leq 1, with the inner product (f|g)=\int_{-1}^1f(t)g(t)dt. Let W be the subspace of odd functions, i.e. functions satisfying f(-t)=-f(t). Find the orthogonal complement of W.
Solution: If g is an even function, i.e. g(-t)=g(t), then f(t)g(t) is an odd function for every odd f, so (f|g)=\int_{-1}^1f(t)g(t)dt=0 and g belongs to the orthogonal complement of W. Conversely, suppose g\in W^{\perp} and set f(t)=g(t)-g(-t). Then f is continuous and odd, so f\in W and (f|g)=0. Substituting t\mapsto -t gives

\displaystyle{\int_{-1}^1f(t)g(-t)dt=\int_{-1}^1f(-t)g(t)dt=-\int_{-1}^1f(t)g(t)dt=0}

hence

\displaystyle{\int_{-1}^1f(t)^2dt=\int_{-1}^1f(t)[g(t)-g(-t)]dt=(f|g)-\int_{-1}^1f(t)g(-t)dt=0}

Since f is continuous and \int_{-1}^1f^2=0, we get f\equiv 0, i.e. g(-t)=g(t) for every t. In conclusion, W^{\perp} is exactly the subspace of even functions.

Linear Algebra (2ed) Hoffman & Kunze 8.1

Inner products are covered in any standard linear algebra text, and this section's definitions are no different, but the subsequent development takes a high vantage point. EXAMPLE 6 shows how a known inner product and a non-singular transformation yield a new inner product, and the section also introduces the polarization identities and the matrix of an inner product, tying earlier material together and showing the intrinsic connections between inner products, matrices, and linear transformations.

Exercises

1.Let V be a vector space and (\text{ }|\text{ }) an inner product on V. ( a ) Show that (0|\beta)=0 for all \beta\in V. ( b ) Show that if (\alpha|\beta)=0 for all \beta\in V, then \alpha=0.
Solution:
( a ) We have (0|\beta)=(0+0|\beta)=(0|\beta)+(0|\beta), thus (0|\beta)=0.
( b ) Taking \beta=\alpha gives (\alpha|\alpha)=0, thus \alpha=0 by the positivity of the inner product.

2.Let V be a vector space over F. Show that the sum of two inner products on V is an inner product on V. Is the difference of two inner products an inner product? Show that a positive multiple of an inner product is an inner product.
Solution: Let (\text{ }|\text{ }) and (\text{ }|\text{ })' be two inner products on V, let (\text{ }|\text{ })''=(\text{ }|\text{ })+(\text{ }|\text{ })', then condition ( a ) is satisfied as:

\displaystyle{\begin{aligned}(\alpha+\beta|\gamma)''&=(\alpha+\beta|\gamma)+(\alpha+\beta|\gamma)'\\&=(\alpha|\gamma)+(\beta|\gamma)+(\alpha|\gamma)'+(\beta|\gamma)'\\&=(\alpha|\gamma)+(\alpha|\gamma)'+(\beta|\gamma)+(\beta|\gamma)'\\&=(\alpha|\gamma)''+(\beta|\gamma)''\end{aligned}}

condition ( b ) is satisfied as:

\displaystyle{\begin{aligned}(c\alpha|\beta)''=(c\alpha|\beta)+(c\alpha|\beta)'=c(\alpha|\beta)+c(\alpha|\beta)'=c(\alpha|\beta)''\end{aligned}}

condition ( c ) is satisfied as:

\displaystyle{\begin{aligned}(\beta|\alpha)''=(\beta|\alpha)+(\beta|\alpha)'=\overline{(\alpha|\beta)}+\overline{(\alpha|\beta)'}=\overline{(\alpha|\beta)+(\alpha|\beta)'}=\overline{(\alpha|\beta)''}\end{aligned}}

condition ( d ) is satisfied as: \begin{aligned}(\alpha|\alpha)''=(\alpha|\alpha)+(\alpha|\alpha)'>0\end{aligned} if \alpha\neq 0.
Thus (\text{ }|\text{ })'' is an inner product on V.
The difference of two inner products may not be an inner product, since condition (d) may be violated.
If k\in F,k>0, and (\text{ }|\text{ }) an inner product on V, let (\text{ }|\text{ })'=k(\text{ }|\text{ }), then we have

(\alpha+\beta|\gamma)'=k(\alpha+\beta|\gamma)=k[(\alpha|\gamma)+(\beta|\gamma)]=k(\alpha|\gamma)+k(\beta|\gamma)=(\alpha|\gamma)'+(\beta|\gamma)' \\ (c\alpha|\beta)'=k(c\alpha|\beta)=kc(\alpha|\beta)=ck(\alpha|\beta)=c(\alpha|\beta)' \\ (\beta|\alpha)'=k(\beta|\alpha)=k\overline{(\alpha|\beta)}=\overline{k(\alpha|\beta)}=\overline{(\alpha|\beta)'} \\ (\alpha|\alpha)'=k(\alpha|\alpha)>0 \text{ if } \alpha\neq 0

Thus (\text{ }|\text{ })' is an inner product on V.

3.Describe explicitly all inner products on R^1 and on C^1.
Solution: If (\text{ }|\text{ }) is an inner product on R^1, then there is some r\in R,r>0 such that (x|y)=rxy, in which r=(1|1).
If (\text{ }|\text{ }) is an inner product on C^1, then there is some r\in R,r>0 such that (x|y)=rx\overline{y}, in which r=(1|1).

4.Verify that the standard inner product on F^n is an inner product.
Solution: We let \alpha=(x_1,\dots,x_n), \beta=(y_1,\dots,y_n) and \gamma=(z_1,\dots,z_n), then

(\alpha+\beta|\gamma)=\sum_{i=1}^n(x_i+y_i)\overline{z_i}=\sum_{i=1}^nx_i\overline{z_i}+\sum_{i=1}^ny_i\overline{z_i}=(\alpha|\gamma)+(\beta|\gamma) \\ (c\alpha|\beta)=\sum_{i=1}^n(cx_i)\overline{y_i}=c\sum_{i=1}^nx_i\overline{y_i}=c(\alpha|\beta) \\ (\beta|\alpha)=\sum_{i=1}^ny_i\overline{x_i}=\overline{\sum\nolimits_{i=1}^nx_i\overline{y_i}}=\overline{(\alpha|\beta)} \\ (\alpha|\alpha)=\sum_{i=1}^nx_i\overline{x_i}=\sum_{i=1}^n|x_i|^2>0 \text{ if } (x_1,\dots,x_n)\neq 0

5.Let (\text{ }|\text{ }) be the standard inner product on R^2.
( a ) Let \alpha=(1,2),\beta=(-1,1). If \gamma is a vector such that (\alpha|\gamma)=-1 and (\beta|\gamma)=3, find \gamma.
( b ) Show that for any \alpha\in R^2 we have \alpha=(\alpha|\epsilon_1)\epsilon_1+(\alpha|\epsilon_2)\epsilon_2.
Solution:
( a ) Let \gamma=(x,y), then we have x+2y=-1 and -x+y=3, thus \gamma=(-7/3,2/3).
( b ) Let \alpha=(x,y), then (\alpha|\epsilon_1)=x and (\alpha|\epsilon_2)=y, thus

\displaystyle{(\alpha|\epsilon_1)\epsilon_1+(\alpha|\epsilon_2)\epsilon_2=x(1,0)+y(0,1)=(x,y)=\alpha}

6.Let (\text{ }|\text{ }) be the standard inner product on R^2, and let T be the linear operator T(x_1,x_2)=(-x_2,x_1). Now T is ‘rotation through 90^{\circ}‘ and has the property that (\alpha|T\alpha)=0 for all \alpha\in R^2. Find all inner products [\text{ }|\text{ }] on R^2 such that [\alpha|T\alpha]=0 for each \alpha.
Solution: Every inner product on R^2 can be expressed as [\alpha|\beta]=Y^tGX, where X,Y are the coordinate matrices of \alpha,\beta and G is symmetric and positive definite; so for any \alpha=(x_1,x_2) we have

\displaystyle{[\alpha|T\alpha]=\begin{bmatrix}-x_2&x_1\end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=cx_1^2-bx_2^2+(d-a)x_1x_2=0}

thus c=b=0 and a=d, so G is of the form \begin{bmatrix}a&0\\0&a\end{bmatrix} with a>0, the corresponding inner products being

\displaystyle{[(x_1,x_2)|(y_1,y_2)]=a(x_1y_1+x_2y_2),\quad a>0}

7.Let (\text{ }|\text{ }) be the standard inner product on C^2. Prove that there is no non-zero linear operator on C^2 such that (\alpha|T\alpha)=0 for all \alpha\in C^2. Generalize.
Solution: Let \alpha=(c_1,c_2), and write T\epsilon_1=(a,b), T\epsilon_2=(c,d); then T\alpha=(ac_1+cc_2,bc_1+dc_2), and

\displaystyle{(\alpha|T\alpha)=c_1\overline{ac_1+cc_2}+c_2\overline{bc_1+dc_2}=\overline{a}|c_1|^2+\overline{d}|c_2|^2+\overline{c}c_1\overline{c_2}+\overline{b}c_2\overline{c_1}=0}

Let c_1=c_2=1 we get a+b+c+d=0, let c_1=i,c_2=-i we have a+d-c-b=0, thus a+d=0 and b+c=0, also if we let c_2=0 then we have a|c_1|^2=0 for all c_1\in C, thus a=0 and whence d=0. Let c_1=1,c_2=i we have b(-i)+ci=0 or b-c=0 which gives b=c=0. So T must be zero.
A generalization: let (\text{ }|\text{ }) be the standard inner product on C^n; then there is no non-zero linear operator T on C^n such that (\alpha|T\alpha)=0 for all \alpha\in C^n. This follows from the polarization argument of Exercise 12 in Section 8.3: (\alpha|T\alpha)=0 for all \alpha forces (\alpha|T\beta)=0 for all \alpha,\beta, hence T=0.

8.Let A be a 2\times 2 matrix with real entries. For X,Y in R^{2\times 1} let f_A(X,Y)=Y^tAX. Show that f_A is an inner product on R^{2\times 1} if and only if A=A^t, A_{11}>0,A_{22}>0, and \det A>0.
Solution: If f_A is an inner product on R^{2\times 1}, then we let A=\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix} and so

\displaystyle{\begin{aligned}f_A(X,Y)&=\begin{bmatrix}Y_1&Y_2\end{bmatrix}\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}\begin{bmatrix}X_1\\X_2\end{bmatrix}\\&=A_{11}X_1Y_1+A_{21}X_1Y_2+A_{12}X_2Y_1+A_{22}X_2Y_2\end{aligned}}

As f_A(X,Y)=f_A(Y,X), we see that A_{21}X_1Y_2+A_{12}X_2Y_1=A_{21}X_2Y_1+A_{12}X_1Y_2, so A_{21}=A_{12}, or A=A^t.
Let X=Y=(1,0), we have f_A(X,Y)=A_{11}>0, also let X=Y=(0,1), we have f_A(X,Y)=A_{22}>0.
To see \det A>0, let X=Y=(-A_{12},A_{11}); then f_A(X,X)=A_{11}A_{12}^2-2A_{11}A_{12}^2+A_{11}^2A_{22}=A_{11}(A_{11}A_{22}-A_{12}^2)=A_{11}\det A>0, and since A_{11}>0 this gives \det A>0.
Conversely, if A satisfy A=A^t, A_{11}>0,A_{22}>0, and \det A>0, we verify the conditions for inner products for f_A:
( a ) f_A(X+Z,Y)=Y^tA(X+Z)=Y^tAX+Y^tAZ=f_A(X,Y)+f_A(Z,Y).
( b ) f_A(cX,Y)=Y^tA(cX)=c(Y^tAX)=cf_A(X,Y).
( c ) f_A(X,Y)=A_{11}X_1Y_1+A_{21}X_1Y_2+A_{12}X_2Y_1+A_{22}X_2Y_2, and f_A(Y,X)=A_{11}X_1Y_1+A_{12}X_1Y_2+A_{21}X_2Y_1+A_{22}X_2Y_2, so due to A=A^t we have f_A(X,Y)=f_A(Y,X).
( d ) If X=(X_1,X_2)\neq 0, then

\displaystyle{\begin{aligned}f_A(X,X)&=A_{11}X_1^2+A_{22}X_2^2+2A_{12}X_1X_2\\&=A_{11}\left(X_1^2+2\frac{A_{12}}{A_{11}}X_1X_2+\frac{A_{12}^2}{A_{11}^2}X_2^2\right)+\left(\frac{A_{11}A_{22}-A_{12}^2}{A_{11}}\right)X_2^2\\&=A_{11}\left(X_1+\frac{A_{12}}{A_{11}}X_2\right)^2+\frac{\det A}{A_{11}}X_2^2>0\end{aligned}}
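The stated conditions are Sylvester's criterion in the 2\times 2 case; a small numpy sketch:

```python
import numpy as np

def defines_inner_product(A):
    """f_A(X,Y) = Y^T A X is an inner product on R^{2x1} iff A is symmetric
    with A11 > 0 and det A > 0 (these force A22 > 0 as well)."""
    A = np.asarray(A, dtype=float)
    return bool(np.allclose(A, A.T) and A[0, 0] > 0 and np.linalg.det(A) > 0)

assert defines_inner_product([[2, 1], [1, 2]])
assert not defines_inner_product([[1, 2], [2, 1]])  # det = -3; X=(1,-1) gives f_A(X,X) = -2
```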

9.Let V be a real or complex vector space with an inner product. Show that the quadratic form determined by the inner product satisfies the parallelogram law

\displaystyle{||\alpha+\beta||^2+||\alpha-\beta||^2=2||\alpha||^2+2||\beta||^2.}

Solution: We have

\displaystyle{||\alpha+\beta||^2=||\alpha||^2+2\Re (\alpha|\beta)+||\beta||^2 \\ ||\alpha-\beta||^2=||\alpha||^2-2\Re (\alpha|\beta)+||\beta||^2}

and the conclusion follows.

10.Let (\text{ }|\text{ }) be the inner product on R^2 defined in Example 2, and let \mathfrak B be the standard ordered basis for R^2. Find the matrix of this inner product relative to \mathfrak B.
Solution: For \alpha=(x_1,x_2) and \beta=(y_1,y_2) we let (\alpha|\beta)=x_1y_1-x_2y_1-x_1y_2+4x_2y_2, if G is the matrix of this inner product relative to \mathfrak B, then

\displaystyle{\begin{aligned}G_{11}&=(\epsilon_1|\epsilon_1)=((1,0)|(1,0))=1 \\ G_{12}&=(\epsilon_2|\epsilon_1)=((0,1)|(1,0))=-1 \\ G_{21}&=(\epsilon_1|\epsilon_2)=((1,0)|(0,1))=-1 \\ G_{22}&=(\epsilon_2|\epsilon_2)=((0,1)|(0,1))=4\end{aligned}\implies G=\begin{bmatrix}1&-1\\-1&4\end{bmatrix}}

11.Show that the formula

\displaystyle{(\sum_ja_jx^j|\sum_kb_kx^k)=\sum_{j,k}\frac{a_jb_k}{j+k+1}}

defines an inner product on the space R[x] of polynomials over the field R. Let W be the subspace of polynomials of degree less than or equal to n. Restrict the above inner product to W, and find the matrix of this inner product on W, relative to the ordered basis \{1,x,x^2,\dots,x^n\}.
Solution: If f=\sum_ja_jx^j, g=\sum_kb_kx^k, we notice that (f|g)=\int_0^1f(t)g(t)dt, since \int_0^1t^{j+k}dt=\frac{1}{j+k+1}; so

(f+h|g)=\int_0^1[f(t)+h(t)]g(t)dt=\int_0^1f(t)g(t)dt+\int_0^1h(t)g(t)dt=(f|g)+(h|g) \\ (cf|g)=\int_0^1cf(t)g(t)dt=c\int_0^1f(t)g(t)dt=c(f|g) \\ (g|f)=\int_0^1g(t)f(t)dt=\int_0^1f(t)g(t)dt=(f|g) \\ (f|f)=\int_0^1f(t)^2dt>0\text{ if }f\neq0

thus this really defines an inner product on R[x]. To compute the matrix G of this inner product on W relative to \{1,x,x^2,\dots,x^n\}, notice that

\displaystyle{G_{jk}=(x^{k-1}|x^{j-1})=\frac{1}{j+k-1},\quad 1\leq j,k\leq n+1}

Thus we have

\displaystyle{G=\begin{bmatrix}1&\frac{1}{2}&\cdots&\frac{1}{n+1}\\{\frac{1}{2}}&{\frac{1}{3}}&\cdots&{\frac{1}{n+2}}\\{\vdots}&{\vdots}&&{\vdots}\\{\frac{1}{n+1}}&{\frac{1}{n+2}}&\cdots&{\frac{1}{2n+1}}\end{bmatrix}}
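For instance, with n=3 this is the 4\times 4 Hilbert matrix, and one can check numerically that it is a genuine Gram matrix, i.e., symmetric positive definite (a numpy sketch):

```python
import numpy as np

n = 3
G = np.array([[1.0 / (j + k + 1) for k in range(n + 1)] for j in range(n + 1)])
assert np.allclose(G, G.T)                # symmetric
assert np.all(np.linalg.eigvalsh(G) > 0)  # positive definite Gram matrix
```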

12.Let V be a finite-dimensional vector space and let \mathfrak B=\{\alpha_1,\dots,\alpha_n\} be a basis for V. Let (\text{ }|\text{ }) be an inner product on V. If c_1,\dots,c_n are any n scalars, show that there is exactly one vector \alpha\in V such that (\alpha|\alpha_j)=c_j,j=1,\dots,n.
Solution: Let G be the matrix of (\text{ }|\text{ }) relative to \mathfrak B. For any \beta=\sum_{i=1}^nb_i\alpha_i\in V we have (\beta|\alpha_j)=\sum_{i=1}^nb_i(\alpha_i|\alpha_j)=\sum_{i=1}^nb_iG_{ji}, thus if \alpha=\sum_{i=1}^nx_i\alpha_i satisfies the condition (\alpha|\alpha_j)=c_j,j=1,\dots,n, it means \sum_{i=1}^nx_iG_{ji}=c_j for j=1,\dots,n or

\displaystyle{\begin{bmatrix}G_{11}&\dots&G_{1n}\\{\vdots}&&{\vdots}\\G_{n1}&\cdots&G_{nn}\end{bmatrix}\begin{bmatrix}x_1\\{\vdots}\\x_n\end{bmatrix}=\begin{bmatrix}c_1\\{\vdots}\\c_n\end{bmatrix}}

the solution of the system GX=C is unique since G is invertible: if GX=0, then X^*GX=(\alpha|\alpha)=0, which forces \alpha=0 and hence X=0.

13.Let V be a complex vector space. A function J from V into V is called a conjugation if J(\alpha+\beta)=J(\alpha)+J(\beta), J(c\alpha)=\overline{c}J(\alpha), and J(J(\alpha))=\alpha, for all scalars c and \alpha,\beta\in V. If J is a conjugation show that:
( a ) The set W=\{\alpha\in V:J\alpha=\alpha\} is a vector space over R with respect to the operations defined in V.
( b ) For each \alpha\in V, \exists !\beta,\gamma\in W such that \alpha=\beta+i\gamma.
Solution:
( a ) We only have to prove W is a subspace, namely if \alpha,\beta \in W and c\in R, then c\alpha+\beta\in W. We have

\displaystyle{J(c\alpha+\beta)=J(c\alpha)+J(\beta)=\overline{c}\alpha+\beta=c\alpha+\beta}

( b ) If we let \beta=\dfrac{J(\alpha)+\alpha}{2} and \gamma=\dfrac{J(\alpha)-\alpha}{2}i, then \beta,\gamma\in W since

\displaystyle{J(\beta)=J\left(\dfrac{J(\alpha)+\alpha}{2}\right)=\dfrac{\alpha+J(\alpha)}{2}=\beta \\ J(\gamma)=J\left(\dfrac{J(\alpha)-\alpha}{2}i\right)=-\frac{i}{2}(\alpha-J(\alpha))=\gamma}

and we have \beta+i\gamma=\alpha. Now suppose there exist \beta',\gamma'\in W such that \alpha=\beta'+i\gamma'; then (\beta-\beta')+i(\gamma-\gamma')=0. Applying J gives (\beta-\beta')-i(\gamma-\gamma')=0, and adding and subtracting the two equations yields \beta=\beta' and \gamma=\gamma'.

14.Let V be a complex vector space and W a subset of V with the following properties:
( a ) W is a real vector space with respect to the operations defined in V.
( b ) For each \alpha\in V there exist unique vectors \beta,\gamma\in W such that \alpha=\beta+i\gamma.
Show that the equation J\alpha=\beta-i\gamma defines a conjugation on V such that J\alpha=\alpha if and only if \alpha\in W, and show also that J is the only conjugation on V with this property.
Solution: To see J is a conjugation, let \alpha,\alpha'\in V, then there exists \beta,\gamma\in W such that \alpha=\beta+i\gamma, and \beta',\gamma'\in W such that \alpha'=\beta'+i\gamma'. Since W is a vector space, we have \beta+\beta'\in W and \gamma+\gamma'\in W, and

\displaystyle{\begin{aligned}\alpha+\alpha'=\beta+\beta'+i(\gamma+\gamma')\implies J(\alpha+\alpha')&=\beta+\beta'-i(\gamma+\gamma')\\&=\beta-i\gamma+\beta'-i\gamma'\\&=J(\alpha)+J(\alpha')\end{aligned}}

Also we have J(J(\alpha))=J(\beta-i\gamma)=\beta+i\gamma=\alpha. Last, if c is a scalar, then c\alpha=c\beta+ci\gamma; let c=a+bi, in which a,b\in R, we have

\displaystyle{\begin{aligned}J(c\alpha)&=J((a+bi)(\beta+i\gamma))=J(a\beta-b\gamma+i(a\gamma+b\beta))\\&=a\beta-b\gamma-i(a\gamma+b\beta)=(a-bi)\beta-(ai+b)\gamma\\&=\overline{c}\beta-i(a-bi)\gamma=\overline{c}(\beta-i\gamma)=\overline{c}J(\alpha)\end{aligned}}

If \alpha\in W, then \alpha=\alpha+i0, this expression is unique, thus J\alpha=\alpha-i0=\alpha.
Conversely, if J(\beta+i\gamma)=\beta-i\gamma defines a conjugation on V and J\alpha=\alpha, then write \alpha=\beta+i\gamma, we have \beta-i\gamma=\beta+i\gamma, which means i\gamma=0 and \alpha=\beta\in W.
Suppose there is another conjugation J' which has the property J'\alpha=\alpha\Leftrightarrow \alpha\in W, then by the conjugation property we have

\displaystyle{J'\alpha=J'(\beta+i\gamma)=J'\beta+J'(i\gamma)=\beta+\overline{i}J'\gamma=\beta-i\gamma}

thus J'=J.

15.Find all conjugations on C^1 and C^2.
Solution: On C^1 we have J(c)=J(c\cdot 1)=\overline{c}J(1), so J is determined by u=J(1), and J(J(c))=c|u|^2=c forces |u|=1. Thus the conjugations on C^1 are exactly the maps J(c)=u\overline{c} with |u|=1; the usual conjugation is the case u=1. Likewise, a conjugation on C^2 is determined by J(\epsilon_1) and J(\epsilon_2): if M is the matrix with these vectors as columns, then J(\alpha)=M\overline{\alpha} (coordinatewise conjugation), and the condition J(J(\alpha))=\alpha is equivalent to M\overline{M}=I. The standard conjugation J(c_1,c_2)=(\overline{c_1},\overline{c_2}) corresponds to M=I.

16.Let W be a finite-dimensional real subspace of a complex vector space V. Show that W satisfies condition (b) of Exercise 14 if and only if every basis of W is also a basis of V.
Solution: First suppose every basis of W is also a basis of V, and let \{\alpha_1,\dots,\alpha_n\} be such a basis. For any \alpha\in V, we can find unique c_1,\dots,c_n\in C such that \alpha=\sum_{j=1}^nc_j\alpha_j; write each c_j=a_j+ib_j,a_j,b_j\in R, then a_j,b_j,j=1,\dots,n are also uniquely determined. If we denote \beta=\sum_{j=1}^na_j\alpha_j and \gamma=\sum_{j=1}^nb_j\alpha_j, we have

\displaystyle{\alpha=\sum_{j=1}^nc_j\alpha_j=\sum_{j=1}^n(a_j+ib_j)\alpha_j=\sum_{j=1}^na_j\alpha_j+i\sum_{j=1}^nb_j\alpha_j=\beta+i\gamma}

Conversely, let \{\alpha_1,\dots,\alpha_n\} be any basis of W, and choose \alpha\in V. Select \beta,\gamma\in W such that \alpha=\beta+i\gamma; then \beta and \gamma can be expressed as linear combinations of \alpha_1,\dots,\alpha_n, say \beta=\sum_{j=1}^na_j\alpha_j and \gamma=\sum_{j=1}^nb_j\alpha_j, in which a_j,b_j\in R for j=1,\dots,n. Let c_j=a_j+ib_j, we have

\displaystyle{\alpha=\sum_{j=1}^na_j\alpha_j+i\sum_{j=1}^nb_j\alpha_j=\sum_{j=1}^nc_j\alpha_j}

thus the vectors \{\alpha_1,\dots,\alpha_n\} span V. If these vectors were linearly dependent over C, without loss of generality we could write \alpha_n=\sum_{j=1}^{n-1}k_j\alpha_j with k_j\in C. Letting \beta=\sum_{j=1}^{n-1}\Re(k_j)\alpha_j and \gamma=\sum_{j=1}^{n-1}\Im(k_j)\alpha_j, both in W, gives \alpha_n=\beta+i\gamma; but also \alpha_n=\alpha_n+i0, and by the uniqueness in (b) the two expressions coincide, so \alpha_n=\sum_{j=1}^{n-1}\Re(k_j)\alpha_j, contradicting the linear independence of \{\alpha_1,\dots,\alpha_n\} in W. Thus \{\alpha_1,\dots,\alpha_n\} is linearly independent in V, and hence a basis of V.

17.Let V be a complex vector space, J a conjugation on V, W the set of \alpha\in V such that J\alpha=\alpha, and f an inner product on W. Show that:
(a) There is a unique inner product g on V such that g(\alpha,\beta)=f(\alpha,\beta) for all \alpha,\beta\in W,
(b) g(J\alpha,J\beta)=g(\beta,\alpha) for all \alpha,\beta\in V.
What does part (a) say about the relation between the standard inner products on R^1 and C^1, or on R^n and C^n?
Solution:
( a ) By the previous exercises, we can express \alpha as \alpha=\beta+i\gamma with unique \beta,\gamma\in W, and we must have J\alpha=\beta-i\gamma. If \alpha=\beta+i\gamma and \alpha'=\beta'+i\gamma', define g on V by

\displaystyle{g(\alpha,\alpha')=g(\beta+i\gamma,\beta'+i\gamma')=f(\beta,\beta')+f(\gamma,\gamma')-if(\beta,\gamma')+if(\gamma,\beta')}

We can verify that g is an inner product on V, and if \alpha,\beta\in W we have \alpha=\alpha+i0 and \beta=\beta+i0, so g(\alpha,\beta)=f(\alpha,\beta). To see g is unique, let g' be any inner product on V such that g'(\alpha,\beta)=f(\alpha,\beta) for all \alpha,\beta\in W, then use the property of inner products, for \alpha=\beta+i\gamma and \alpha'=\beta'+i\gamma' we have

\displaystyle{\begin{aligned}g'(\alpha,\alpha')&=g'(\beta+i\gamma,\beta'+i\gamma')\\&=g'(\beta,\beta')+g'(\gamma,\gamma')-ig'(\beta,\gamma')+ig'(\gamma,\beta')\\&=f(\beta,\beta')+f(\gamma,\gamma')-if(\beta,\gamma')+if(\gamma,\beta')=g(\alpha,\alpha')\end{aligned}}

( b ) We write \alpha=a+ib and \beta=c+id for a,b,c,d\in W, then

\displaystyle{\begin{aligned}g(J\alpha,J\beta)&=g(a-ib,c-id)=g(a,c-id)-ig(b,c-id)\\&=g(a,c)+ig(a,d)-ig(b,c)+g(b,d)\\&=f(a,c)+if(a,d)-if(b,c)+f(b,d)\\&=f(c,a)+if(d,a)-if(c,b)+f(b,d)=g(c+id,a+ib)\\&=g(\beta,\alpha)\end{aligned}}

Part (a) says the standard inner product on C^n is the unique inner product on C^n whose restriction to R^n is the standard inner product on R^n (and likewise for C^1 and R^1).