Definition. If $\alpha$ is any vector in $V$, the $T$-cyclic subspace generated by $\alpha$ is the subspace $Z(\alpha; T)$ of all vectors of the form $g(T)\alpha$, $g$ in $F[x]$. If $Z(\alpha; T) = V$, then $\alpha$ is called a cyclic vector for $T$.
Definition. If $\alpha$ is any vector in $V$, the $T$-annihilator of $\alpha$ is the ideal $M(\alpha; T)$ in $F[x]$ consisting of all polynomials $g$ over $F$ such that $g(T)\alpha = 0$. The unique monic polynomial $p_\alpha$ which generates this ideal will also be called the $T$-annihilator of $\alpha$.
Theorem 1. Let $\alpha$ be any non-zero vector in $V$ and let $p_\alpha$ be the $T$-annihilator of $\alpha$.
(i) The degree of $p_\alpha$ is equal to the dimension of the cyclic subspace $Z(\alpha; T)$.
(ii) If the degree of $p_\alpha$ is $k$, then the vectors $\alpha, T\alpha, T^2\alpha, \ldots, T^{k-1}\alpha$ form a basis for $Z(\alpha; T)$.
(iii) If $U$ is the linear operator on $Z(\alpha; T)$ induced by $T$, then the minimal polynomial for $U$ is $p_\alpha$.
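As a concrete check of Theorem 1 (i)-(ii), the following sketch (Python with NumPy; the helper name `cyclic_subspace_basis` and the example matrix are illustrative, not from the text) builds the vectors $\alpha, T\alpha, T^2\alpha, \ldots$ and stops at the first power that is linearly dependent on its predecessors:

```python
import numpy as np

def cyclic_subspace_basis(T, alpha, tol=1e-10):
    """Return the basis alpha, T alpha, ..., T^{k-1} alpha of Z(alpha; T)
    as the columns of a matrix, stopping at the first power T^k alpha
    that is already a linear combination of the earlier vectors."""
    vecs = [np.asarray(alpha, dtype=float)]
    while True:
        nxt = T @ vecs[-1]
        cand = np.column_stack(vecs + [nxt])
        if np.linalg.matrix_rank(cand, tol) == len(vecs):
            return np.column_stack(vecs)   # T^k alpha is dependent: stop
        vecs.append(nxt)

# Example: T is the companion matrix of x^3 and alpha = e1,
# so the T-annihilator of e1 is x^3 and dim Z(e1; T) = 3.
T = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
e1 = np.array([1., 0., 0.])
B = cyclic_subspace_basis(T, e1)
print(B.shape[1])   # 3
```

For $\alpha = e_3$ the same routine returns a single column, matching the annihilator $x$ of that vector.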
If $U$ is an operator on the space $W$ of dimension $k$ which has a cyclic vector $\alpha$, then the vectors $\alpha, U\alpha, \ldots, U^{k-1}\alpha$ form a basis for $W$, and the annihilator $p_\alpha$ of $\alpha$ is the minimal polynomial for $U$. If we let $\alpha_i = U^{i-1}\alpha$, $i = 1, \ldots, k$, and

$$p_\alpha = c_0 + c_1 x + \cdots + c_{k-1} x^{k-1} + x^k,$$

then $U\alpha_i = \alpha_{i+1}$, $i = 1, \ldots, k-1$, and $U\alpha_k = -c_0\alpha_1 - c_1\alpha_2 - \cdots - c_{k-1}\alpha_k$; thus the matrix of $U$ in the ordered basis $\{\alpha_1, \ldots, \alpha_k\}$ is called the companion matrix of the monic polynomial $p_\alpha$, which is

$$\begin{bmatrix}
0 & 0 & \cdots & 0 & -c_0 \\
1 & 0 & \cdots & 0 & -c_1 \\
0 & 1 & \cdots & 0 & -c_2 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -c_{k-1}
\end{bmatrix}.$$
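The companion matrix can be assembled directly from the coefficients $c_0, \ldots, c_{k-1}$. A minimal NumPy sketch (the helper `companion` and the example polynomial are illustrative), which also exhibits the fact, stated in the corollary to Theorem 2, that its characteristic polynomial is $p_\alpha$:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    p = c_0 + c_1 x + ... + c_{k-1} x^{k-1} + x^k,
    given coeffs = [c_0, ..., c_{k-1}]: ones on the subdiagonal,
    -c_i down the last column."""
    k = len(coeffs)
    A = np.zeros((k, k))
    A[1:, :-1] = np.eye(k - 1)          # subdiagonal of ones
    A[:, -1] = -np.asarray(coeffs)      # last column: -c_0, ..., -c_{k-1}
    return A

# p = x^2 - 3x + 2, i.e. c_0 = 2, c_1 = -3
A = companion([2, -3])
# np.poly returns the characteristic polynomial's coefficients,
# highest degree first: [1, -3, 2] for x^2 - 3x + 2
print(np.poly(A))
```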
Theorem 2. If $U$ is a linear operator on the finite-dimensional space $W$, then $U$ has a cyclic vector if and only if there is some ordered basis for $W$ in which $U$ is represented by the companion matrix of the minimal polynomial for $U$.
Corollary. If $A$ is the companion matrix of a monic polynomial $p$, then $p$ is both the minimal and the characteristic polynomial of $A$.
Definition. Let $T$ be a linear operator on a vector space $V$ and let $W$ be a subspace of $V$. We say that $W$ is $T$-admissible if
(i) $W$ is invariant under $T$;
(ii) if $f(T)\beta$ is in $W$, there exists a vector $\gamma$ in $W$ such that $f(T)\beta = f(T)\gamma$.
Theorem 3 (Cyclic Decomposition Theorem). Let $T$ be a linear operator on a finite-dimensional vector space $V$ and let $W_0$ be a proper $T$-admissible subspace of $V$. There exist non-zero vectors $\alpha_1, \ldots, \alpha_r$ in $V$ with respective $T$-annihilators $p_1, \ldots, p_r$ such that
(i) $V = W_0 \oplus Z(\alpha_1; T) \oplus \cdots \oplus Z(\alpha_r; T)$;
(ii) $p_k$ divides $p_{k-1}$, $k = 2, \ldots, r$.
Corollary. If $T$ is a linear operator on a finite-dimensional vector space, then every $T$-admissible subspace has a complementary subspace which is also invariant under $T$.
Corollary. Let $T$ be a linear operator on a finite-dimensional vector space $V$.
(a) There exists a vector $\alpha$ in $V$ such that the $T$-annihilator of $\alpha$ is the minimal polynomial for $T$.
(b) $T$ has a cyclic vector if and only if the characteristic and minimal polynomials for $T$ are identical.
Theorem 4 (Generalized Cayley-Hamilton Theorem). Let $T$ be a linear operator on a finite-dimensional vector space $V$. Let $p$ and $f$ be the minimal and characteristic polynomials for $T$, respectively.
(i) $p$ divides $f$.
(ii) $p$ and $f$ have the same prime factors, except for multiplicities.
(iii) If $p = f_1^{r_1} \cdots f_k^{r_k}$ is the prime factorization of $p$, then $f = f_1^{d_1} \cdots f_k^{d_k}$, where $d_i$ is the nullity of $f_i(T)^{r_i}$ divided by the degree of $f_i$.
Corollary. If $T$ is a nilpotent linear operator on a vector space of dimension $n$, then the characteristic polynomial for $T$ is $x^n$.
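Part (i) of Theorem 4 contains the classical Cayley-Hamilton theorem: $f(T) = 0$. A quick numerical check on an assumed example matrix (NumPy; `np.poly` returns the characteristic polynomial's coefficients, highest degree first):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 2., 1.],
              [0., 0., 3.]])

# Characteristic polynomial f = (x - 2)^2 (x - 3), coefficients via np.poly
c = np.poly(A)

# Evaluate f(A) by Horner's rule; Cayley-Hamilton says f(A) = 0
fA = np.zeros_like(A)
for coeff in c:
    fA = fA @ A + coeff * np.eye(3)
print(np.allclose(fA, 0))           # True
```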
An $n \times n$ matrix $A$ which is the direct sum of companion matrices of non-scalar monic polynomials $p_1, \ldots, p_r$ such that $p_{i+1}$ divides $p_i$ for $i = 1, \ldots, r-1$ will be said to be in rational form.
Theorem 5. Let $F$ be a field and let $B$ be an $n \times n$ matrix over $F$. Then $B$ is similar over the field $F$ to one and only one matrix which is in rational form.
The polynomials $p_1, \ldots, p_r$ are called the invariant factors for the matrix $B$.
If $M$ is a matrix in $F[x]^{m \times n}$, an elementary row operation on $M$ is one of the following:
- multiplication of one row of $M$ by a non-zero scalar in $F$;
- replacement of the $r$th row of $M$ by row $r$ plus $f$ times row $s$, where $f$ is any polynomial over $F$ and $r \neq s$;
- interchange of two rows of $M$.
An elementary matrix in $F[x]^{m \times m}$ is one which can be obtained from the $m \times m$ identity matrix by means of a single elementary row operation.
Let $M, N$ be matrices in $F[x]^{m \times n}$; we say that $N$ is row-equivalent to $M$ if $N$ can be obtained from $M$ by a finite succession of elementary row operations: $M = M_0 \to M_1 \to \cdots \to M_k = N$.
Lemma. Let $M$ be a matrix in $F[x]^{m \times n}$ which has some non-zero entry in its first column, and let $p$ be the greatest common divisor of the entries in column 1 of $M$. Then $M$ is row-equivalent to a matrix $N$ which has $(p, 0, \ldots, 0)^t$ as its first column.
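The lemma is proved by repeated division with remainder, an instance of the Euclidean algorithm carried out by the second type of row operation. A SymPy sketch (the helper `gcd_first_column` and the example column are illustrative, not from the text):

```python
import sympy as sp

x = sp.symbols('x')

def gcd_first_column(M):
    """Row-reduce M so the first column becomes (p, 0, ..., 0)^t with
    p = monic gcd of the original first-column entries, using only
    elementary row operations (division with remainder, swap, scaling)."""
    M = sp.Matrix(M)
    while True:
        nz = [i for i in range(M.rows) if M[i, 0] != 0]
        p = min(nz, key=lambda i: sp.degree(M[i, 0], x))  # lowest-degree pivot
        reduced = True
        for i in nz:
            if i != p:
                q = sp.div(M[i, 0], M[p, 0], x)[0]
                # row_i <- row_i - q * row_p (leaves the remainder in column 0)
                M[i, :] = (M[i, :] - q * M[p, :]).applyfunc(sp.expand)
                reduced = False
        if reduced:
            break
    M.row_swap(0, p)                          # bring the gcd to the top
    M[0, :] = M[0, :] / sp.LC(M[0, 0], x)     # make it monic (scalar row op)
    return M

N = gcd_first_column(sp.Matrix([[x**2 - 1], [x**2 - 3*x + 2]]))
print(N)   # Matrix([[x - 1], [0]])
```

Each pass strictly lowers the minimal degree in the first column (or zeroes an entry), so the loop terminates with the gcd at the top.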
Theorem 6. Let $P$ be an $m \times m$ matrix with entries in the polynomial algebra $F[x]$. The following are equivalent.
(i) $P$ is invertible.
(ii) The determinant of $P$ is a non-zero scalar polynomial.
(iii) $P$ is row-equivalent to the $m \times m$ identity matrix.
(iv) $P$ is a product of elementary matrices.
Corollary. Let $M$ and $N$ be $m \times n$ matrices with entries in the polynomial algebra $F[x]$. Then $N$ is row-equivalent to $M$ if and only if $N = PM$, where $P$ is an invertible $m \times m$ matrix with entries in $F[x]$.
We define elementary column operations and column-equivalence in a manner analogous to row operations and row-equivalence.
Definition. The matrix $N$ is equivalent to the matrix $M$ if we can pass from $M$ to $N$ by means of a sequence of operations $M = M_0 \to M_1 \to \cdots \to M_k = N$, each of which is an elementary row operation or an elementary column operation.
Theorem 7. Let $M$ and $N$ be $m \times n$ matrices with entries in the polynomial algebra $F[x]$. Then $N$ is equivalent to $M$ if and only if $N = PMQ$, where $P$ is an invertible matrix in $F[x]^{m \times m}$ and $Q$ is an invertible matrix in $F[x]^{n \times n}$.
Theorem 8. Let $A$ be an $n \times n$ matrix with entries in the field $F$, and let $p_1, \ldots, p_r$ be the invariant factors for $A$. The matrix $xI - A$ is equivalent to the $n \times n$ diagonal matrix with diagonal entries $1, 1, \ldots, 1, p_r, p_{r-1}, \ldots, p_1$.
Definition. Let $N$ be a matrix in $F[x]^{m \times n}$. We say that $N$ is in (Smith) normal form if
(a) every entry off the main diagonal of $N$ is $0$;
(b) on the main diagonal of $N$ there appear (in order) polynomials $f_1, \ldots, f_l$ such that $f_k$ divides $f_{k+1}$, $1 \le k \le l - 1$, in which $l = \min(m, n)$.
Theorem 9. Let $M$ be an $m \times n$ matrix with entries in the polynomial algebra $F[x]$. Then $M$ is equivalent to a matrix $N$ which is in normal form.
Definition. Let $M$ be an $m \times n$ matrix with entries in $F[x]$. If $1 \le k \le \min(m, n)$, we define $\delta_k(M)$ to be the greatest common divisor of the determinants of all $k \times k$ submatrices of $M$.
Theorem 10. If $M$ and $N$ are equivalent $m \times n$ matrices with entries in $F[x]$, then $\delta_k(M) = \delta_k(N)$, $1 \le k \le \min(m, n)$.
Corollary. Each matrix in $F[x]^{m \times n}$ is equivalent to precisely one matrix $N$ which is in normal form. The polynomials $f_1, \ldots, f_l$ which occur on the main diagonal of $N$ are
$$f_k = \frac{\delta_k(M)}{\delta_{k-1}(M)}, \qquad 1 \le k \le \min(m, n),$$
where, for convenience, we define $\delta_0(M) = 1$.
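Combining Theorem 8 with this corollary gives a computable route to the invariant factors: form $xI - A$, compute each $\delta_k$ as the gcd of the $k \times k$ minors, and divide successive $\delta$'s. A SymPy sketch on an assumed $2 \times 2$ example (the helper `deltas` is illustrative):

```python
from functools import reduce
from itertools import combinations

import sympy as sp

x = sp.symbols('x')

def deltas(M):
    """delta_k(M) = monic gcd of the determinants of all k x k submatrices."""
    n = min(M.rows, M.cols)
    out = []
    for k in range(1, n + 1):
        minors = [M[list(r), list(c)].det()
                  for r in combinations(range(M.rows), k)
                  for c in combinations(range(M.cols), k)]
        out.append(sp.monic(reduce(sp.gcd, minors), x))
    return out

A = sp.Matrix([[0, -2], [1, 3]])        # companion matrix of x^2 - 3x + 2
d = deltas(x * sp.eye(2) - A)
# Diagonal entries of the normal form: f_k = delta_k / delta_{k-1}
f = [sp.cancel(d[0])] + [sp.cancel(d[k] / d[k - 1]) for k in range(1, len(d))]
print(f)   # [1, x**2 - 3*x + 2]
```

The non-scalar entries on the diagonal are exactly the invariant factors; here the single invariant factor $x^2 - 3x + 2$ recovers the polynomial of which $A$ is the companion matrix.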
Definition. Let $V$ be a finite-dimensional vector space over the field $F$, and let $T$ be a linear operator on $V$. We say that $T$ is semi-simple if every $T$-invariant subspace has a complementary $T$-invariant subspace.
Lemma. Let $T$ be a linear operator on the finite-dimensional vector space $V$, and let $V = W_1 \oplus \cdots \oplus W_k$ be the primary decomposition for $T$. In other words, if $p$ is the minimal polynomial for $T$ and $p = p_1^{r_1} \cdots p_k^{r_k}$ is the prime factorization of $p$, then $W_j$ is the null space of $p_j(T)^{r_j}$. Let $W$ be any subspace of $V$ which is invariant under $T$. Then
$$W = (W \cap W_1) \oplus \cdots \oplus (W \cap W_k).$$
Lemma. Let $T$ be a linear operator on $V$, and suppose that the minimal polynomial for $T$ is irreducible over the scalar field $F$. Then $T$ is semi-simple.
Theorem 11. Let $T$ be a linear operator on the finite-dimensional vector space $V$. A necessary and sufficient condition that $T$ be semi-simple is that the minimal polynomial $p$ for $T$ be of the form $p = p_1 \cdots p_k$, where $p_1, \ldots, p_k$ are distinct irreducible polynomials over the scalar field $F$.
Corollary. If $T$ is a linear operator on a finite-dimensional vector space over an algebraically closed field, then $T$ is semi-simple if and only if $T$ is diagonalizable.
Lemma (Taylor's Formula). Let $F$ be a field of characteristic zero and let $g$ and $h$ be polynomials over $F$. If $f$ is any polynomial over $F$ with $\deg f \le n$, then
$$f(g) = f(h) + f'(h)(g - h) + \frac{f^{(2)}(h)}{2!}(g - h)^2 + \cdots + \frac{f^{(n)}(h)}{n!}(g - h)^n.$$
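For polynomials the formula is an exact algebraic identity, so it can be verified symbolically. A SymPy sketch with an assumed quartic $f$ and indeterminates playing the roles of $g$ and $h$:

```python
import sympy as sp

x, g, h = sp.symbols('x g h')

f = x**4 - 2*x + 1                      # any polynomial over a char-0 field
n = sp.degree(f, x)

# Right-hand side of Taylor's formula: sum of f^(k)(h)/k! * (g - h)^k
rhs = sum(sp.diff(f, x, k).subs(x, h) / sp.factorial(k) * (g - h)**k
          for k in range(n + 1))

print(sp.expand(rhs - f.subs(x, g)))    # 0
```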
Lemma. Let $F$ be a subfield of the complex numbers, let $f$ be a polynomial over $F$, and let $f'$ be the derivative of $f$. The following are equivalent:
(a) $f$ is the product of distinct polynomials irreducible over $F$.
(b) $f$ and $f'$ are relatively prime.
(c) As a polynomial with complex coefficients, $f$ has no repeated root.
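Criterion (b) is the easiest to compute with: a single polynomial gcd decides square-freeness. A SymPy sketch (the function name is illustrative):

```python
import sympy as sp

x = sp.symbols('x')

def is_product_of_distinct_irreducibles(f):
    """Criterion (b) of the lemma: f is a product of distinct irreducible
    polynomials if and only if gcd(f, f') = 1."""
    return sp.gcd(f, sp.diff(f, x)) == 1

print(is_product_of_distinct_irreducibles((x**2 + 1) * (x - 2)))  # True
print(is_product_of_distinct_irreducibles((x - 1)**2 * (x + 3)))  # False
```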
Theorem 12. Let $F$ be a subfield of the field of complex numbers, let $V$ be a finite-dimensional vector space over $F$, and let $T$ be a linear operator on $V$. Let $\mathfrak{B}$ be an ordered basis for $V$ and let $A$ be the matrix of $T$ in the ordered basis $\mathfrak{B}$. Then $T$ is semi-simple if and only if the matrix $A$ is similar over the field of complex numbers to a diagonal matrix.
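A standard illustration (an assumed example, not from the text): a plane rotation has no real eigenvalues, so it is not diagonalizable over $\mathbb{R}$, yet it is semi-simple because its minimal polynomial $x^2 + 1$ is irreducible over $\mathbb{R}$; Theorem 12 is reflected in its diagonalizability over $\mathbb{C}$. A SymPy check:

```python
import sympy as sp

x = sp.symbols('x')

# Rotation by 90 degrees
A = sp.Matrix([[0, -1], [1, 0]])
f = A.charpoly(x).as_expr()
print(f)                                  # x**2 + 1
print(sp.gcd(f, sp.diff(f, x)))           # 1: no repeated complex root
print(A.is_diagonalizable())              # True: diagonalizable over C
```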
Theorem 13. Let $F$ be a subfield of the field of complex numbers, let $V$ be a finite-dimensional vector space over $F$, and let $T$ be a linear operator on $V$. There is a semi-simple operator $S$ on $V$ and a nilpotent operator $N$ on $V$ such that
(i) $T = S + N$;
(ii) $SN = NS$.
Furthermore, the semi-simple $S$ and nilpotent $N$ satisfying (i) and (ii) are unique, and each is a polynomial in $T$.
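For a matrix whose decomposition is visible by inspection, the conditions of Theorem 13 are easy to verify numerically (an assumed example; this is not a general algorithm for computing $S$ and $N$):

```python
import numpy as np

# A has the single eigenvalue 2, so its semi-simple part is 2I
A = np.array([[2., 1.],
              [0., 2.]])
S = 2 * np.eye(2)        # semi-simple (diagonal) part; here S = 2I
N = A - S                # nilpotent part, also a polynomial in A

print(np.allclose(A, S + N))       # True:  T = S + N
print(np.allclose(S @ N, N @ S))   # True:  SN = NS
print(np.allclose(N @ N, 0))       # True:  N is nilpotent (N^2 = 0)
```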