Eigenvectors and Eigenvalues of a Linear Operator

Diagonal matrices are the simplest to work with. The question arises whether it is possible to find a basis in which the matrix of a linear operator has diagonal form. Under suitable conditions such a basis exists.
Let a linear space R^n and a linear operator A acting in it be given; in this case the operator A maps R^n into itself, that is, A: R^n → R^n.

Definition. A non-zero vector x is called an eigenvector of the operator A if the operator A transforms x into a vector collinear to it, i.e. Ax = λx. The number λ is called the eigenvalue (characteristic number) of the operator A corresponding to the eigenvector x.
We note some properties of eigenvalues and eigenvectors.
1. Any non-zero linear combination of eigenvectors of the operator A corresponding to the same eigenvalue λ is an eigenvector with the same eigenvalue.
2. Eigenvectors of the operator A with pairwise distinct eigenvalues λ1, λ2, …, λm are linearly independent.
3. If the eigenvalues coincide, λ1 = λ2 = … = λm = λ, then the eigenvalue λ corresponds to no more than m linearly independent eigenvectors.

So, if there are n eigenvectors corresponding to pairwise distinct eigenvalues λ1, λ2, …, λn, then by property 2 they are linearly independent and can therefore be taken as a basis of the space R^n. Let us find the form of the matrix of the linear operator A in the basis of its eigenvectors, for which we act with the operator A on the basis vectors: Ae_i = λ_i e_i, so the i-th column of the matrix contains λ_i in the i-th position and zeros elsewhere.
Thus, the matrix of the linear operator A in the basis of its eigenvectors has diagonal form, and the eigenvalues of the operator A stand on the diagonal.
Is there another basis in which the matrix has a diagonal form? The answer to this question is given by the following theorem.

Theorem. The matrix of a linear operator A in the basis e_i (i = 1..n) has diagonal form if and only if all vectors of the basis are eigenvectors of the operator A.

Rule for finding eigenvalues and eigenvectors

Let the vector x = x1e1 + x2e2 + … + xnen, where x1, x2, …, xn are the coordinates of the vector x relative to the basis e1, e2, …, en, and let x be an eigenvector of the linear operator A corresponding to the eigenvalue λ, i.e. Ax = λx. This relation can be written in matrix form:

(A − λE)X = 0. (*)


Equation (*) can be considered as an equation for finding x with x ≠ 0; that is, we are interested in non-trivial solutions, since an eigenvector cannot be zero. It is known that non-trivial solutions of a homogeneous system of linear equations exist if and only if det(A − λE) = 0. Thus, for λ to be an eigenvalue of the operator A it is necessary and sufficient that det(A − λE) = 0.
If equation (*) is written out in coordinate form, we get a system of linear homogeneous equations:

(a11 − λ)x1 + a12x2 + … + a1nxn = 0,
a21x1 + (a22 − λ)x2 + … + a2nxn = 0,
…
an1x1 + an2x2 + … + (ann − λ)xn = 0,    (1)

where A = (aij) is the matrix of the linear operator.

System (1) has a nonzero solution if its determinant D equals zero:

D = det(A − λE) =
| a11 − λ   a12      …   a1n     |
| a21       a22 − λ  …   a2n     |
| …                              |
| an1       an2      …   ann − λ | = 0.

We have obtained an equation for finding the eigenvalues.
This equation is called the characteristic equation, and its left side is called the characteristic polynomial of the matrix (operator) A. If the characteristic polynomial has no real roots, then the matrix A has no (real) eigenvectors and cannot be reduced to diagonal form.
Let λ1, λ2, …, λn be the real roots of the characteristic equation, some of which may be multiple. Substituting these values in turn into system (1), we find the eigenvectors.
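
This two-step rule (roots of the characteristic polynomial, then the null space of A − λE for each root) can be run mechanically; below is a minimal sketch in Python with sympy, where the matrix is an arbitrary illustration rather than one from the text:

```python
import sympy as sp

# An arbitrary 2x2 matrix, used only to illustrate the rule.
A = sp.Matrix([[2, 1],
               [1, 2]])
lam = sp.symbols('lambda')
n = A.shape[0]

# Step 1: the characteristic equation det(A - lambda*E) = 0.
char_poly = (A - lam * sp.eye(n)).det()
eigenvalues = sp.solve(sp.Eq(char_poly, 0), lam)
print(eigenvalues)  # [1, 3]

# Step 2: for each eigenvalue, the eigenvectors are the non-trivial
# solutions of the homogeneous system (1), i.e. the null space of A - lambda*E.
for ev in eigenvalues:
    vectors = (A - ev * sp.eye(n)).nullspace()
    print(ev, [list(v) for v in vectors])  # 1 -> (-1, 1), 3 -> (1, 1)
```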

Example 12. The linear operator A acts in R^3 according to the law , where x1, x2, x3 are the coordinates of a vector in the basis e1, e2, e3. Find the eigenvalues and eigenvectors of this operator.
Solution. We build the matrix of this operator:
.
We compose a system for determining the coordinates of eigenvectors:

We compose the characteristic equation and solve it:

.
λ1,2 = −1, λ3 = 3.
Substituting λ = -1 into the system, we have:
or
Since the rank of the matrix of the system is r = 2, there are two dependent variables and one free variable.
Let x1 be the free unknown. Solving this system in any way, we find its general solution; the fundamental system of solutions consists of one solution, since n − r = 3 − 2 = 1.
The set of eigenvectors corresponding to the eigenvalue λ = −1 has the form x1·(…), where x1 is any non-zero number. Let us choose one vector from this set, for example, by setting x1 = 1.
Arguing similarly, we find the eigenvector corresponding to the eigenvalue λ = 3: .
In the space R^3 a basis consists of three linearly independent vectors, but we have obtained only two linearly independent eigenvectors, so a basis of R^3 cannot be formed from them. Consequently, the matrix A of the linear operator cannot be reduced to diagonal form.
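
The matrix of Example 12 is not reproduced in this text, but the phenomenon it illustrates is easy to replicate: below is a sketch with a stand-in 3×3 matrix (an assumption, not the example's matrix) whose double eigenvalue yields only one eigenvector, so a basis of eigenvectors does not exist:

```python
import sympy as sp

# Stand-in matrix (not the one from Example 12): the eigenvalue 3 has
# algebraic multiplicity 2 but only a one-dimensional eigenspace.
A = sp.Matrix([[3, 1, 0],
               [0, 3, 0],
               [0, 0, 1]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis).
for ev, mult, vectors in A.eigenvects():
    print(ev, mult, [list(v) for v in vectors])
# 1 -> one eigenvector (0, 0, 1); 3 -> only ONE eigenvector (1, 0, 0) despite
# multiplicity 2, so only two independent eigenvectors exist in total and the
# matrix cannot be reduced to diagonal form.
```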

Example 13. Given a matrix A.
1. Prove that the vector is an eigenvector of the matrix A. Find the eigenvalue corresponding to this eigenvector.
2. Find a basis in which the matrix A has a diagonal form.
Solution.
1. If Ax = λx, then x is an eigenvector:

.
The vector (1, 8, −1) is an eigenvector; the eigenvalue is λ = −1.
The matrix has diagonal form in a basis consisting of eigenvectors. One of them is already known. Let us find the rest.
We are looking for eigenvectors from the system:

Characteristic equation:
(3 + λ)[−2(2 − λ)(2 + λ) + 3] = 0; (3 + λ)(λ² − 1) = 0;
λ1 = −3, λ2 = 1, λ3 = −1.
Find the eigenvector corresponding to the eigenvalue λ = -3:

The rank of the matrix of this system is two; the unknowns x1 and x3 are determined uniquely, x1 = x3 = 0, while x2 can be anything non-zero, for example x2 = 1. Thus, the vector (0, 1, 0) is an eigenvector corresponding to λ = −3. Let us check:
.
If λ = 1, then we get the system
The rank of the matrix is two. Cross out the last equation.
Let x3 be the free unknown. Then x1 = −3x3, 4x2 = 10x1 − 6x3 = −30x3 − 6x3, x2 = −9x3.
Setting x3 = 1, we have (−3, −9, 1), an eigenvector corresponding to the eigenvalue λ = 1. Check:

.
Since the eigenvalues are real and distinct, the corresponding eigenvectors are linearly independent, so they can be taken as a basis in R^3. Thus, in the basis (0, 1, 0), (−3, −9, 1), (1, 8, −1) the matrix A has the form:
diag(−3, 1, −1).
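
The entries of A itself are not shown above, but the example's data determine it completely: A = S·diag(−3, 1, −1)·S⁻¹, where the columns of S are the three eigenvectors found. A quick numpy check of this reconstruction:

```python
import numpy as np

# Columns of S are the eigenvectors of Example 13, in the order of the
# eigenvalues -3, 1, -1.
S = np.array([[ 0, -3,  1],
              [ 1, -9,  8],
              [ 0,  1, -1]], dtype=float)
D = np.diag([-3.0, 1.0, -1.0])

# Reconstruct the operator's matrix from its eigen-decomposition.
A = S @ D @ np.linalg.inv(S)

# Verify A*v = lambda*v for every stated eigenpair.
for lam, v in zip(np.diag(D), S.T):
    assert np.allclose(A @ v, lam * v)
print(np.round(A, 6))
```
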
Not every matrix of a linear operator A: R^n → R^n can be reduced to diagonal form, since some linear operators have fewer than n linearly independent eigenvectors. However, if the matrix is symmetric, then a root of the characteristic equation of multiplicity m always has exactly m linearly independent eigenvectors.

Definition. A square matrix is called symmetric if the elements symmetric about the main diagonal are equal, that is, aij = aji.
Remarks. 1. All eigenvalues of a symmetric matrix are real.
2. The eigenvectors of a symmetric matrix corresponding to pairwise distinct eigenvalues are orthogonal.
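
Both remarks are easy to observe numerically. A small sketch using numpy's eigh routine, which is designed for symmetric matrices (the matrix below is an arbitrary symmetric example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(A, A.T)        # symmetric: a_ij = a_ji

w, V = np.linalg.eigh(A)          # specialized solver for symmetric matrices
print(w)                          # all eigenvalues are real

# Eigenvectors of distinct eigenvalues are orthogonal; eigh even returns an
# orthonormal set, so V^T V = E.
assert np.allclose(V.T @ V, np.eye(3))
```
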
As one of the numerous applications of this apparatus, we consider the problem of determining the type of a second-order curve.

Definition: Let L be a given n-dimensional linear space. A non-zero vector x ∈ L is called an eigenvector of the linear transformation A if there is a number λ such that the equality holds:

Ax = λx.
(7.1)

In this case, the number λ is called the eigenvalue (characteristic number) of the linear transformation A corresponding to the vector x.

Moving the right side of (7.1) to the left and taking into account the relation x = Ex, we rewrite (7.1) as

(A − λE)x = 0.
(7.2)

Equation (7.2) is equivalent to the system of linear homogeneous equations:

(a11 − λ)x1 + a12x2 + … + a1nxn = 0,
a21x1 + (a22 − λ)x2 + … + a2nxn = 0,
…
an1x1 + an2x2 + … + (ann − λ)xn = 0.
(7.3)

For a nonzero solution of the system of linear homogeneous equations (7.3) to exist, it is necessary and sufficient that the determinant of the coefficients of this system equal zero, i.e.

|A − λE| =
| a11 − λ   a12      …   a1n     |
| a21       a22 − λ  …   a2n     |
| …                              |
| an1       an2      …   ann − λ | = 0.
(7.4)

This determinant is a polynomial of degree n in λ; it is called the characteristic polynomial of the linear transformation A, and equation (7.4) the characteristic equation of the matrix A.

Definition: If a linear transformation A in some basis e1, e2, …, en has the matrix A = (aij), then the eigenvalues of the linear transformation A can be found as the roots λ1, λ2, …, λn of the characteristic equation (7.4).

Consider a special case. Let A be a linear transformation of the plane whose matrix is

( a11  a12 )
( a21  a22 ).

Then the transformation A can be given by the formulas

x1′ = a11x1 + a12x2,
x2′ = a21x1 + a22x2

in some basis e1, e2.

If the transformation A has an eigenvector x = (x1, x2) with eigenvalue λ, then Ax = λx, i.e.

a11x1 + a12x2 = λx1,
a21x1 + a22x2 = λx2,

or

(a11 − λ)x1 + a12x2 = 0,
a21x1 + (a22 − λ)x2 = 0.

Since the eigenvector is non-zero, x1 and x2 cannot both equal zero. Since this system is homogeneous, for it to have a nontrivial solution the determinant of the system must equal zero. Otherwise, by Cramer's rule, the system has the unique zero solution, which is impossible.

The resulting equation

| a11 − λ   a12     |
| a21       a22 − λ | = 0

is the characteristic equation of the linear transformation A.

Thus, one can find an eigenvector (x1, x2) of the linear transformation A with eigenvalue λ, where λ is a root of the characteristic equation, and x1 and x2 are roots of the system of equations obtained by substituting the value λ into it.

It is clear that if the characteristic equation has no real roots, then the linear transformation A has no eigenvectors.

It should be noted that if x is an eigenvector of the transformation A, then any vector collinear to it is also an eigenvector with the same eigenvalue.

Indeed, A(kx) = kAx = kλx = λ(kx). If we take into account that all such vectors have a common origin, then they form a so-called eigendirection, or proper line.

If the characteristic equation has two distinct real roots λ1 and λ2, then, substituting them into the system of equations, we obtain in each case an infinite number of solutions (because the equations are linearly dependent). This set of solutions defines two proper lines.

If the characteristic equation has two equal roots λ1 = λ2 = λ, then either there is only one proper line, or, if upon substitution the system turns into a system of the form

0·x1 + 0·x2 = 0,
0·x1 + 0·x2 = 0,

then this system is satisfied by any values of x1 and x2. In that case all vectors are eigenvectors, and such a transformation is called a similarity transformation.
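
For the 2×2 case the characteristic equation can be written out once and for all: λ² − (a11 + a22)λ + (a11·a22 − a12·a21) = 0, and the cases above (two distinct roots, a double root, the similarity transformation) are distinguished by its discriminant. A symbolic check of this formula with sympy:

```python
import sympy as sp

a11, a12, a21, a22, lam = sp.symbols('a11 a12 a21 a22 lambda')
A = sp.Matrix([[a11, a12],
               [a21, a22]])

# Expand det(A - lambda*E): the 2x2 characteristic polynomial.
char = sp.expand((A - lam * sp.eye(2)).det())
print(sp.collect(char, lam))
# e.g. lambda**2 - lambda*(a11 + a22) + a11*a22 - a12*a21
```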


Example. Find the characteristic numbers and eigenvectors of the linear transformation with matrix A =
.

We write the linear transformation in the form:

Let's make the characteristic equation:

 2 - 4+ 4 = 0;

The roots of the characteristic equation:  1 = 2 = 2;

We get:

From the system we obtain the dependence x1 − x2 = 0. The eigenvectors for the first root of the characteristic equation have coordinates (t; t), where t is a parameter.

The eigenvector can be written: x = (t; t).
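
The example's matrix is not shown above; one matrix consistent with everything stated (characteristic equation λ² − 4λ + 4 = 0, the dependence x1 − x2 = 0, eigenvectors (t; t)) is taken below as a hypothetical reconstruction and verified numerically:

```python
import numpy as np

# Hypothetical reconstruction consistent with the stated results.
A = np.array([[3.0, -1.0],
              [1.0,  1.0]])

# Characteristic polynomial: lambda**2 - 4*lambda + 4, double root lambda = 2.
assert np.allclose(np.poly(A), [1.0, -4.0, 4.0])

# Every eigenvector is proportional to (1, 1), i.e. satisfies x1 - x2 = 0.
v = np.array([1.0, 1.0])
assert np.allclose(A @ v, 2 * v)
print("checks passed")
```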

Consider another special case. If x is an eigenvector of the linear transformation A given in a three-dimensional linear space, and x1, x2, x3 are the components of this vector in some basis e1, e2, e3, then

a11x1 + a12x2 + a13x3 = λx1,
a21x1 + a22x2 + a23x3 = λx2,
a31x1 + a32x2 + a33x3 = λx3,

where λ is the eigenvalue (characteristic number) of the transformation A.

If the matrix of the linear transformation A has the form

( a11  a12  a13 )
( a21  a22  a23 )
( a31  a32  a33 ),

then the characteristic equation is

| a11 − λ   a12       a13     |
| a21       a22 − λ   a23     |
| a31       a32       a33 − λ | = 0.

Expanding the determinant, we obtain a cubic equation in λ. Any cubic equation with real coefficients has either one or three real roots.

Hence any linear transformation in three-dimensional space has eigenvectors.

Example. Find the characteristic numbers and eigenvectors of the linear transformation A with matrix A =
.

Let us compose the characteristic equation:

−(3 + λ)((1 − λ)(2 − λ) − 2) + 2(4 − 2λ − 2) − 4(2 − 1 + λ) = 0

−(3 + λ)(2 − λ − 2λ + λ² − 2) + 2(2 − 2λ) − 4(1 + λ) = 0

−(3 + λ)(λ² − 3λ) + 4 − 4λ − 4 − 4λ = 0

−λ³ − 3λ² + 9λ + 3λ² − 8λ = 0;  −λ³ + λ = 0

λ1 = 0; λ2 = 1; λ3 = −1.

For  1 = 0:

If we take x 3 \u003d 1, we get x 1 \u003d 0, x 2 \u003d -2

Eigenvectors
t, where t is a parameter.

Similarly, one can find the eigenvectors for λ2 and λ3.

A vector X ≠ 0 is called an eigenvector of a linear operator with matrix A if there is a number λ such that AX = λX.

In this case, the number λ is called the eigenvalue of the operator (matrix A) corresponding to the vector X.

In other words, an eigenvector is a vector that under the action of a linear operator turns into a collinear vector, i.e. is simply multiplied by some number. Non-eigenvectors, by contrast, transform in a more complicated way.

We write the definition of the eigenvector as a system of equations:

a11x1 + a12x2 + … + a1nxn = λx1,
a21x1 + a22x2 + … + a2nxn = λx2,
…
an1x1 + an2x2 + … + annxn = λxn.

Let us move all the terms to the left side:

(a11 − λ)x1 + a12x2 + … + a1nxn = 0,
a21x1 + (a22 − λ)x2 + … + a2nxn = 0,
…
an1x1 + an2x2 + … + (ann − λ)xn = 0.

The last system can be written in matrix form as follows:

(A − λE)X = O

The resulting system always has the zero solution X = O. Systems in which all free terms equal zero are called homogeneous. If the matrix of such a system is square and its determinant is not zero, then by Cramer's formulas we always get the unique zero solution. It can be proved that the system has non-zero solutions if and only if the determinant of its matrix equals zero, i.e.

|A − λE| = 0

This equation with unknown λ is called the characteristic equation, and its left side the characteristic polynomial of the matrix A (linear operator).

It can be proved that the characteristic polynomial of a linear operator does not depend on the choice of basis.

For example, let us find the eigenvalues and eigenvectors of the linear operator given by the matrix

A = ( 1  4 )
    ( 9  1 ).

To do this, we compose the characteristic equation |A − λE| = (1 − λ)² − 36 = 1 − 2λ + λ² − 36 = λ² − 2λ − 35 = 0; D = 4 + 140 = 144; the eigenvalues are λ1 = (2 − 12)/2 = −5, λ2 = (2 + 12)/2 = 7.

To find the eigenvectors, we solve two systems of equations

(A + 5E) X = O

(A - 7E) X = O

For the first of them, the augmented matrix takes the form

,

whence x2 = c, x1 + (2/3)c = 0; x1 = −(2/3)c, i.e. X(1) = (−(2/3)c; c).

For the second of them, the augmented matrix takes the form

,

whence x2 = c1, x1 − (2/3)c1 = 0; x1 = (2/3)c1, i.e. X(2) = ((2/3)c1; c1).

Thus, the eigenvectors of this linear operator are all vectors of the form (−(2/3)c; c) with eigenvalue −5 and all vectors of the form ((2/3)c1; c1) with eigenvalue 7.
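
A quick numerical cross-check of this example (a sketch, not part of the original solution; the matrix entries are recovered from the computations above):

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [9.0, 1.0]])

print(np.sort(np.linalg.eigvals(A)))        # [-5.  7.]

# Eigenvector directions found above: (-(2/3)c; c) and ((2/3)c1; c1).
assert np.allclose(A @ np.array([-2/3, 1.0]), -5 * np.array([-2/3, 1.0]))
assert np.allclose(A @ np.array([ 2/3, 1.0]),  7 * np.array([ 2/3, 1.0]))
```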

It can be proved that the matrix of the operator A in a basis consisting of its eigenvectors is diagonal and has the form

diag(λ1, λ2, …, λn),

where λi are the eigenvalues of this matrix.

The converse is also true: if the matrix A in some basis is diagonal, then all vectors of this basis will be eigenvectors of this matrix.

It can also be proved that if a linear operator has n pairwise distinct eigenvalues, then the corresponding eigenvectors are linearly independent, and the matrix of this operator in the corresponding basis has a diagonal form.

Eigenvalues (numbers) and eigenvectors.
Solution examples



From both equations it follows that .

Let us put ; then: .

As a result, we obtain the second eigenvector: .

Let us recap the key points of the solution:

– the resulting system certainly has a general solution (the equations are linearly dependent);

- "Y" is selected in such a way that it is integer and the first "x" coordinate is integer, positive and as small as possible.

– we check that the particular solution satisfies each equation of the system.

Answer .

Intermediate "checkpoints" were quite enough, so the check of equalities, in principle, is superfluous.

In various sources, the coordinates of eigenvectors are often written not in columns but in rows, for example: (and, to be honest, I myself used to write them in rows). This option is acceptable, but in light of the topic of linear transformations it is technically more convenient to use column vectors.

Perhaps the solution seemed very long to you, but that's only because I commented on the first example in great detail.

Example 2

Find the eigenvalues and eigenvectors of the matrix

Practice on your own! An approximate sample of the final write-up of the task is at the end of the lesson.

Sometimes you need to perform an additional task, namely:

write the canonical decomposition of the matrix

What it is?

If the eigenvectors of the matrix form a basis, then it can be represented as

A = S·D·S⁻¹,

where S is the matrix composed of the coordinates of the eigenvectors and D is the diagonal matrix with the corresponding eigenvalues.

This matrix decomposition is called canonical or diagonal.

Consider the matrix of the first example. Its eigenvectors are linearly independent (non-collinear) and form a basis. Let us compose the matrix S from their coordinates:

On the main diagonal of the matrix D the eigenvalues are placed in the proper order, and the remaining elements equal zero:
– once again I emphasize the importance of the order: "two" corresponds to the 1st vector and is therefore located in the 1st column, "three" to the 2nd vector.

Using the usual algorithm for finding the inverse matrix, or the Gauss-Jordan method, we find S⁻¹. No, that's not a typo! Before you is an event as rare as a solar eclipse: the inverse coincides with the original matrix.

It remains to write down the canonical decomposition of the matrix:
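
In general form the decomposition is checked in one line. A sketch with an arbitrary diagonalizable matrix (an illustration, not the matrix of the first example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # arbitrary diagonalizable example

w, S = np.linalg.eig(A)           # eigenvalues and eigenvector columns
D = np.diag(w)

# Canonical (diagonal) decomposition: A = S * D * S^(-1).
assert np.allclose(A, S @ D @ np.linalg.inv(S))

# Equivalently, S^(-1) * A * S is the diagonal matrix of eigenvalues.
print(np.round(np.linalg.inv(S) @ A @ S, 6))
```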

The system can be solved by elementary transformations, and in the following examples we will resort to this method. But here the "school" method works much faster. From the 3rd equation we express ; we substitute it into the second equation:

Since the first coordinate is zero, we obtain a system , from each equation of which it follows that .

And again note the mandatory presence of a linear dependence. If only the trivial solution is obtained , then either the eigenvalue was found incorrectly, or the system was composed/solved with an error.

The value gives compact coordinates:

Eigenvector:

And once again, we check that the found solution satisfies every equation of the system. In the following paragraphs and in subsequent tasks, I recommend taking this wish as a mandatory rule.

2) For the eigenvalue, following the same principle, we obtain the following system:

From the 2nd equation of the system we express ; we substitute it into the third equation:

Since the "Z" coordinate is equal to zero, we obtain a system , from each equation of which a linear dependence follows.

Let

We check that the solution satisfies every equation of the system.

Thus, the eigenvector: .

3) And, finally, the system corresponding to the eigenvalue :

The second equation looks the simplest, so we express from it and substitute into the 1st and 3rd equations:

Everything is fine - a linear dependence was revealed, which we substitute into the expression:

As a result, "X" and "Y" were expressed through "Z": . In practice, it is not necessary to achieve just such relationships; in some cases it is more convenient to express both through or and through . Or even a “train” - for example, “X” through “Y”, and “Y” through “Z”

Let's put then:

We check that the found solution satisfies each equation of the system and write the third eigenvector

Answer: eigenvectors:

Geometrically, these vectors define three different spatial directions ("there and back again"), along which the linear transformation takes nonzero vectors (eigenvectors) to vectors collinear with them.

If by the condition it was required to find the canonical decomposition, then it is possible here, because distinct eigenvalues correspond to distinct linearly independent eigenvectors. We compose the matrix S from their coordinates and the diagonal matrix D from the corresponding eigenvalues, and find S⁻¹.

If, according to the condition, it is necessary to write down the matrix of the linear transformation in the basis of eigenvectors, then we give the answer in the form D. There is a difference, and a significant one! For this matrix is precisely the matrix D.

A problem with simpler calculations to solve on your own:

Example 5

Find the eigenvectors of the linear transformation given by the matrix

When finding the eigenvalues, try not to let things get as far as a polynomial of the 3rd degree. Besides, your system solutions may differ from mine - there is no unambiguity here; and the vectors you find may differ from the sample vectors up to proportionality of their respective coordinates. For example, and . It is more aesthetically pleasing to present the answer in the form , but it is fine if you stop at the second option. However, there are reasonable limits to everything; the version already does not look very good.

An approximate final sample of the assignment at the end of the lesson.

How to solve the problem in the case of multiple eigenvalues?

The general algorithm remains the same, but it has its peculiarities, and it is advisable to keep certain parts of the solution in a stricter academic style:

Example 6

Find the eigenvalues and eigenvectors

Solution

Of course, we expand the determinant along the fabulous first column:

And, after factoring the square trinomial:

As a result, the eigenvalues are obtained, two of which coincide (a multiple root).

Let's find the eigenvectors:

1) We will deal with a lone soldier according to a “simplified” scheme:

From the last two equations, the equality is clearly visible, which, obviously, should be substituted into the 1st equation of the system:

A better combination cannot be found:
Eigenvector:

2-3) Now we deal with the pair of sentries. In this case there may be either two eigenvectors or one. Regardless of the multiplicity of the roots, we substitute the value into the determinant , which brings us to the following homogeneous system of linear equations:

The eigenvectors are exactly the vectors of the fundamental system of solutions.

Actually, throughout the lesson we have only been finding vectors of the fundamental system; for the time being this term was simply not needed. By the way, those nimble students who slipped past the topic of homogeneous equations in camouflage will be forced to study it now.


The only action was to remove the extra rows. The result is a "one by three" matrix with a formal "step" in the middle.
– the basic variable, – free variables. There are two free variables, therefore there are also two vectors of the fundamental system.

Let us express the basic variable in terms of the free variables: . The zero factor in front of "x" allows it to take absolutely any values (which is also clearly visible from the system of equations).

In the context of this problem, it is more convenient to write the general solution not in a row, but in a column:

The pair corresponds to an eigenvector:
The pair corresponds to an eigenvector:

Note: sophisticated readers can pick out these vectors orally, just by analyzing the system , but some knowledge is needed here: there are three variables and the rank of the system matrix is one, which means the fundamental system of solutions consists of 3 − 1 = 2 vectors. However, the found vectors are perfectly visible even without this knowledge, on a purely intuitive level. Then the third vector can be written even "more beautifully": . But I caution that in another example a simple selection may not work out, which is why the reservation is intended for experienced people. Besides, why not take, say, as the third vector? After all, its coordinates also satisfy each equation of the system, and the vectors are linearly independent. This option is suitable in principle, but "crooked", since the "other" vector is a linear combination of the vectors of the fundamental system.

Answer: eigenvalues: , eigenvectors:
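
The same effect - a double root that nevertheless yields two vectors of the fundamental system - can be reproduced in a small sketch (the matrix below is an illustration, not Example 6):

```python
import sympy as sp

# Illustration: the eigenvalue -1 has multiplicity 2 AND a 2-dimensional
# eigenspace, so the multiple root contributes two eigenvectors.
A = sp.Matrix([[1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])

for ev, mult, vectors in A.eigenvects():
    print(ev, mult, [list(v) for v in vectors])
# -1: multiplicity 2 -> two independent eigenvectors, the fundamental system
#     of (A + E)x = 0 (its single independent equation is x1 + x2 + x3 = 0);
#  5: multiplicity 1 -> one eigenvector, proportional to (1, 1, 1).
```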

A similar example for a do-it-yourself solution:

Example 7

Find the eigenvalues and eigenvectors

An approximate sample of the final write-up at the end of the lesson.

It should be noted that in both the 6th and 7th examples a triple of linearly independent eigenvectors is obtained, and therefore the original matrix can be represented in the canonical decomposition . But such luck does not happen in every case:

Example 8


Solution: compose and solve the characteristic equation:

We expand the determinant by the first column:

We carry out further simplifications by the method considered above, avoiding a polynomial of the 3rd degree:

are eigenvalues.

Let's find the eigenvectors:

1) There are no difficulties with the root:

Do not be surprised: in addition to the usual set, other variable names are in use here - it makes no difference.

From the 3rd equation we express ; we substitute it into the 1st and 2nd equations:

From both equations follows:

Let then:

2-3) For multiple values, we get the system .

Let us write down the matrix of the system and, using elementary transformations, bring it to row echelon form:

The simplest linear operator is multiplication of a vector by a number \(\lambda\). This operator simply stretches all vectors by a factor of \(\lambda\). Its matrix form in any basis is \(\mathrm{diag}(\lambda, \lambda, ..., \lambda)\). For definiteness, we fix a basis \(\{e\}\) in the vector space \(\mathit{L}\) and consider a linear operator whose matrix form in this basis is diagonal, \(\alpha = \mathrm{diag}(\lambda_1, \lambda_2, ..., \lambda_n)\). By the definition of the matrix form, this operator stretches \(e_k\) by a factor of \(\lambda_k\), i.e. \(Ae_k = \lambda_k e_k\) for all \(k = 1, 2, ..., n\). Diagonal matrices are convenient to work with; a functional calculus is constructed for them simply: for any function \(f(x)\) one can put \(f(\mathrm{diag}(\lambda_1, \lambda_2, ..., \lambda_n)) = \mathrm{diag}(f(\lambda_1), f(\lambda_2), ..., f(\lambda_n))\). Thus a natural question arises: given a linear operator \(A\), is it possible to choose a basis in the vector space so that the matrix form of the operator \(A\) is diagonal in this basis? This question leads to the definition of eigenvalues and eigenvectors.

Definition. Let for a linear operator \(A\) there exist a nonzero vector \(u\) and a number \(\lambda\) such that \[ Au=\lambda \cdot u. \quad \quad(59) \] Then the vector \(u\) is called an eigenvector of the operator \(A\), and the number \(\lambda\) the corresponding eigenvalue of the operator \(A\). The set of all eigenvalues is called the spectrum of the linear operator \(A\).

A natural problem arises: for a given linear operator, find its eigenvalues and the corresponding eigenvectors. This is called the spectral problem for a linear operator.

Equation for eigenvalues

For definiteness, we fix a basis in the vector space, i.e. we assume it is given once and for all. Then, as discussed above, the consideration of linear operators reduces to the consideration of matrices - the matrix forms of linear operators. Equation (59) can be rewritten as \[ (\alpha -\lambda E)u=0. \] Here \(E\) is the identity matrix and \(\alpha\) is the matrix form of our linear operator \(A\). This relation can be interpreted as a system of \(n\) linear equations for \(n\) unknowns - the coordinates of the vector \(u\). This system of equations is homogeneous, and we need its non-trivial solutions. Previously, a condition for the existence of such solutions was given: it is necessary and sufficient that the rank of the system be less than the number of unknowns. This yields the equation for the eigenvalues: \[ \det(\alpha -\lambda E)=0. \quad \quad(60) \]
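
Numerically, both the coefficients of the characteristic polynomial and its roots are available directly; note that numpy normalizes the leading coefficient to 1 rather than \((-1)^n\). A sketch with an arbitrary matrix:

```python
import numpy as np

alpha = np.array([[2.0, 1.0],
                  [1.0, 2.0]])    # matrix form of some operator A

coeffs = np.poly(alpha)           # [ 1. -4.  3.], i.e. lambda**2 - 4*lambda + 3
roots = np.roots(coeffs)          # roots of det(alpha - lambda*E) = 0
print(coeffs, np.sort(roots))     # ... [1. 3.]

# The same spectrum straight from the eigenvalue solver:
print(np.sort(np.linalg.eigvals(alpha)))
```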

Definition. Equation (60) is called characteristic equation for the linear operator \(A\).

Let us describe the properties of this equation and of its solutions. Written out explicitly, it is an equation of the form \[ (-1)^n\lambda ^n+...+\det(A)=0. \quad \quad(61) \] On the left side stands a polynomial in the variable \(\lambda\). Equations of this kind are called algebraic equations of degree \(n\). Let us recall the necessary facts about these equations.

Background on algebraic equations. An algebraic equation of degree \(n\) has exactly \(n\) roots, counted with multiplicity, if complex roots are admitted; a root of multiplicity one is called simple.

Theorem. Let all eigenvalues of the linear operator \(A\) be simple. Then the set of eigenvectors corresponding to these eigenvalues forms a basis of the vector space.

It follows from the conditions of the theorem that all eigenvalues of the operator \(A\) are distinct. Suppose that the set of eigenvectors is linearly dependent, so that there are constants \(c_1,c_2,...,c_n\), not all zero, satisfying the condition \[ \sum_{k=1}^n c_k u_k=0. \quad \quad(62) \]

Among all such relations, consider one that includes the minimal number of terms, and act on it with the operator \(A\). By linearity we get: \[ A\left(\sum_{k=1}^n c_k u_k\right)=\sum_{k=1}^n c_k A u_k=\sum_{k=1}^n c_k \lambda_k u_k=0. \quad \quad(63) \]

Let, for definiteness, \(c_1 \neq 0\). Multiplying (62) by \(\lambda_1\) and subtracting it from (63), we obtain a relation of the form (62) but containing one term fewer. The contradiction proves the theorem.

So, under the conditions of the theorem there appears a basis associated with the given linear operator - the basis of its eigenvectors. Consider the matrix form of the operator in such a basis. As mentioned above, the \(k\)-th column of this matrix is the expansion of the vector \(Au_k\) in the basis. However, by definition \(Au_k=\lambda_k u_k\), so this expansion (what is written on the right side) contains only one term, and the constructed matrix turns out to be diagonal. As a result, we find that under the conditions of the theorem the matrix form of the operator in the basis of its eigenvectors equals \(\mathrm{diag}(\lambda_1,\lambda_2,...,\lambda_n)\). Therefore, if one needs to develop a functional calculus for a linear operator, it is reasonable to work in the basis of its eigenvectors.
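
A sketch of this functional calculus in numpy: \(f(A)\) is computed as \(S\,\mathrm{diag}(f(\lambda_1),...,f(\lambda_n))\,S^{-1}\) in the basis of eigenvectors, and checked against a directly computed power:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, S = np.linalg.eig(A)           # eigenvalues w and eigenvector columns S

def f_of_A(f):
    # f(A) = S * diag(f(lambda_1), ..., f(lambda_n)) * S^(-1)
    return S @ np.diag(f(w)) @ np.linalg.inv(S)

# Check f(x) = x**3 against the directly computed matrix power.
assert np.allclose(f_of_A(lambda x: x**3), A @ A @ A)

# Functions with no obvious matrix counterpart work the same way, e.g. sqrt:
sqrtA = f_of_A(np.sqrt)           # valid here since the eigenvalues 1, 3 >= 0
assert np.allclose(sqrtA @ sqrtA, A)
```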

If there are multiple roots among the eigenvalues of the linear operator, the description of the situation becomes more complicated and may involve so-called Jordan cells. We refer the reader to more advanced texts for a study of the relevant situations.