An iterative algorithm for finding a solution to a linear equation (1) that takes account of information about the inclusion of the spectrum of the operator in a certain set, and uses the properties and parameters of those polynomials that deviate least from zero on this set and are equal to 1 at 0.
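Equation (1) and the article's other displayed formulas are not reproduced in the text above. In the notation assumed for the illustrations added below, (1) is the operator equation
$$A u = f,$$
where $A$ denotes the operator whose spectrum is being localized, $u$ the unknown and $f$ the right-hand side.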
The most well-developed Chebyshev iteration method is obtained when the operator in (1) is linear and self-adjoint and its spectrum is contained in an interval whose end-points are the boundary points of the spectrum; the Chebyshev iteration method then uses the properties of the Chebyshev polynomials of the first kind. For this case one considers two types of Chebyshev iteration methods, (2) and (3),
in which, for a given initial approximation, one obtains a sequence of successive approximations. In (2) and (3) the coefficients entering the schemes are the numerical parameters of the method. The initial error and the error at the N-th iteration are related by formula (4).
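In the assumed notation, with $u$ the exact solution, $u^0$ the initial approximation and $u^N$ the $N$-th iterate, a relation of this kind reads
$$u^N - u = P_N(A)\,(u^0 - u), \qquad P_N(0) = 1,$$
where $P_N$ is a polynomial of degree $N$ determined by the parameters of the method; the normalization at zero is the condition stated at the beginning of the article.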
The polynomials appearing in (4) are calculated from the parameters of each of the methods (2) and (3): for method (2) they are given by formula (5), in which the iteration parameters enter in the order prescribed by a permutation, while for method (3) they are calculated from the recurrence relations (6).
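For reference, in the assumed notation (with iteration parameters written $\alpha_k$ and $\beta_k$), a two-term scheme of type (2) and a three-term scheme of type (3) are commonly written as
$$u^{k+1} = u^k - \alpha_{k+1}(A u^k - f), \qquad u^{k+1} = u^k - \alpha_{k+1}(A u^k - f) + \beta_{k+1}(u^k - u^{k-1}),$$
and the corresponding error polynomials are then
$$P_N(x) = \prod_{i=1}^{N}\bigl(1 - \alpha_{j_i} x\bigr), \qquad P_{k+1}(x) = (1 + \beta_{k+1} - \alpha_{k+1} x)\,P_k(x) - \beta_{k+1} P_{k-1}(x), \quad P_0(x) = 1,\; P_1(x) = 1 - \alpha_1 x,$$
where $(j_1, \dots, j_N)$ is the permutation fixing the order in which the parameters are used (the polynomial itself does not depend on this order, but the intermediate computations do). These standard forms may differ from the elided displays (2), (3), (5), (6) in details of notation.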
The methods (2) and (3) can be optimized on the class of problems for which the spectrum lies in the given interval, by choosing the parameters so that the polynomial in (4) is the polynomial least deviating from zero on this interval and equal to 1 at 0. It was proved in 1881 by P.L. Chebyshev that this is the polynomial (7), expressed in terms of the Chebyshev polynomial of degree N; the corresponding bound on the error after N iterations is given by inequality (8).
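With the spectral bounds denoted $m$ and $M$, $0 < m < M$ (assumed symbols for the elided boundary points), the polynomial found by Chebyshev and the corresponding least deviation can be written as
$$\bar P_N(x) = \frac{T_N\!\left(\dfrac{M + m - 2x}{M - m}\right)}{T_N\!\left(\dfrac{M + m}{M - m}\right)}, \qquad \max_{m \le x \le M}\bigl|\bar P_N(x)\bigr| = \frac{2 q^N}{1 + q^{2N}}, \quad q = \frac{1 - \sqrt{m/M}}{1 + \sqrt{m/M}},$$
where $T_N$ is the Chebyshev polynomial of the first kind of degree $N$.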
Substituting the polynomial (7) into (6), one determines the parameters of the method (3); they are given by formulas (9) and (10).
Thus, computing the parameters by the formulas (9) and (10), one obtains the Chebyshev iteration method (3), for which the error is optimally small at every iteration.
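Since formulas (6), (9) and (10) are not reproduced above, the following sketch implements a three-term iteration of type (3) in one standard form, with the coefficients generated directly from the recurrence for the Chebyshev polynomials; the names chebyshev_three_term, A, f, m, M, u0 (operator, right-hand side, assumed spectral bounds with 0 < m < M, initial approximation) are illustrative assumptions, not the article's notation.

```python
import numpy as np

def chebyshev_three_term(A, f, m, M, u0, num_iters):
    """Three-term Chebyshev iteration for A u = f, assuming the self-adjoint
    operator A has its spectrum in [m, M] with 0 < m < M."""
    d = 0.5 * (M + m)                  # centre of the spectral interval
    c = 0.5 * (M - m)                  # half-length of the spectral interval
    u_prev = np.asarray(u0, dtype=float)
    u = u_prev - (A @ u_prev - f) / d  # first step: error polynomial 1 - x/d
    rho_prev, rho = 1.0, d / c         # rho_k = T_k(d/c): T_0 = 1, T_1(t) = t
    for _ in range(1, num_iters):
        rho_next = 2.0 * (d / c) * rho - rho_prev  # T_{k+1} = 2 t T_k - T_{k-1}
        alpha = 2.0 * rho / (c * rho_next)         # weight of the residual A u - f
        beta = rho_prev / rho_next                 # weight of u^k - u^{k-1}
        u, u_prev = u - alpha * (A @ u - f) + beta * (u - u_prev), u
        rho_prev, rho = rho, rho_next
    return u
```

For instance, with A = np.diag([1.0, 2.0, 4.0]), f = np.ones(3), m = 1.0, M = 4.0 and a zero initial guess, a few dozen iterations reproduce the solution to machine accuracy.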
To optimize (2) for a given number of iterations N, the parameters corresponding to the permutation in formula (5) are chosen in such a way that (7) holds, that is, according to (11). Then after N iterations inequality (8) holds for the error.
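Formula (11) is likewise not reproduced. In the standard construction the parameters of the two-term method are the reciprocals of the roots of the Chebyshev polynomial transplanted to the spectral interval, taken in some stability-preserving order; the sketch below (function names and the bounds m, M again assumed) generates these parameters and carries out one cycle of a method of type (2).

```python
import numpy as np

def chebyshev_parameters(m, M, N):
    """Reciprocals of the N roots of the degree-N Chebyshev polynomial
    transplanted to [m, M]; a full cycle with these parameters realizes
    the optimal error polynomial of degree N."""
    i = np.arange(1, N + 1)
    roots = 0.5 * (M + m) - 0.5 * (M - m) * np.cos((2 * i - 1) * np.pi / (2 * N))
    return 1.0 / roots

def two_term_cycle(A, f, u, alphas, order):
    """One cycle of u <- u - alpha * (A u - f), taking the parameters in the
    order given by the permutation `order` (0-based indices into alphas)."""
    for j in order:
        u = u - alphas[j] * (A @ u - f)
    return u
```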
An important problem, in particular when the ratio of the boundary points of the spectrum is small, is the question of the stability of the method (2), (5), (11). An imprudent choice of the order of the parameters may lead to a catastrophic growth of intermediate quantities at some iterations, to the loss of significant figures, or to an increase in the rounding-off errors committed at intermediate iterations. There exist algorithms that mix the parameters in (11) and guarantee the stability of the calculations: in one case see Iteration algorithm; in the other, one of the algorithms for constructing the required permutation proceeds inductively, each permutation being built from the one constructed at the preceding step.
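The article's inductive rule itself is elided above. As an illustration only (not a reproduction of that rule), one widely used stable ordering for a number of iterations equal to a power of two interleaves each index with its "mirror" at every doubling step:

```python
def stable_order(N):
    """A stable ordering of the parameter indices 1..N for N = 2**p: at each
    doubling step every index j is immediately followed by its mirror n + 1 - j."""
    assert N > 0 and N & (N - 1) == 0, "N must be a power of two"
    order = [1]
    n = 1
    while n < N:
        n *= 2
        order = [k for j in order for k in (j, n + 1 - j)]
    return order  # e.g. stable_order(8) == [1, 8, 4, 5, 2, 7, 3, 6]
```

The indices returned here are 1-based; subtracting one gives a permutation suitable for the two_term_cycle sketch above.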
There exists a class of methods (2), the stable infinitely repeated optimal Chebyshev iteration methods, that allows one to repeat the method (2), (5), (11) after N iterations in such a way that it is stable and becomes optimal again for a certain increasing sequence of iteration numbers. In the simplest case it is clear from formula (13) that the parameters agree with (11). If after N iterations one repeats the iteration (2), (5), (11) further, taking for the parameters in (11) the values (14), then once again one obtains a Chebyshev iteration method after the total number of iterations performed. To ensure stability, the set (14) is decomposed into two subsets: each subset consists of those parameters that correspond to the roots of one of the two brackets in (13); within each subset the parameters are permuted according to the permutation described above. At the first stage of the continued iteration the elements of the first subset are substituted into (5), (11), and at the next stage those of the second subset; the corresponding permutation is defined in the same way. Continuing the process of forming the parameters in an analogous way, one obtains an infinite, uniformly distributed sequence of parameters, called a T-sequence, for which the method (2) again becomes optimal at each iteration number of this sequence.
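Formulas (13) and (14) themselves are not reproduced above. The repetition mechanism rests on the composition property of the Chebyshev polynomials, which in the assumed notation factors into two brackets:
$$T_{2N}(t) = 2\,T_N^2(t) - 1 = \bigl(\sqrt{2}\,T_N(t) - 1\bigr)\bigl(\sqrt{2}\,T_N(t) + 1\bigr),$$
so the roots of $T_{2N}$ split into the roots of the two factors, $N$ of each; a splitting of this kind underlies the decomposition into two subsets described above.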
The theory of the Chebyshev iteration methods (2), (3) can be extended to partial eigenvalue problems. Generalizations also exist to a certain class of non-self-adjoint operators, when the spectrum lies in a certain interval or within a certain domain of special shape (in particular, an ellipse); when information is known about the distribution of the initial error; or when the Chebyshev iteration method is combined with the method of conjugate gradients.
One of the effective methods of speeding up the convergence of the iterations (2), (3) is a preliminary transformation of equation (1) to an equivalent equation with a suitably chosen transforming operator, and the application of the Chebyshev iteration method to the transformed equation. The transforming operator is defined by taking account of two facts: 1) the algorithm for computing its action on a vector should not be laborious; and 2) the spectrum of the transformed operator should lie in a set that ensures the fast convergence of the Chebyshev iteration method.
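As one elementary illustration of such a transformation (an assumption chosen here for concreteness, not the article's construction), symmetric scaling by the diagonal of a positive-definite matrix is cheap to apply, keeps the transformed operator self-adjoint, and often narrows the interval containing its spectrum:

```python
import numpy as np

def diagonally_scaled_system(A, f):
    """Transform A u = f into the equivalent system C v = g with
    C = D^(-1/2) A D^(-1/2), g = D^(-1/2) f, v = D^(1/2) u, where D = diag(A);
    the diagonal entries of A are assumed positive."""
    d = np.sqrt(np.diag(A))
    C = A / np.outer(d, d)
    g = f / d
    return C, g  # solve C v = g by Chebyshev iteration, then recover u = v / d
```

The spectral bounds of the transformed operator must then be re-estimated before the Chebyshev parameters are formed.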