Isometry

For the mechanical engineering and architecture usage, see isometric projection. For isometry in differential geometry, see isometry (Riemannian geometry).
In mathematics, an isometry is a distance-preserving map between metric spaces. Geometric figures which can be related by an isometry are called congruent.
Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space M involves an isometry from M into M’, a quotient set of the space of Cauchy sequences on M. The original space M is thus isometrically isomorphic to a subspace of a complete metric space, and it is usually identified with this subspace. Other embedding constructions show that every metric space is isometrically isomorphic to a closed subset of some normed vector space and that every complete metric space is isometrically isomorphic to a closed subset of some Banach space.
An isometric surjective linear operator on a Hilbert space is called a unitary operator.
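As a minimal numerical sketch of the definition (using numpy; the rotation angle and sample points are arbitrary illustrative choices, not from the text above), one can check that a plane rotation, a standard isometry of the Euclidean plane, preserves distances:

    import numpy as np

    # A rotation of the plane by an arbitrary angle: a standard example of an isometry.
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    x = np.array([1.0, 2.0])
    y = np.array([-3.0, 0.5])

    # Distance-preserving: d(Rx, Ry) equals d(x, y) up to roundoff.
    print(np.isclose(np.linalg.norm(R @ x - R @ y), np.linalg.norm(x - y)))  # True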

Unitary matrix

In mathematics, a unitary matrix is an n \times n complex matrix U satisfying the condition
U^\dagger U = U U^\dagger = I_n,
where I_n is the identity matrix in n dimensions and U^\dagger is the conjugate transpose (also called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if and only if it has an inverse which is equal to its conjugate transpose U^\dagger:
U^{-1} = U^\dagger.
A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix G preserves the (real) inner product of two real vectors,
\langle Gx, Gy \rangle = \langle x, y \rangle,
so also a unitary matrix U satisfies
\langle Ux, Uy \rangle = \langle x, y \rangle
for all complex vectors x and y, where \langle \cdot, \cdot \rangle now stands for the standard inner product on \mathbb{C}^n.
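This invariance is easy to spot-check numerically; the following numpy sketch uses an arbitrary 2 x 2 unitary matrix and arbitrary vectors (none of them from the text above):

    import numpy as np

    # An arbitrary 2x2 unitary example (not from the text): a complex rotation.
    U = np.array([[1, 1j],
                  [1j, 1]]) / np.sqrt(2)

    # Defining condition: U† U = I.
    print(np.allclose(U.conj().T @ U, np.eye(2)))  # True

    x = np.array([1.0 + 2.0j, -0.5j])
    y = np.array([0.3 + 0.0j, 1.0 - 1.0j])

    # np.vdot conjugates its first argument, giving the standard inner product on C^n.
    print(np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y)))  # True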
If U is an n \times n matrix, then the following conditions are all equivalent:
  1. U is unitary
  2. U^\dagger is unitary
  3. the columns of U form an orthonormal basis of \mathbb{C}^n with respect to this inner product
  4. the rows of U form an orthonormal basis of \mathbb{C}^n with respect to this inner product
  5. U is an isometry with respect to the norm from this inner product
  6. U is a normal matrix with eigenvalues lying on the unit circle.
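The same example matrix as above can be used to spot-check a few of these equivalences numerically (a numpy sketch with arbitrary inputs):

    import numpy as np

    U = np.array([[1, 1j],
                  [1j, 1]]) / np.sqrt(2)

    # Condition 3: the columns are orthonormal, i.e. the Gram matrix U† U is the identity.
    print(np.allclose(U.conj().T @ U, np.eye(2)))                # True

    # Condition 5: U is an isometry for the norm induced by the inner product.
    x = np.random.randn(2) + 1j * np.random.randn(2)
    print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # True

    # Condition 6: U is normal and its eigenvalues lie on the unit circle.
    print(np.allclose(U @ U.conj().T, U.conj().T @ U))           # True
    print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))        # True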

Self-adjoint operator

In mathematics, on a finite-dimensional inner product space, a self-adjoint operator is one that is its own adjoint, or, equivalently, one whose matrix is Hermitian, where a Hermitian matrix is one which is equal to its own conjugate transpose. By the finite-dimensional spectral theorem such operators have an orthonormal basis in which the operator can be represented as a diagonal matrix with entries in the real numbers. In this article, we consider generalizations of this concept to operators on Hilbert spaces of arbitrary dimension.
Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the fact that in the Dirac–von Neumann formulation of quantum mechanics, physical observables such as position, momentum, angular momentum and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian
H \psi = V \psi - \frac{\hbar^2}{2m} \nabla^2 \psi
which as an observable corresponds to the total energy of a particle of mass m in a real potential field V. Differential operators are an important class of unbounded operators.
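As a concrete sketch (not from the article), the following numpy snippet discretizes this Hamiltonian on a one-dimensional grid with central differences, in assumed units where ħ = m = 1 and with an arbitrary harmonic potential V; the resulting matrix is self-adjoint, so the computed energy levels are real:

    import numpy as np

    # Discretize H psi = V psi - (1/2) psi'' on [0, 1] (assumed units: hbar = m = 1).
    n = 200
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]

    # Second derivative by central differences, with Dirichlet boundary conditions.
    lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / h**2

    V = np.diag(50.0 * (x - 0.5) ** 2)  # an arbitrary real potential well
    H = V - 0.5 * lap

    # H is real symmetric, hence self-adjoint: its spectrum is real.
    print(np.allclose(H, H.conj().T))   # True
    print(np.linalg.eigvalsh(H)[:3])    # lowest three (real) energy levels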
The structure of self-adjoint operators on infinite-dimensional Hilbert spaces essentially resembles the finite-dimensional case: operators are self-adjoint if and only if they are unitarily equivalent to real-valued multiplication operators. With suitable modifications, this result can be extended to possibly unbounded operators on infinite-dimensional spaces. Since an everywhere-defined self-adjoint operator is necessarily bounded, one needs to be more attentive to the domain issue in the unbounded case. This is explained below in more detail.

Spectral theorem

In mathematics, particularly linear algebra and functional analysis, the spectral theorem is any of a number of results about linear operators or about matrices. In broad terms the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.
The spectral theorem also provides a canonical decomposition, called the spectral decomposition, eigenvalue decomposition, or eigendecomposition, of the underlying vector space on which the operator acts.
In this article we consider mainly the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.
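For a Hermitian matrix, the finite-dimensional statement can be exercised directly with numpy (a sketch; the example matrix is randomly generated, not from the text): the operator is unitarily diagonalized, and it equals the sum of its real eigenvalues times orthogonal projections:

    import numpy as np

    # An arbitrary Hermitian example: B + B† is always self-adjoint.
    B = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
    A = B + B.conj().T

    # eigh returns real eigenvalues and a unitary matrix of eigenvectors.
    lam, V = np.linalg.eigh(A)

    # Spectral decomposition: A = V diag(lam) V†, i.e. a diagonal matrix in the eigenbasis.
    print(np.allclose(A, (V * lam) @ V.conj().T))  # True
    print(np.allclose(V.conj().T @ V, np.eye(4)))  # the eigenbasis is orthonormal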

Normal matrix

In mathematics, a complex square matrix A is a normal matrix if
A^*A=AA^*
where A* is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose.
If A is a real matrix, then A^* = A^T; it is normal if A^T A = A A^T.
Normality is a convenient test for diagonalizability: every normal matrix can be converted to a diagonal matrix by a unitary transform, and every matrix which can be made diagonal by a unitary transform is also normal, but finding the desired transform requires much more work than simply testing to see whether the matrix is normal.
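The test itself is a one-line commutator check in numpy (a sketch; the circulant example matrix is an arbitrary choice, normal without being Hermitian or unitary):

    import numpy as np

    # A circulant matrix: normal, though neither Hermitian nor unitary.
    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 2.0],
                  [2.0, 0.0, 1.0]])

    # Normality test: A commutes with its conjugate transpose.
    print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # True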
The concept of normal matrices can be extended to normal operators on infinite dimensional Hilbert spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis.

Hermitian matrix

In mathematics, a Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries that is equal to its own conjugate transpose – that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:
a_{i,j} = \overline{a_{j,i}}.
If the conjugate transpose of a matrix A is denoted by A^\dagger, then the Hermitian property can be written concisely as
A = A^\dagger.
Hermitian matrices can be understood as the complex extension of real symmetric matrices.
Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share with real symmetric matrices the property of always having real eigenvalues.
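Both the defining symmetry and the real-spectrum property are easy to check in numpy (a sketch with an arbitrary example matrix):

    import numpy as np

    # An arbitrary Hermitian example: real diagonal, conjugate-symmetric off-diagonal.
    A = np.array([[2.0, 1.0 - 1.0j],
                  [1.0 + 1.0j, 3.0]])

    # Defining property: a_ij = conj(a_ji), i.e. A = A†.
    print(np.allclose(A, A.conj().T))  # True

    # Hermite's observation: the eigenvalues are all real.
    print(np.linalg.eigvalsh(A))       # [1. 4.], all real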

Inverse iteration

In numerical analysis, inverse iteration is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method and is also known as the inverse power method.
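A minimal numpy sketch of the idea (the matrix, shift, and iteration count are arbitrary illustrative choices): repeatedly solve (A - mu I) w = v and renormalize; the iterates converge to the eigenvector whose eigenvalue is closest to the shift mu:

    import numpy as np

    def inverse_iteration(A, mu, num_iters=50):
        """Approximate the eigenvector of A whose eigenvalue is closest to mu."""
        v = np.random.randn(A.shape[0])
        v /= np.linalg.norm(v)
        shifted = A - mu * np.eye(A.shape[0])
        for _ in range(num_iters):
            w = np.linalg.solve(shifted, v)  # one application of (A - mu I)^{-1}
            v = w / np.linalg.norm(w)
        return v

    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    v = inverse_iteration(A, mu=4.5)  # shift near the larger eigenvalue, (7 + sqrt(5))/2
    print(v, v @ A @ v)               # eigenvector estimate and its Rayleigh quotient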

Rayleigh quotient iteration

Rayleigh quotient iteration is an eigenvalue algorithm which extends the idea of the inverse iteration by using the Rayleigh quotient to obtain increasingly accurate eigenvalue estimates.
Rayleigh quotient iteration is an iterative method, that is, it must be repeated until it converges to an answer (this is true for all eigenvalue algorithms). Fortunately, very rapid convergence is guaranteed and no more than a few iterations are needed in practice. The Rayleigh quotient iteration algorithm converges cubically, given an initial vector that is sufficiently close to an eigenvector of the matrix that is being analyzed.
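A numpy sketch (the symmetric example matrix and starting vector are arbitrary): each step refreshes the shift with the current Rayleigh quotient before solving, which is what yields the cubic convergence:

    import numpy as np

    def rayleigh_quotient_iteration(A, v0, num_iters=10):
        """Inverse iteration whose shift is updated to the Rayleigh quotient each step."""
        v = v0 / np.linalg.norm(v0)
        for _ in range(num_iters):
            mu = v @ A @ v                # Rayleigh quotient: the eigenvalue estimate
            try:
                w = np.linalg.solve(A - mu * np.eye(A.shape[0]), v)
            except np.linalg.LinAlgError:
                break                     # shift equals an eigenvalue: already converged
            v = w / np.linalg.norm(w)
        return mu, v

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    mu, v = rayleigh_quotient_iteration(A, np.array([1.0, 0.1]))
    print(mu, v)                          # the eigenvalues here are 1 and 3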

Power iteration

In mathematics, the power iteration is an eigenvalue algorithm: given a matrix A, the algorithm will produce a number λ (the eigenvalue) and a nonzero vector v (the eigenvector), such that Av = λv.
The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when A is a very large sparse matrix. However, it will find only one eigenvalue (the one with the greatest absolute value) and it may converge only slowly.
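A numpy sketch (arbitrary matrix and iteration count): repeatedly apply A and renormalize; since each step needs only a matrix-vector product, the method works even when A is too large to factor:

    import numpy as np

    def power_iteration(A, num_iters=100):
        """Estimate the dominant eigenpair of A using only matrix-vector products."""
        v = np.random.randn(A.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(num_iters):
            w = A @ v                     # the only use of A: a mat-vec product
            v = w / np.linalg.norm(w)
        lam = v @ A @ v                   # Rayleigh quotient for the eigenvalue
        return lam, v

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    lam, v = power_iteration(A)
    print(lam, v)                         # approximates the largest-magnitude eigenvalue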