Decomposition

Matrix decomposition is a key focus area in linear algebra. The concept involves rewriting a matrix as a product of matrices, with the goal of improving computational efficiency. Gaussian elimination provides one method of matrix decomposition. While this particular decomposition may not offer significant computational advantages, it has paved the way for more advanced decomposition techniques.

Here’s an example illustrating how decomposition can reduce arithmetic complexity. Consider an $n \times n$ matrix $M$. It requires at least $n^2$ steps to find $M^2$, since the product has $n^2$ entries to fill in. However, if we can write $M$ as $ADA^{-1}$ where $D$ is a diagonal matrix, then we have

$$M^2 = (ADA^{-1})(ADA^{-1}) = AD\,I\,DA^{-1} = AD^2A^{-1}.$$

Compared to $M^2$, $D^2$ requires only $n$ steps to determine: simply square each diagonal entry. This reduces the cost of computing powers of $M$ dramatically.
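To make this concrete, here is a minimal sketch in Python (using NumPy; the matrix and the exponent are arbitrary example values) of computing a power of $M$ through a diagonalization $M = ADA^{-1}$:

```python
import numpy as np

# Made-up example matrix (symmetric, hence guaranteed diagonalizable).
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Eigendecomposition M = A D A^{-1}: d holds the eigenvalues,
# the columns of A are the corresponding eigenvectors.
d, A = np.linalg.eig(M)

k = 10
# The diagonal factor is cheap: only n entries need to be raised to the k-th power.
Dk = np.diag(d ** k)
Mk = A @ Dk @ np.linalg.inv(A)

# Sanity check against repeated matrix multiplication.
print(np.allclose(Mk, np.linalg.matrix_power(M, k)))  # True
```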

LU and LDU decomposition

  • L: a lower triangular matrix

  • D: a diagonal matrix

  • U: an upper triangular matrix

As the name suggests, LU decomposition expresses a matrix as the product of a lower triangular matrix and an upper triangular matrix. LDU decomposition takes this a step further by isolating the row scaling into a diagonal matrix placed between the lower and upper triangular factors. Let’s begin with a square matrix $A$ (LU decomposition doesn’t require $A$ to be square; this assumption merely simplifies our discussion). Regardless of whether $A$ is invertible or not, Gaussian elimination can always produce an upper triangular matrix $U$ from $A$. Assuming no row interchange is necessary to obtain $U$, the elimination steps $e_1, \ldots, e_n$ with corresponding elementary matrices $E_1, \ldots, E_n$ (all lower triangular) give $E_n \cdots E_1 A = U$. It follows that $A = E_1^{-1} \cdots E_n^{-1} U$. Now, we need several lemmas.

Lemma. The product of two lower (resp. upper) triangular matrices is still lower (resp. upper) triangular.

proof. Let $L_1$ and $L_2$ be lower triangular matrices. We only need to show that every $(i,j)$-entry of $L_1 L_2$ with $i < j$ is zero. The $(i,j)$-entry is the dot product of the $i$-th row of $L_1$, denoted by $v_1$, and the $j$-th column of $L_2$, denoted by $v_2$. Note that

$$v_1 = [\ast, \ldots, \ast, 0, \ldots, 0],$$

where only the first $i$ components can be nonzero, and

$$v_2 = [0, \ldots, 0, \ast, \ldots, \ast]^{T},$$

where the first $j-1$ components are zero. Because $j > i$, every position in the dot product meets at least one zero: for $k \le i$ the $k$-th component of $v_2$ is zero (since $k \le i < j$), and for $k > i$ the $k$-th component of $v_1$ is zero. Hence $v_1 \cdot v_2 = 0 + \cdots + 0 = 0$. This is the desired result.
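As a quick sanity check of this lemma, here is a small NumPy sketch with arbitrary random lower triangular matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary 4x4 lower triangular matrices.
L1 = np.tril(rng.standard_normal((4, 4)))
L2 = np.tril(rng.standard_normal((4, 4)))

product = L1 @ L2
# All entries strictly above the diagonal should be zero.
print(np.allclose(np.triu(product, k=1), 0))  # True
```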


Lemma. The inverse matrix of a lower triangular matrix is lower triangular. Similarly, the inverse matrix of an upper triangular matrix is upper triangular.

proof. (see https://math.stackexchange.com/questions/245871/the-inverse-of-a-lower-triangular-matrix-is-lower-triangular)
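A similar spot-check, again with an arbitrary example matrix, illustrates the second lemma:

```python
import numpy as np

rng = np.random.default_rng(1)

# An invertible lower triangular matrix: random below the diagonal, ones on it.
L = np.tril(rng.standard_normal((4, 4)), k=-1) + np.eye(4)

L_inv = np.linalg.inv(L)
# The inverse should again be lower triangular.
print(np.allclose(np.triu(L_inv, k=1), 0))  # True
```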


Thus, we can conclude that $E_1^{-1} \cdots E_n^{-1}$ is a lower triangular matrix $L$ (to extend the first lemma to a product of any number of matrices, we use mathematical induction). Consequently, we have $A = E_1^{-1} \cdots E_n^{-1} U = LU$. If the entries on the diagonal of $U$ are not all 1 or 0, we can multiply by a diagonal matrix $D$ such that $DU = U'$, i.e. $U = D^{-1}U'$, where $U'$ is of the desired form. Therefore, $A = LD^{-1}U'$ is an LDU decomposition.
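The construction above can be sketched directly in code. Below is a minimal Python/NumPy sketch (the function names `lu_no_pivot` and `ldu_no_pivot` are my own, the example matrix is arbitrary, and it assumes, as in the discussion, that no row interchanges are needed and that the pivots used for the diagonal split are nonzero; `U1` plays the role of $U'$):

```python
import numpy as np

def lu_no_pivot(A):
    """LU decomposition by Gaussian elimination, assuming no row swaps are needed."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            # Multiplier of the row operation: -m * (row k) + (row i) -> (row i).
            m = U[i, k] / U[k, k]
            U[i, :] -= m * U[k, :]
            # The inverse of that elementary matrix contributes +m to L.
            L[i, k] = m
    return L, U

def ldu_no_pivot(A):
    """Split the U factor as U = D @ U1, where U1 has unit diagonal."""
    L, U = lu_no_pivot(A)
    d = np.diag(U)
    D = np.diag(d)
    U1 = U / d[:, None]   # divide each row of U by its pivot
    return L, D, U1

# Arbitrary example matrix that happens to need no row interchanges.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, D, U1 = ldu_no_pivot(A)
print(np.allclose(L @ D @ U1, A))  # True
```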

Example. Let’s consider

$$M = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.$$

The elementary row operation $-3 \times (1) + (2) \to (2)$, i.e. adding $-3$ times row 1 to row 2, makes the matrix upper triangular. Its corresponding elementary matrix is $E = \begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix}$, so we have

$$\begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}=\begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix}.$$

The inverse of $E$ is $\begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}$, so we find the LU decomposition of $M$. That is,

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}=\begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix}.$$

If we want the entries on the diagonal of the upper triangular matrix to be either 0 or 1, we can further decompose the upper triangular matrix as

$$\begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix}=\begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}.$$

This demonstrates an LDU decomposition of the original matrix.
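As a final check, we can confirm the worked example numerically (using NumPy; `U1` denotes the unit-diagonal upper factor found above):

```python
import numpy as np

M = np.array([[1, 2], [3, 4]])

L  = np.array([[1, 0], [3, 1]])     # lower triangular factor
D  = np.array([[1, 0], [0, -2]])    # diagonal factor
U1 = np.array([[1, 2], [0, 1]])     # upper triangular factor with unit diagonal

U = D @ U1                          # the upper factor of the plain LU decomposition
print(np.allclose(L @ U, M))        # True: M = L U
print(np.allclose(L @ D @ U1, M))   # True: M = L D U1
```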

English booster

decomposition (n.) / decompose (v.): 分解