Factorize a positive semi-definite matrix A into a lower unitriangular matrix, L, and a set of diagonals, d, such that:

L * diag(d) * L.' == A
Additionally, this function can output the number of zero eigenvalues.
No check is performed to ensure that the input matrix is positive semi-definite.
[L, d] = ldfactor(A);
Z = ldfactor(A);
For the compact form, Z = ldfactor(A), d will be on the diagonal of Z, the sub-diagonal of Z will contain the sub-diagonal of L, and the super-diagonal of Z will be garbage. To recover L and d from Z:
d = diag(Z);
n = size(Z, 1);
L = tril(Z);
L(1:n+1:end) = 1;
An optional tolerance can be provided. When diagonals fall below this value, they are considered zero and are not used for division. This is intended as a safeguard against very small numerical errors.
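For example, a minimal sketch of using the tolerance (assuming it is passed as the second argument, as the input list below suggests; the value 1e-12 is only illustrative):

tol = 1e-12;                    % hypothetical tolerance value
[L, d, nz] = ldfactor(A, tol);  % assumed calling form with the optional tolerance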
A | Positive semi-definite matrix |
---|---|
tol | Division tolerance; when divisors are below this value, no division will actually take place. |
L | Lower unitriangular matrix |
---|---|
d | Vector of diagonals |
nz | Number of zero diagonals found during factorization |
A = randcov(3); % Random 3-by-3 covariance (positive semi-definite) matrix.
[L, d] = ldfactor(A)
L =
1.0000 0 0
-0.4576 1.0000 0
-0.4076 -0.4072 1.0000
d =
0.5052
0.4066
0.5989
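As a quick check, the factors reconstruct A via the identity above (output omitted here):

L * diag(d) * L.' % Reproduces A to within roundoff.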
We can re-use the matrix from above to show the compact form.
% Run the "compact" decomposition.
Z = ldfactor(A)
Z =
0.5052 -0.2311 -0.2059
-0.4576 0.4066 -0.0714
-0.4076 -0.4072 0.5989
Show that we can extract the same factors from it.
d = diag(Z) % Extract the diagonals.
n = size(Z, 1);
L = tril(Z); % Zero-out the superdiagonal.
L(1:n+1:end) = 1 % Put 1s on the diagonal.
d =
0.5052
0.4066
0.5989
L =
1.0000 0 0
-0.4576 1.0000 0
-0.4076 -0.4072 1.0000
Suppose that, as some algorithm is running, a nominally positive semi-definite matrix ends up with a zero or very slightly negative eigenvalue due to roundoff error or some such thing. This would normally cause an error in LDL decomposition, but the tolerance input protects against this. We'll make a matrix with a slightly negative eigenvalue and show that the diagonals returned from ldfactor are all greater than or equal to 0.
Make a random rotation matrix to hide the underlying eigenvalues from the LD algorithm.
theta = 2*pi*rand();
R = [ cos(theta), sin(theta); ...
-sin(theta), cos(theta)];
Create a matrix with a problematic eigenvalue.
A = R * [-eps 0; 0 1] * R.'
A =
0.0293 0.1687
0.1687 0.9707
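If desired, we can confirm the problematic eigenvalue directly (output omitted; by construction one eigenvalue is near -eps and the other is near 1, though roundoff may perturb the tiny one):

eig(A) % One eigenvalue is approximately -eps, the other approximately 1.
</eig>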
Factor it to see that the slightly negative eigenvalue is dropped. Also, return the number of zeros found.
[L, d, nz] = ldfactor(A)
L =
1.0000 0
5.7537 1.0000
d =
0.0293
0
nz =
1
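This count agrees with the number of zero entries in d, assuming diagonals flagged as zero are stored as exact zeros, as the printout above suggests:

nnz(d == 0) % 1, matching nz.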
How close is this to the original A matrix?
L * diag(d) * L.'
ans =
0.0293 0.1687
0.1687 0.9707
What's the actual difference?
A - L * diag(d) * L.'
ans =
1.0e-14 *
0 0
0 -0.7550
It is negligible, on the scale of the error we intentionally introduced in the construction of A.
When these diagonals will be used later in an algorithm, for instance in a filter, it's important that they never go negative.
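A minimal sanity check before passing d downstream might look like this:

assert(all(d >= 0), 'Expected all diagonals to be nonnegative.'); % Guard against negative diagonals.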
Bierman, Gerald J. Factorization Methods for Discrete Sequential Estimation. Mineola, New York: Dover Publications, Inc., 1977. Print. Pages 42-43.
ld2mat, udfactor, ud2mat, sqrtpsdm
*kf
v1.0.3