
A Decomposition Theorem for Matrices

Published online by Cambridge University Press:  20 November 2018

Martin H. Pearl*
Affiliation:
The University of Maryland, College Park, Maryland

Extract


According to a classical theorem originally proved by L. Autonne (1; 3) in 1915, every m × n matrix A of rank r with entries from the complex field can be decomposed as

$$A = U_1 D U_2,$$

where U<sub>1</sub> and U<sub>2</sub> are unitary matrices of order m and n respectively and D is an m × n matrix having the form

$$D = \begin{pmatrix} \Delta & 0 \\ 0 & 0 \end{pmatrix} \tag{1}$$

where Δ is a non-singular diagonal matrix of rank r. If r = m, then the row of zero matrices in (1) does not actually appear; if r = n, then the column of zero matrices in (1) does not appear. The main purpose of this paper is to give a necessary and sufficient condition under which both U<sub>1</sub> and U<sub>2</sub> may be chosen to be real orthogonal matrices.
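Autonne's decomposition can be illustrated numerically via the singular value decomposition, which is the special case in which the diagonal of Δ consists of positive real singular values. A minimal sketch using NumPy (the matrix and tolerance are illustrative choices, not from the paper):

```python
import numpy as np

# A random complex m × n matrix; with probability 1 it has full rank.
rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))

# np.linalg.svd returns A = U1 @ diag(s) @ U2 with U1 (m × m) and
# U2 (n × n) unitary and s the nonnegative singular values.
U1, s, U2 = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))  # numerical rank

# Build the m × n matrix D of form (1): Δ in the top-left r × r block,
# zero blocks elsewhere.
D = np.zeros((m, n), dtype=complex)
D[:r, :r] = np.diag(s[:r])

assert np.allclose(U1 @ D @ U2, A)                # A = U1 D U2
assert np.allclose(U1.conj().T @ U1, np.eye(m))   # U1 is unitary
assert np.allclose(U2.conj().T @ U2, np.eye(n))   # U2 is unitary
```

For a real matrix A, the real SVD already yields real orthogonal factors; the question the paper addresses is when a *complex* A admits such a decomposition with U<sub>1</sub> and U<sub>2</sub> real orthogonal.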

Type
Research Article
Copyright
Copyright © Canadian Mathematical Society 1967

References

1. Autonne, L., Sur les matrices hypohermitiennes et sur les matrices unitaires, Ann. Univ. Lyon (2), 38 (1915), 1–77.
2. Bellman, R., An introduction to matrix analysis (New York, 1960).
3. Penrose, R., A generalized inverse for matrices, Proc. Cambridge Philos. Soc., 52 (1955), 406–413.
4. Schwerdtfeger, H., Introduction to linear algebra and the theory of matrices (Groningen, 1950).