Not only is matrix theory significant in a wide range of fields (mathematical economics, quantum physics, geophysics, electrical network synthesis, crystallography, and structural engineering, among others), but with the vast proliferation of digital computers, knowledge of matrix theory is a must for every modern engineer, mathematician, and scientist. Matrices represent linear transformations from a finite set of numbers to another finite set of numbers.
Since many important problems are linear, and since digital computers with finite memory manipulate only finite sets of numbers, the solution of linear problems by digital computers usually involves matrices. Developed from the author's course on matrix theory at the California
Institute of Technology, the book begins with a concise presentation of the theory of determinants, continues with a discussion of classical linear algebra, and includes an optional chapter on the use of matrices to solve systems of linear equations. Subsequent chapters cover triangularizations of Hermitian and non-Hermitian matrices, and a separate chapter presents a proof of the difficult and important matrix theorem of Jordan. The book concludes with discussions of variational principles and perturbation theory of matrices, matrix numerical analysis, and an introduction to the subject of linear computations.
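The use of matrices to solve systems of linear equations, mentioned above, is easy to illustrate in practice. The short sketch below is a hypothetical example in Python with NumPy, not code from the book: it encodes a small linear problem as a matrix A and right-hand side b, and solves it numerically, which is exactly the kind of finite computation a digital computer performs.

    import numpy as np

    # A linear problem stated as a matrix equation A x = b:
    # the matrix A maps one finite set of numbers (x) to another (b).
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    # Solve the system numerically.
    x = np.linalg.solve(A, b)
    print(x)                          # -> [0.8 1.4]
    print(np.allclose(A @ x, b))      # the solution satisfies the original equations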
The book is designed to meet many different needs, and because it is mathematically rigorous, it may be used by students of pure and applied mathematics. Since it is oriented towards applications, it is valuable to students of engineering, science, and the social sciences. And because it contains the basic preparation in matrix theory required for numerical analysis, it can be used by students whose main interest is computers. The book assumes very little mathematical preparation, and except for the single section on the continuous dependence of eigenvalues on matrices, a knowledge of elementary algebra and calculus is sufficient.
When first published in 2005, Matrix Mathematics quickly became the essential reference book for users of matrices in all branches of engineering, science, and applied mathematics. In this fully updated and expanded edition, the author brings together the latest results on matrix theory to make this the most complete, current, and easy-to-use book on matrices.
Each chapter describes relevant background theory followed by specialized results. Hundreds of identities, inequalities, and matrix facts are stated clearly and rigorously with cross references, citations to the literature, and illuminating remarks. Beginning with preliminaries on sets, functions, and relations, Matrix Mathematics covers all of the major topics in matrix theory, including matrix transformations; polynomial matrices; matrix decompositions; generalized inverses; Kronecker and Schur algebra; positive-semidefinite matrices; vector and matrix norms; the matrix exponential and stability theory; and linear systems and control theory. Also included are a detailed list of symbols, a summary of notation and conventions, an extensive bibliography and author index with page references, and an exhaustive subject index. This significantly expanded edition of Matrix Mathematics features a wealth of new material on graphs, scalar identities and inequalities, alternative partial orderings, matrix pencils, finite groups, zeros of multivariable transfer functions, roots of polynomials, convex functions, and matrix norms.

Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction, illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary for algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
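As a concrete illustration of the kind of problem this second book addresses, the sketch below (a hypothetical example in Python with NumPy, not code from the book) applies Riemannian steepest descent to the Rayleigh quotient on the unit sphere; the minimizer is an eigenvector for the smallest eigenvalue of a symmetric matrix, one of the eigenspace problems mentioned above. The function name rayleigh_steepest_descent and the step-size and iteration-count parameters are illustrative choices. The projection of the Euclidean gradient onto the tangent space and the final normalization play the roles of the Riemannian gradient and the retraction in such methods.

    import numpy as np

    def rayleigh_steepest_descent(A, x0, step=0.1, iters=500):
        # Minimize f(x) = x^T A x over the unit sphere by Riemannian
        # steepest descent with a simple normalization retraction.
        x = x0 / np.linalg.norm(x0)
        for _ in range(iters):
            egrad = 2.0 * A @ x                 # Euclidean gradient of x^T A x
            rgrad = egrad - (x @ egrad) * x     # project onto the tangent space at x
            x = x - step * rgrad                # move along the negative gradient
            x = x / np.linalg.norm(x)           # retract back onto the sphere
        return x

    # Example: recover an eigenvector for the smallest eigenvalue of A.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    A = (A + A.T) / 2                           # symmetric test matrix
    x = rayleigh_steepest_descent(A, rng.standard_normal(5))
    print(x @ A @ x, np.linalg.eigvalsh(A)[0])  # these two values should agree closely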