
Society for Industrial and Applied Mathematics, 1998. — 479 pp.

This book, Basic Decompositions, is the first volume in a projected five-volume series entitled Matrix Algorithms. The other four volumes will treat eigensystems, iterative methods for linear systems, sparse direct methods, and special topics, including fast algorithms for structured matrices.

My intended audience is the nonspecialist whose needs cannot be satisfied by black boxes. It seems to me that these people will be chiefly interested in the methods themselves—how they are derived and how they can be adapted to particular problems. Consequently, the focus of the series is on algorithms, with such topics as rounding error analysis and perturbation theory introduced impromptu as needed. My aim is to bring the reader to the point where he or she can go to the research literature to augment what is in the series.

The series is self-contained. The reader is assumed to have a knowledge of elementary analysis and linear algebra and a reasonable amount of programming experience— about what you would expect from a beginning graduate engineer or an undergraduate in an honors program. Although strictly speaking the individual volumes are not textbooks, they are intended to teach, and my guiding principle has been that if something is worth explaining it is worth explaining fully. This has necessarily restricted the scope of the series, but I hope the selection of topics will give the reader a sound basis for further study.

The focus of this and part of the next volume will be the computation of matrix decompositions—that is, the factorization of matrices into products of simpler ones. This decompositional approach to matrix computations is relatively new: it achieved its definitive form in the early 1960s, thanks to the pioneering work of Alston Householder and James Wilkinson. Before then, matrix algorithms were addressed to specific problems—the solution of linear systems, for example—and were presented at the scalar level in computational tableaus. The decompositional approach has two advantages. First, by working at the matrix level it facilitates the derivation and analysis of matrix algorithms. Second, by deemphasizing specific problems, the approach turns the decomposition into a computational platform from which a variety of problems can be solved. Thus the initial cost of computing a decomposition can pay for itself many times over.
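The "computational platform" idea is easy to see in a few lines of NumPy/SciPy (a sketch of mine, not from the book): pay the O(n^3) cost of the factorization once, then reuse it to solve many systems at O(n^2) each.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

lu, piv = lu_factor(A)        # O(n^3): compute the LU decomposition once

for _ in range(3):            # each subsequent solve costs only O(n^2)
    b = rng.standard_normal(4)
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)
```

The same factorization also serves for determinants, inverses, and transposed systems—the variety of problems the paragraph alludes to.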

In this volume we will be chiefly concerned with the LU and the QR decompositions along with certain two-sided generalizations. The singular value decomposition also plays a large role, although its actual computation will be treated in the second volume of this series. The first two chapters set the stage not only for the present volume but for the whole series. The first is devoted to the mathematical background: matrices, vectors, and linear algebra and analysis. The second chapter discusses the realities of matrix computations on computers.
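A tiny illustration of those realities (mine, not the book's): in IEEE double precision, increments below the spacing of the floating-point grid are simply lost, and subtracting nearly equal numbers amplifies relative error.

```python
import numpy as np

eps = np.finfo(np.float64).eps      # spacing of doubles near 1.0, about 2.2e-16
assert 1.0 + eps / 2 == 1.0         # increments below the spacing vanish

# Cancellation: subtracting nearly equal numbers destroys relative accuracy.
x = 1.0 + 1e-12
cancel = x - 1.0                    # the exact value would be 1e-12
rel_err = abs(cancel - 1e-12) / 1e-12
print(rel_err)                      # far larger than eps
```

Effects like these are what the perturbation and rounding-error analyses in the later chapters quantify.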

The third chapter is devoted to the LU decomposition—the result of Gaussian elimination. This extraordinarily flexible algorithm can be implemented in many different ways, and the resulting decomposition has innumerable applications. Unfortunately, this flexibility has a price: Gaussian elimination often quivers on the edge of instability. The perturbation theory and rounding-error analysis required to understand why the algorithm works so well (and our understanding is still imperfect) is presented in the last two sections of the chapter.
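In the spirit of the book's illustrative pseudocode, here is a minimal Python sketch (mine, not the book's) of Gaussian elimination with partial pivoting—the pivoting being the standard defense against the instability mentioned above—producing P, L, U with P A = L U.

```python
import numpy as np

def lu_partial_pivot(A):
    """Return P, L, U with P @ A = L @ U (Gaussian elimination, partial pivoting)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    perm = np.arange(n)
    L = np.eye(n)
    for k in range(n - 1):
        # Pivot: bring the largest remaining entry in column k to the diagonal.
        p = k + int(np.argmax(np.abs(U[k:, k])))
        if p != k:
            U[[k, p], :] = U[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
            perm[[k, p]] = perm[[p, k]]
        # Eliminate below the pivot, storing the multipliers in L.
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return np.eye(n)[perm], L, np.triu(U)

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu_partial_pivot(A)
```

As the preface warns of pseudocode, this omits error checks (e.g., for a singular matrix) and is written for clarity, not efficiency.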

The fourth chapter treats the QR decomposition—the factorization of a matrix into the product of an orthogonal matrix and an upper triangular matrix. Unlike the LU decomposition, the QR decomposition can be computed two ways: by the Gram-Schmidt algorithm, which is old, and by the method of orthogonal triangularization, which is new. The principal application of the decomposition is the solution of least squares problems, which is treated in the second section of the chapter. The last section treats the updating problem—the problem of recomputing a decomposition when the original matrix has been altered. The focus here is on the QR decomposition, although other updating algorithms are briefly considered.
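The first of those two routes can be sketched in a few lines (my sketch, not the book's): the modified Gram-Schmidt algorithm, which orthogonalizes each remaining column against the new q_k as soon as it is formed.

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt: A = Q @ R, Q with orthonormal columns.
    Assumes A has full column rank."""
    W = A.astype(float).copy()
    m, n = W.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(W[:, k])
        Q[:, k] = W[:, k] / R[k, k]
        # Orthogonalize the remaining columns against q_k immediately;
        # this ordering is what makes the method "modified" (and more stable
        # than the classical version).
        R[k, k+1:] = Q[:, k] @ W[:, k+1:]
        W[:, k+1:] -= np.outer(Q[:, k], R[k, k+1:])
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
Q, R = mgs_qr(A)
```

The second route, orthogonal triangularization by Householder reflections, is what library routines such as `np.linalg.qr` use underneath.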

The last chapter is devoted to decompositions that can reveal the rank of a matrix and produce approximations of lower rank. The issues stand out most clearly when the decomposition in question is the singular value decomposition, which is treated in the first section. The second treats the pivoted QR decomposition and a new extension, the QLP decomposition. The third section treats the problem of estimating the norms of matrices and their inverses—the so-called problem of condition estimation. The estimators are used in the last section, which treats rank-revealing URV and ULV decompositions. These decompositions in some sense lie between the pivoted QR decomposition and the singular value decomposition and, unlike either, can be updated.

Many methods treated in this volume are summarized by displays of pseudocode (see the list of algorithms following the table of contents). These summaries are for purposes of illustration and should not be regarded as finished implementations. In the first place, they often leave out error checks that would clutter the presentation. Moreover, it is difficult to verify the correctness of algorithms written in pseudocode. In most cases, I have checked the algorithms against MATLAB implementations.

Matrices, Algebra, and Analysis

Matrices and Machines

Gaussian Elimination

The QR Decomposition and Least Squares

Rank-Reducing Decompositions
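How a decomposition "reveals" rank and yields lower-rank approximations is easiest to see with the SVD; the following is a small sketch of mine, not an excerpt from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 6x6 matrix of exact rank 2 (M[i, j] = (i+1) + (j+1)), slightly perturbed.
a = np.arange(1.0, 7.0)
e = np.ones(6)
A = np.outer(a, e) + np.outer(e, a) + 1e-10 * rng.standard_normal((6, 6))

U, s, Vt = np.linalg.svd(A)
# The singular values drop sharply after the second: a tolerance-based
# rank decision.
numerical_rank = int(np.sum(s > 1e-8 * s[0]))

# Truncated SVD: the closest rank-2 matrix in the spectral norm.
A2 = (U[:, :2] * s[:2]) @ Vt[:2, :]
```

The pivoted QR, QLP, URV, and ULV decompositions treated in the chapter approximate this gap-revealing behavior at lower cost, and the last two can be updated as the matrix changes.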




Springer, 2009. — 717 pp.
This book began as a revision of Elements of Computational Statistics, published by Springer in 2002. That book covered computationally intensive statistical methods from the perspective of statistical applications, rather than from the standpoint of statistical computing.
Most of the students in my courses in computational statistics were...


Springer, 2007. — 530 p. — (Springer Texts in Statistics). — ISBN 0387708723; ISBN 978-0-387-70873-7. Matrix algebra is one of the most important areas of mathematics for data analysis and for statistical theory. The first part of this book presents the relevant aspects of the theory of matrix algebra for applications in statistics. This part begins with the fundamental concepts of...


Wiley, 2011. — 743 p.
A comprehensive overview of Monte Carlo simulation that explores the latest topics, techniques, and real-world applications. More and more of today's numerical problems found in engineering and finance are solved through Monte Carlo methods. The heightened popularity of these methods and their continuing development makes it important for researchers to...


World Scientific, 2010. — 598 pp. This book is unique in covering the whole of a triptych consisting of algebraic theory, algorithmic problems, and numerical applications, all united by the essential use of matrix methods. This was the spirit of the 2nd International Conference on Matrix Methods and Operator Equations (23-27 July 2007, Moscow) hosted by the Institute...


Society for Industrial and Applied Mathematics, 2003. — 446 pp.
Perhaps the most widely known example of fast algorithms is the fast Fourier transform (FFT) algorithm. Its importance is widely acknowledged and nicely described in numerous papers and monographs, e.g., as follows: "The fast Fourier transform (FFT) is one of the truly great computational developments of this...


Society for Industrial and Applied Mathematics, 2001. — 490 pp.
This book, Eigensystems, is the second volume in a projected five-volume series entitled Matrix Algorithms. The first volume treated basic decompositions. The three following this volume will treat iterative methods for linear systems, sparse direct methods, and special topics, including fast algorithms for...
