- DjVu file
- 3.90 MB

- Added by user Shushimora
- Edited

Society for Industrial and Applied Mathematics, 2003. — 446 pp.

Perhaps the most widely known example of fast algorithms is the fast Fourier transform (FFT) algorithm. Its importance is widely acknowledged and nicely described in numerous papers and monographs, e.g., as follows: "The fast Fourier transform (FFT) is one of the truly great computational developments of this century. It has changed the face of science and engineering so that it is not an exaggeration to say that life as we know it would be very different without FFT" (Charles Van Loan, Computational Frameworks for the Fast Fourier Transform, SIAM Publications, 1992). There are many different mathematical languages which can be used to derive and describe the FFT, and the "structured matrices language" is one of them, yielding the following interpretation. Though the usual matrix-vector multiplication uses n(2n-1) arithmetic operations, the special structure of the discrete Fourier transform matrix allows us to reduce the latter complexity to the nearly linear cost of O(n log n) operations. The practical importance of such a dramatic speed-up is impossible to overestimate; even for moderately sized problems one can compute the result hundreds of times faster.
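The speed-up described above can be checked directly: the dense DFT matrix-vector product and the FFT compute the same vector, the first in O(n^2) operations and the second in O(n log n). A minimal sketch in Python with NumPy (not from the book; `numpy.fft.fft` is assumed as the FFT implementation):

```python
import numpy as np

n = 1024
x = np.random.rand(n)

# Dense DFT matrix F[j, k] = exp(-2*pi*i*j*k / n): O(n^2) matrix-vector product
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n)
slow = F @ x

# Same transform via the FFT: O(n log n)
fast = np.fft.fft(x)

assert np.allclose(slow, fast)
```

For n = 1024 the dense product already costs roughly two million operations against about ten thousand for the FFT, which is the "hundreds of times faster" the preface refers to.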

This is a model example showing why reducing the computational burden via structure exploitation is an important issue in many applied areas. Thus, it is not surprising that in recent years the design of fast algorithms for structured matrices has become an increasingly important activity in a diverse variety of branches of the exact sciences. Unfortunately, until recently there was not much interaction between the different branches: there were no comprehensive meetings bringing together "all interested parties," and no cross-disciplinary publishing projects. This situation had several disadvantages.

First, there clearly was a certain parallelism, and several algorithms have independently been rediscovered in different areas. For example, the Chebyshev continued fraction algorithm for interpolation, the Stieltjes procedure for generating orthogonal polynomials, the Lanczos algorithm for computing the eigenvalues of a symmetric matrix, and the Berlekamp-Massey algorithm for decoding BCH codes are all closely related. Another example: the classical Nevanlinna algorithm for rational passive interpolation and the Darlington procedure for passive network synthesis are variations on the same theme.

The second disadvantage of the lack of cross-disciplinary interaction is that it was not clear that the research efforts in different branches are part of what one can call a "full research cycle," starting from an actual application, through developing deep theories, to the design of efficient algorithms and their implementation. Researchers in different branches often had somewhat narrower focuses. Electrical engineers used structured matrices to efficiently solve applied problems. Mathematicians exploited the structure to obtain elegant solutions to various fundamental problems. Computer scientists utilized the structure to speed up related algorithms and to study the corresponding complexity problems. Numerical analysts took advantage of the structure to improve accuracy in many cases. In recent years this unfortunate situation has changed. There have been a number of cross-disciplinary conferences: in Santa Barbara (USA, Aug. 1996), Cortona (Italy, Sept. 1996 and Sept. 2000), Boulder (USA, July 1999), Chemnitz (Germany, Jan. 2000), and South Hadley (USA, Aug. 2001). In fact, it was the "cross-fertilization" atmosphere of the South Hadley meeting that suggested the idea of pursuing this publishing project. We hope it demonstrates the following two points. First, the approaches, ideas, and techniques of engineers, mathematicians, and numerical analysts nicely complement each other, and despite their differences in techniques and agendas they can be considered important parts of a joint research effort.

Second, the theory of structured matrices and the design of fast algorithms for them seem well positioned to bridge several diverse fundamental and applied areas. The volume contains twenty-two survey and research papers devoted to a variety of theoretical and practical aspects of the design of fast algorithms for structured matrices and related issues. It contains a number of papers on direct fast algorithms as well as on iterative methods. The convergence analysis of the latter requires studying spectral properties of structured matrices, and the reader will find here several papers containing various affirmative and negative results in this direction. The theory of rational interpolation is one of the excellent sources providing intuition and methods for the design of fast algorithms, and this volume contains several computational and theoretical papers on the topic. There are also several papers on new applications of structured matrices, e.g., the design of fast decoding algorithms, computing state-space realizations, relations to Lie algebras, unconstrained optimization, and the solution of matrix equations.
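As one illustration of the kind of structure exploitation these papers study (a standard textbook technique, not taken from any particular paper in the volume), a Toeplitz matrix-vector product can be reduced from O(n^2) to O(n log n) by embedding the Toeplitz matrix in a circulant of twice the size, which the FFT diagonalizes. The helper name `toeplitz_matvec` below is ours; a sketch in Python with NumPy:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Toeplitz matrix-vector product in O(n log n) via circulant embedding.

    c is the first column and r the first row (with c[0] == r[0]).
    The n-by-n Toeplitz matrix is embedded in a 2n-by-2n circulant,
    and circulant multiplication is a circular convolution done by FFT.
    """
    n = len(x)
    # First column of a 2n-circulant whose top-left n-by-n block is T
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against the dense O(n^2) product
rng = np.random.default_rng(0)
n = 8
c = rng.random(n)                                # first column
r = np.concatenate([c[:1], rng.random(n - 1)])   # first row, r[0] == c[0]
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = rng.random(n)
assert np.allclose(T @ x, toeplitz_matvec(c, r, x))
```

The same circulant-embedding idea underlies many of the direct fast algorithms surveyed in the volume, since Toeplitz, Hankel, and Toeplitz-plus-Hankel structures all reduce multiplication to convolutions.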

We hope that the reader will enjoy a plethora of different problems, different focuses, and different methods that all contribute to one unified theory of fast algorithms for structured matrices and related theoretical issues.

Pivoting for Structured Matrices and Rational Tangential Interpolation.

Inversion of Toeplitz-Plus-Hankel Matrices with Arbitrary Rank Profile.

A Lanczos-type Algorithm for the QR Factorization of Cauchy-like Matrices.

Fast and Stable Algorithms for Reducing Diagonal Plus Semiseparable Matrices to Tridiagonal and Bidiagonal Form.

A Comrade-Matrix-Based Derivation of the Eight Versions of Fast Cosine and Sine Transforms.

Solving Certain Matrix Equations by Means of Toeplitz Computations: Algorithms and Applications.

A Fast Singular Value Algorithm for Hankel Matrices.

A Modified Companion Matrix Method Based on Newton Polynomials.

A Fast Direct Method for Solving the Two-dimensional Helmholtz Equation, with Robbins Boundary Conditions.

Structured Matrices in Unconstrained Minimization Methods.

Computation of Minimal State Space Realizations in Jacobson Normal Form.

High Order Accurate Particular Solutions of the Biharmonic Equation on General Regions.

A Fast Projected Conjugate Gradient Algorithm for Training Support Vector Machines.

A Displacement Approach to Decoding Algebraic Codes.

Some Convergence Estimates for Algebraic Multilevel Preconditioners.

Spectral Equivalence and Matrix Algebra Preconditioners for Multilevel Toeplitz Systems: A Negative Result.

Spectral Distribution of Hermitian Toeplitz Matrices Formally Generated by Rational Functions.

From Toeplitz Matrix Sequences to Zero Distribution of Orthogonal Polynomials.

On Lie Algebras, Submanifolds and Structured Matrices.

Riccati Equations and Bitangential Interpolation Problems with Singular Pick Matrices.

Functions with Pick Matrices having Bounded Number of Negative Eigenvalues.

One-dimensional Perturbations of Selfadjoint Operators with Finite or Discrete Spectrum.

Springer, 2007. - 288 Pages.
This book brings together linear algebra, numerical methods and an easy to use programming environment under Matlab (or Scilab). One of the key features of the book are the worked out examples and exercises at the end of each chapter. The reader is asked to do some numerical experiments in Matlab and then to prove the results theoretically. The...

- 1.90 MB
- added
- modified

- Section: Linear Algebra → Matrices and Determinants

Second Edition. — Princeton University Press, 2009. — 1184 p.
When first published in 2005, Matrix Mathematics quickly became the essential reference book for users of matrices in all branches of engineering, science, and applied mathematics. In this fully updated and expanded edition, the author brings together the latest results on matrix theory to make this the most...

- 9.17 MB
- added
- modified

- Section: Linear Algebra → Matrices and Determinants

Springer, 2007. — 530 p. — (Springer text in Statistics). — ISBN 0387708723; ISBN 978-0-387-70873-7. Matrix algebra is one of the most important areas of mathematics for data analysis and for statistical theory. The first part of this book presents the relevant aspects of the theory of matrix algebra for applications in statistics. This part begins with the fundamental concepts of...

- 3.87 MB
- added
- modified

- Section: Linear Algebra → Matrices and Determinants

World Scientific, 2010. — 598 pp. This book is unique in covering the whole of a triptych consisting of algebraic theory, algorithmic problems, and numerical applications, all united by the essential use of matrix methods. This was the spirit of the 2nd International Conference on Matrix Methods and Operator Equations (23-27 July 2007, Moscow) hosted by the Institute...

- 71.44 MB
- added
- modified

Cambridge University Press, 2007. — 1262 p.
Authors:
William H. Press - Raymer Chair in Computer Sciences and Integrative Biology The University of Texas at Austin
Saul A. Teukolsky - Hans A. Bethe Professor of Physics and Astrophysics Cornell University
William T. Vetterling - Research Fellow and Director of Image Science ZINK Imaging, LLC
Brian P. Flannery - Science,...

- 7.57 MB
- date added unknown
- modified

- Section: Mathematics → Computational Mathematics

Society for Industrial and Applied Mathematics, 1998. — 479 pp.
This book, Basic Decompositions, is the first volume in a projected five-volume series entitled Matrix Algorithms. The other four volumes will treat eigensystems, iterative methods for linear systems, sparse direct methods, and special topics, including fast algorithms for structured matrices.
My intended audience...

- 2.51 MB
- added
- modified