A general matrix A can be reduced to tridiagonal form by orthogonal
transformations on the left and right: U^T A V = T. We can arrange that the
first columns of U and V are proportional to given vectors b and c. An iterative
form of this process was given by Saunders, Simon, and Yip (SINUM 1988) and
used to solve square systems Ax = b and A^T y = c simultaneously. (One of the
resulting solvers becomes MINRES when A is symmetric and b = c.)
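In NumPy, the iterative process can be sketched as a pair of coupled three-term
recurrences. This is a minimal illustration, not the authors' pseudocode: the
name orth_tridiag is mine, breakdown (a zero beta_j or gamma_j) is not handled,
and reorthogonalization is omitted.

    import numpy as np

    def orth_tridiag(A, b, c, k):
        # Sketch of the Saunders-Simon-Yip process: build orthonormal
        # U and V whose first columns are b/||b|| and c/||c||, and the
        # entries of a tridiagonal T = U^T A V (diagonal alpha,
        # subdiagonal beta, superdiagonal gamma).
        m, n = A.shape
        U, V = np.zeros((m, k + 1)), np.zeros((n, k + 1))
        alpha, beta, gamma = np.zeros(k), np.zeros(k), np.zeros(k)
        beta1, gamma1 = np.linalg.norm(b), np.linalg.norm(c)
        U[:, 0], V[:, 0] = b / beta1, c / gamma1
        for j in range(k):
            alpha[j] = U[:, j] @ (A @ V[:, j])
            p = A @ V[:, j] - alpha[j] * U[:, j]    # next left vector
            q = A.T @ U[:, j] - alpha[j] * V[:, j]  # next right vector
            if j > 0:
                p -= gamma[j - 1] * U[:, j - 1]
                q -= beta[j - 1] * V[:, j - 1]
            beta[j], gamma[j] = np.linalg.norm(p), np.linalg.norm(q)
            U[:, j + 1], V[:, j + 1] = p / beta[j], q / gamma[j]
        return U, V, alpha, beta, gamma, beta1, gamma1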
The approach was rediscovered by Reichel and Ye (NLAA 2008) with emphasis
on rectangular A and least-squares problems Ax ≈ b. The resulting solver was
regarded as a generalization of LSQR (although it doesn't become LSQR in
any special case). Careful choice of c was shown to improve convergence.
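One way the least-squares connection plays out (a sketch under my own
assumptions, reusing orth_tridiag from above): after k steps we have
A V_k = U_{k+1} T_k with T_k the leading (k+1) x k part of T, so minimizing
||Ax - b|| over x in range(V_k) reduces to the small problem
min ||T_k y - beta_1 e_1||. Reichel and Ye update the iterates with short
recurrences; the dense lstsq call below is only for exposition.

    def ls_iterate(A, b, c, k):
        # Hypothetical helper: k-th iterate for min ||A x - b|| with
        # x restricted to the span of the first k right vectors V_k.
        U, V, alpha, beta, gamma, beta1, _ = orth_tridiag(A, b, c, k)
        Tk = np.zeros((k + 1, k))             # (k+1) x k tridiagonal block
        for j in range(k):
            Tk[j, j] = alpha[j]
            Tk[j + 1, j] = beta[j]
            if j > 0:
                Tk[j - 1, j] = gamma[j - 1]
        rhs = np.zeros(k + 1); rhs[0] = beta1  # b = beta1 * u_1
        y = np.linalg.lstsq(Tk, rhs, rcond=None)[0]
        return V[:, :k] @ y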
In his last year of life, Gene Golub became interested in "GLSQR" for
estimating c^T x = b^T y without computing x or y. Golub, Stoll, and Wathen
(ETNA 2008) revealed that the orthogonal tridiagonalization is equivalent to a
certain block Lanczos process. This reminds us of Golub, Luk, and Overton
(TOMS 1981): a block Lanczos approach to computing singular vectors.
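One way such estimates arise (a sketch, assuming square nonsingular A and the
notation above): since A = U T V^T with b = beta_1 u_1 and c = gamma_1 v_1, we
have c^T x = c^T A^{-1} b = beta_1 gamma_1 e_1^T T^{-1} e_1, and truncating T
to its leading k x k block gives a computable approximation with neither x nor
y formed. This mirrors the flavor of the Golub-Stoll-Wathen estimates, though
their exact formulas may differ.

    def ctx_estimate(A, b, c, k):
        # Hedged sketch: approximate c^T x for A x = b from the leading
        # k x k block of T, via c^T A^{-1} b = beta1*gamma1*e1^T T^{-1} e1.
        _, _, alpha, beta, gamma, beta1, gamma1 = orth_tridiag(A, b, c, k)
        Tk = (np.diag(alpha) + np.diag(beta[:k - 1], -1)
              + np.diag(gamma[:k - 1], 1))
        e1 = np.zeros(k); e1[0] = 1.0
        return beta1 * gamma1 * (e1 @ np.linalg.solve(Tk, e1))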
On solving indefinite least squares problems via anti-triangular factorizations:
Nicola Mastronardi, IAC-CNR, Bari, Italy, and Paul Van Dooren, UCL, Louvain-la-Neuve, Belgium
Suppose you have a collection of data matrices, each of which has the same number of columns. The HO-GSVD can be used to identify common features that are implicit across the collection. It works by identifying a certain (approximate) invariant subspace of a matrix that is a challenging combination of the collection matrices. In describing the computational process, I will talk about the Higher-Order CS decomposition and a really weird optimization problem that I bet you have never seen before! Joint work with Orly Alter, Priya Ponnapalli, and Mike Saunders.
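For concreteness, here is a minimal sketch of one published form of that
"challenging combination": the pairwise average S of Gram-matrix quotients
from Ponnapalli, Saunders, Van Loan, and Alter. The explicit inverses are for
clarity only and assume each Gram matrix is nonsingular; eigenvalues of S near
1 flag the (approximately) common subspace.

    import numpy as np

    def hogsvd_subspace(As):
        # Hedged sketch: average pairwise quotients of the Gram matrices
        # G_i = A_i^T A_i, then eigendecompose; eigenvalues near 1 mark
        # directions shared (approximately) by the whole collection.
        Gs = [A.T @ A for A in As]
        n, N = Gs[0].shape[0], len(Gs)
        S = np.zeros((n, n))
        for i in range(N):
            for j in range(i + 1, N):
                S += Gs[i] @ np.linalg.inv(Gs[j]) + Gs[j] @ np.linalg.inv(Gs[i])
        S /= N * (N - 1)
        return np.linalg.eig(S)  # S is nonsymmetric in general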
Differential equations for the approximation of the distance to the closest defective matrix. Joint work with P. Buttà and S. Noschese (Università di Roma La Sapienza) and M. Manetta (Università dell'Aquila).
Model Order Reduction methods for linear systems are well studied, and many successful methods exist. We will review some and explain more recent advances in Parametric Model Order Reduction. The focus will be on methods where we interpolate certain significant measures, computed for specific values of the parameter, by Radial Basis Function interpolation. These measures have a disadvantage in that they behave like eigenvalues of matrices depending on parameters, and we will explain how that can be dealt with in practice. We will furthermore need to introduce a technique to create a medium-size model.
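To illustrate the interpolation step, here is a minimal Gaussian RBF
interpolant in NumPy (my own sketch, not the talk's code; the shape parameter
eps is an assumption). In the eigenvalue-like setting one must first match the
measures consistently across the sampled parameter values, e.g. by continuity,
before interpolating each one.

    import numpy as np

    def rbf_interpolant(params, values, eps=1.0):
        # Hedged sketch: Gaussian radial basis function interpolation of
        # one scalar measure sampled at 1-D parameter values.
        P = np.asarray(params, dtype=float)
        K = np.exp(-(eps * (P[:, None] - P[None, :])) ** 2)  # kernel matrix
        w = np.linalg.solve(K, np.asarray(values, dtype=float))
        return lambda p: np.exp(-(eps * (p - P)) ** 2) @ w

    # Example: f = rbf_interpolant([0.1, 0.5, 0.9], [3.2, 2.7, 4.1])
    # f(0.3) then approximates the measure at an unsampled parameter.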