- Orly Alter (University of Utah, USA)
Comparative Spectral Decompositions for Personalized Cancer Diagnostics and Prognostics
(Abstract)
I will describe the development of novel, multi-tensor generalizations of
the singular value decomposition, and their use in the comparisons of
brain, lung, ovarian, and uterine cancer and normal genomes, to uncover
patterns of DNA copy-number alterations that predict survival and response
to treatment, statistically better than, and independent of, the best
indicators in clinical use and existing laboratory tests. Recurring
alterations have been recognized as a hallmark of cancer for over a
century, and observed in these cancers' genomes for decades; however,
copy-number subtypes predictive of patients' outcomes had not been
identified before. The data had been publicly available, but the patterns
remained unknown until the data were modeled by using the multi-tensor
decompositions, illustrating the universal ability of these decompositions
--- generalizations of the frameworks that underlie the theoretical
description of the physical world --- to find what other methods miss.
- Mark Embree* (Virginia Tech, USA)
Nonlinear Eigenvalue Problems: Interpolatory Algorithms and Transient Dynamics
(Abstract)
Nonlinear eigenvalue problems pose intriguing challenges for analysis and
computation. For starters, a finite-dimensional problem can give
infinitely many eigenvalues, quantities that reveal the asymptotic behavior
of associated dynamical systems. Delay differential equations provide an
especially rich source of such problems. We describe several approaches
for approximating solutions to nonlinear eigenvalue problems, based on
ideas originally deployed for model-order reduction: a data-driven
(Loewner) rational interpolation technique, and a structure-preserving
interpolatory projection method. While eigenvalues describe asymptotic
dynamics, the behavior of systems on transient time scales can be more
subtle. We will describe how careful use of pseudospectra can give insight
about the transient behavior of solutions to delay differential equations.
This talk describes collaborative work with Michael Brennan, Alex Grimm,
and Serkan Gugercin.
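A standard scalar example, not taken from the talk, shows how this happens:
the delay differential equation
    $x'(t) = a\,x(t) + b\,x(t-\tau)$,  with delay $\tau > 0$,
has exponential solutions $x(t) = e^{\lambda t}$ exactly when
    $\lambda - a - b\,e^{-\lambda\tau} = 0$,
a transcendental characteristic equation with infinitely many roots
$\lambda$ whenever $b \neq 0$, even though the equation involves only a
single scalar unknown.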
- Chen Greif (The University of British Columbia, Canada)
Null-space Based Block Preconditioners for Saddle-Point Systems
(Abstract)
Iteratively solving large and sparse saddle-point
systems continues to be a challenging task in numerical linear algebra.
In particular, it is important to design preconditioning techniques that
take into account the numerical properties of the underlying discrete
operators. In this talk we consider saddle-point matrices whose leading
block has a low rank. We show that under specific assumptions on the
rank and a few additional mild assumptions, the inverse has unique
mathematical properties. It is possible to utilize null spaces of the
leading block or the off-diagonal blocks as an alternative to Schur
complements. Consequently, a family of indefinite block preconditioners
can be developed. We also introduce a new minimum residual
short-recurrence method for solving saddle-point systems, which is
capable of handling singularity of the leading block.
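For orientation, in generic notation not taken from the talk, a saddle-point
matrix has the 2-by-2 block form
    $K = \begin{bmatrix} A & B^T \\ B & 0 \end{bmatrix}$,
where $A$ is the leading block and $B$, $B^T$ are the off-diagonal blocks.
The classical Schur complement $S = -B A^{-1} B^T$ is unavailable when $A$
is singular, e.g. of low rank, which is where the null spaces of the blocks
provide the alternative described above.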
- Laura Grigori (INRIA Paris, France)
Enlarged Krylov Subspace Methods and Robust Preconditioners
(Abstract)
This talk discusses robust preconditioners and iterative methods for
solving large sparse linear systems of equations. The issues addressed
include robustness as well as communication reduction for increasing
the scalability of linear solvers on large scale computers. The focus
is in particular on enlarged Krylov subspace methods and
preconditioners based on low rank corrections, as well as associated
computational kernels such as computing a low rank approximation of a
sparse matrix. The efficiency of the proposed methods is tested on
matrices arising from linear elasticity problems as well as convection
diffusion problems with highly heterogeneous coefficients.
- Per Christian Hansen (Technical University of Denmark, Denmark)
Convergence Stories of Algebraic Iterative Reconstruction
(Abstract)
Kaczmarz's and Cimmino's methods are examples of algebraic iterative reconstruction methods, primarily used to solve discretized inverse problems in computed tomography. They are very flexible because the underlying system $A x = b$ requires no assumption about the scanning geometry, and it is easy to incorporate convex constraints (e.g., box constraints). Their success in computing regularized solutions is due to a mechanism called semi-convergence.
While the asymptotic convergence for noise-free data is well understood, there are surprisingly few theoretical results related to the convergence for real-world problems with noisy data and model errors. Partly for this reason, we also lack efficient and robust stopping rules that terminate the iterations at the point of semi-convergence.
In this talk I will survey some recent results related to the convergence and semi-convergence of various algebraic iterative reconstruction methods, in both (block) row and (block) column versions, and I will illustrate them with numerical examples.
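A minimal sketch of Kaczmarz's method on a synthetic ill-posed problem (the
test matrix, noise level, and sweep count below are illustrative assumptions,
not material from the talk) shows the semi-convergence referred to above:
the error to the exact solution first decreases and then grows again, so the
iterations should be stopped near its minimum.

import numpy as np

# Illustrative test problem (not from the talk): ill-conditioned system, noisy data.
rng = np.random.default_rng(0)
m, n = 200, 50
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (U * np.logspace(0, -4, n)) @ V.T            # rapidly decaying singular values
x_true = V[:, 0] + 0.5 * V[:, 1]                 # smooth exact solution
b = A @ x_true + 1e-3 * rng.standard_normal(m)   # noisy right-hand side

row_norms2 = np.sum(A * A, axis=1)
x = np.zeros(n)
errors = []
for sweep in range(200):
    for i in range(m):                           # one Kaczmarz sweep: project onto each row's hyperplane
        x += ((b[i] - A[i] @ x) / row_norms2[i]) * A[i]
    errors.append(np.linalg.norm(x - x_true))

# Semi-convergence: the error history typically has a minimum at an intermediate sweep.
print("best sweep:", int(np.argmin(errors)) + 1, " error there:", min(errors))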
- Daniel Kressner (École Polytechnique Fédérale de Lausanne, Switzerland)
Fast Algorithms from Low-Rank Updates
(Abstract)
The development of efficient numerical algorithms for solving large-scale linear systems is one of the success stories of numerical linear algebra that has had a tremendous impact on our ability to perform complex numerical simulations and large-scale statistical computations. Many of these developments are based on multilevel and domain decomposition techniques, which are intimately linked to Schur complements and low-rank updates of matrices. These tools do not carry over in a direct manner to other important linear algebra problems, including matrix functions and matrix equations. In this talk, we describe a new framework for performing low-rank updates of matrix functions. This makes it possible to address a wide variety of matrix functions and matrix structures, including sparse matrices as well as matrices with hierarchical low-rank and Toeplitz-like structures. The versatility of this framework will be demonstrated with several applications and extensions. This talk is primarily based on joint work with Bernhard Beckermann and Marcel Schweitzer.
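For context, the classical low-rank update tool for the matrix inverse is the
Sherman-Morrison-Woodbury identity; the generic numpy check below (my
illustration, not code from the talk) shows how a rank-$k$ update is absorbed
into a small $k \times k$ solve, the kind of shortcut that does not carry over
directly to general matrix functions.

import numpy as np

# Generic illustration (not from the talk): the Sherman-Morrison-Woodbury identity
#   (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}
# reduces a rank-k update of the inverse to a k-by-k "capacitance" solve.
rng = np.random.default_rng(1)
n, k = 300, 5
A = np.diag(2.0 + rng.random(n))                     # easily inverted base matrix
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

Ainv = np.diag(1.0 / np.diag(A))
core = np.linalg.inv(np.eye(k) + V.T @ Ainv @ U)     # small k-by-k matrix
woodbury = Ainv - Ainv @ U @ core @ V.T @ Ainv
direct = np.linalg.inv(A + U @ V.T)
print("Woodbury vs. direct inverse:", np.linalg.norm(woodbury - direct))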
- Valeria Simoncini*^ (Università di Bologna, Italy)
Matrix Equation Techniques for a Class of PDE Problems with Data Uncertainty
(Abstract)
Linear matrix equations arise in a remarkable and growing number of
applications. Classically, they have been extensively
encountered in control theory and eigenvalue problems.
More recently they have been shown to provide a natural platform
in the discretization of certain partial differential equations (PDEs), both in
the deterministic setting, and in the presence of uncertainty in the
data.
We first review some numerical techniques
for solving various classes of large scale linear matrix equations
commonly occurring in applications.
Then we focus on recent developments in the solution of
(systems of) linear matrix equations associated with the numerical
treatment of various stochastic PDE problems.
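As a small, concrete instance of a PDE-induced matrix equation (the
discretization and solver below are my illustrative choices, not material
from the talk), a finite-difference Poisson problem on a tensor-product grid
can be written as a Sylvester equation $T X + X T = F$, with $T$ the
one-dimensional second-difference matrix and $X$ the grid values of the
unknown.

import numpy as np
from scipy.linalg import solve_sylvester

# Illustrative example (not from the talk): a 2D Poisson problem on an n-by-n
# interior grid, discretized with second differences, in the form T X + X T = F.
n = 64
h = 1.0 / (n + 1)
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

xs = np.arange(1, n + 1) * h
F = np.outer(np.sin(np.pi * xs), np.sin(np.pi * xs))   # smooth right-hand side on the grid

X = solve_sylvester(T, T, F)                            # solves T X + X T = F
print("relative residual:", np.linalg.norm(T @ X + X @ T - F) / np.linalg.norm(F))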
- Yangfeng Su (Fudan University, China)
Theory and Computation of 2D Eigenvalue Problems
(Abstract)
The 2D eigenvalue problem (2dEVP) is a class of
double eigenvalue problems first studied by Blum
and Chang in the 1970s. The 2dEVP seeks scalars $\lambda$, $\mu$,
and a corresponding vector $x$ satisfying the equations
    $A x = \lambda x + \mu C x$,
    $x^H C x = 0$,
    $x^H x = 1$,
where $A$ and $C$ are Hermitian and $C$ is indefinite.
We show the connections between the 2dEVP and
well-known numerical linear algebra and optimization
problems such as quadratic programming, the distance to
instability, and the $H_\infty$-norm.
We will discuss (1) fundamental properties of 2dEVP including
well-posedness, types and regularity, (2) perturbation theory
and (3) numerical algorithms with backward error analysis.
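One naive way to read these defining equations (this one-parameter search is
my own illustration, not one of the algorithms of the talk): for each fixed
$\mu$, the first equation is an ordinary Hermitian eigenvalue problem for
$A - \mu C$, and one can scan $\mu$ for a point where the selected
eigenvector also satisfies $x^H C x = 0$.

import numpy as np
from scipy.optimize import brentq

# Naive illustration of the 2dEVP (not an algorithm from the talk): for fixed mu,
# (A - mu*C) x = lambda x is a Hermitian eigenvalue problem; search for a mu at
# which the chosen eigenvector also satisfies x^H C x = 0.  The sign-change
# bracket below is heuristic and can fail for an unlucky branch or problem.
rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n)); A = (A + A.T) / 2            # Hermitian (real symmetric) A
C = np.diag(np.r_[np.ones(n // 2), -np.ones(n - n // 2)])     # Hermitian, indefinite C

def xHCx(mu):
    _, V = np.linalg.eigh(A - mu * C)
    x = V[:, 0]                                  # follow the lowest eigenvalue branch
    return float(x @ C @ x)

mu = brentq(xHCx, -20.0, 20.0)                   # zero of x^H C x along the branch
w, V = np.linalg.eigh(A - mu * C)
lam, x = w[0], V[:, 0]
print("residuals:", np.linalg.norm(A @ x - lam * x - mu * (C @ x)),
      abs(x @ C @ x), abs(x @ x - 1.0))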
- Ulrike Yang (Lawrence Livermore National Laboratory, USA)
On the Design of Algebraic Multigrid for High Performance Computers
(Abstract)
Algebraic multigrid (AMG) is an efficient solver for large-scale
scientific computing and an essential component of many simulation codes.
However, with single-core speeds plateauing, future increases in computing
performance have to rely on more complicated, often heterogeneous computer
architectures, which provide new challenges for efficient implementations
of AMG. How one views the linear system, e.g. in terms of structured grids
and stencils, or a traditional matrix-vector system, can significantly
affect the design and performance of AMG. Structured AMG can take
advantage of additional information in the structured matrix data
structures, potentially leading to more efficient implementations, but is
confined to structured problems, whereas unstructured AMG can be applied
to more general problems. We will discuss these methods, their
implementation, and their performance, and introduce a new semi-structured
multigrid method that can take advantage of the structured parts of a
problem, but is capable of solving more general problems.
- Lexing Ying (Stanford University, USA)
Interpolative Decomposition and Its Applications
(Abstract)
Interpolative decomposition is a simple and yet powerful tool for approximating low-rank
matrices. After discussing the theory and algorithm, we will present a few new applications of
interpolative decomposition in numerical linear algebra, partial differential equations, quantum
chemistry, and machine learning.
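A minimal sketch of one standard way to compute a column interpolative
decomposition, via column-pivoted QR (the rank and test matrix below are
illustrative assumptions, not material from the talk): it expresses
$A \approx A[:,J]\,P$, i.e. every column of $A$ as a combination of $k$
selected columns.

import numpy as np
from scipy.linalg import qr, solve_triangular

# Illustrative sketch (not from the talk): a column interpolative decomposition
#   A ~= A[:, J] @ P
# obtained from a column-pivoted QR factorization A[:, piv] = Q R.
rng = np.random.default_rng(4)
m, n, k = 100, 80, 15
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))   # rank-k test matrix

Q, R, piv = qr(A, mode='economic', pivoting=True)
J = piv[:k]                                       # indices of the k selected "skeleton" columns
P = np.zeros((k, n))
P[:, piv[:k]] = np.eye(k)
P[:, piv[k:]] = solve_triangular(R[:k, :k], R[:k, k:])   # coefficients R11^{-1} R12
print("relative error:", np.linalg.norm(A - A[:, J] @ P) / np.linalg.norm(A))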
* These speakers are supported in cooperation with the International Linear Algebra Society.
^ Hans Schneider ILAS Lecturer