Organizers: Daniel Kressner (EPFL, Switzerland) - Olga Holtz (University of California at Berkeley, USA) - Alan Edelman (MIT, USA)
December 18, 14:35 ~ 15:25 - Room B11
We describe a framework for solving a broad class of infinite-dimensional linear equations, consisting of almost banded operators, which can be used to represent linear ordinary differential equations with general boundary conditions. The framework contains a data structure on which row operations can be performed, allowing for the solution of linear equations by the adaptive QR approach. The algorithm achieves $O(n)$ complexity, where $n$ is the number of degrees of freedom required to achieve a desired accuracy, which is determined adaptively. In addition, special tensor product equations, such as partial differential equations on rectangles, can be solved by truncating the operator along one dimension and using a generalized Schur decomposition. The framework is implemented in the ApproxFun.jl package, written in the Julia programming language.
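The $O(n)$ banded-solve idea can be illustrated outside the spectral setting. The following is a minimal sketch (a finite-difference analogue with eliminated boundary conditions, not ApproxFun's adaptive QR on almost banded operators; grid size and tolerance are illustrative choices) that solves $u'' = f$ with homogeneous Dirichlet conditions through a tridiagonal system in linear time:

```python
import numpy as np
from scipy.linalg import solve_banded

n = 200
h = 1.0 / n
x = np.linspace(0, 1, n + 1)[1:-1]      # interior grid points
f = -np.pi**2 * np.sin(np.pi * x)       # right-hand side of u'' = f

# Assemble the tridiagonal second-difference operator in LAPACK banded storage.
ab = np.zeros((3, n - 1))
ab[0, 1:] = 1.0 / h**2                  # superdiagonal
ab[1, :] = -2.0 / h**2                  # main diagonal
ab[2, :-1] = 1.0 / h**2                 # subdiagonal

u = solve_banded((1, 1), ab, f)         # O(n) banded solve
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error: {err:.2e}")
```

The exact solution here is $u(x) = \sin(\pi x)$, so the error reflects only the $O(h^2)$ discretization; the banded solve itself is exact up to rounding.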
December 18, 15:30 ~ 16:00 - Room B11
We describe the use of the generalized singular value decomposition (GSVD) in numerical linear algebra and random matrix theory. The GSVD is well known in numerical linear algebra but less so in other circles. We describe applications.
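One elementary way to see the GSVD of a pair $(A, B)$ with $B$ invertible: its generalized singular values are the singular values of $AB^{-1}$, or equivalently the square roots of the eigenvalues of the symmetric-definite pencil $(A^T A, B^T B)$. A small numerical sanity check of this equivalence (illustrative only; generic random matrices are assumed):

```python
import numpy as np
from scipy.linalg import eigh, svd

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))   # generically invertible

# Generalized singular values of (A, B): singular values of A B^{-1}.
sigma = svd(A @ np.linalg.inv(B), compute_uv=False)

# Equivalently: square roots of the eigenvalues of the pencil (A^T A, B^T B).
lam = eigh(A.T @ A, B.T @ B, eigvals_only=True)
sigma2 = np.sqrt(np.clip(np.sort(lam)[::-1], 0.0, None))

print(np.allclose(sigma, sigma2))
```

Forming $A^T A$ squares the condition number, so production GSVD codes work directly on $(A, B)$; the pencil form above is just the cleanest way to state the equivalence.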
Joint work with Alan Edelman (MIT, USA).
December 18, 16:00 ~ 16:30 - Room B11
I will introduce new users to Julia, and tell some of the history of the project.
December 18, 17:05 ~ 17:55 - Room B11
We show that every $n\times n$ matrix can be decomposed into (i) a product of $n/2 $ Toeplitz matrices, (ii) a product of $n/2$ Hankel matrices, (iii) a product of $4n$ Vandermonde matrices, (iv) a product of $16n$ bidiagonal matrices, or (v) a product of $n^2$ companion matrices. We will see that such decompositions do not in general hold with other types of structured matrix factors (e.g. circulant, symmetric Toeplitz, persymmetric Hankel, etc).
Joint work with Ke Ye (University of Chicago, USA).
December 18, 18:00 ~ 18:30 - Room B11
During the last years, the Cauchon algorithm (also called the deleting derivation algorithm and the Cauchon reduction algorithm) has been applied in algebra, e.g., to torus-invariant prime ideals in the quantum matrices. In [5] and [6] this algorithm was employed for the recognition of totally nonnegative cells. A real matrix is called totally nonnegative if all its minors are nonnegative. Such matrices arise in a variety of ways in mathematics and its applications, e.g., in differential and integral equations, numerical mathematics, combinatorics, statistics, and computer aided geometric design. For background information we refer to the recently published monographs [3], [7].
In [2] we apply the Cauchon algorithm for a proof of a conjecture posed by the speaker in 1982 [4], see also [3, Section 3.2] and [7, Section 3.2]. This conjecture originated in the interpolation of interval-valued data by B-splines. It concerns the checkerboard ordering, which is obtained from the usual entry-wise ordering on the set of square real matrices of fixed order by reversing the inequality sign for each entry in a checkerboard fashion. The conjecture states the interval property of the nonsingular totally nonnegative matrices, i.e., if the two bound matrices of an interval with respect to this ordering are nonsingular and totally nonnegative, then all matrices lying between the two bound matrices are nonsingular and totally nonnegative, too.
In our talk we also briefly report on some very recent applications of the Cauchon algorithm, viz.
$\bullet$ new determinantal tests for testing a given matrix for total nonnegativity and related properties which require far fewer minors to be checked than the tests known so far,
$\bullet$ finding for each entry of a nonsingular totally nonnegative matrix the largest amount by which this entry can be perturbed without losing the property of total nonnegativity,
$\bullet$ identifying further subclasses of the sign regular matrices exhibiting the interval property, i.e., of matrices with the property that all their minors of fixed order have one specified sign or are allowed also to vanish.
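For small matrices, total nonnegativity can be checked by brute force over all minors. The sketch below is illustrative only (its cost is exponential in the matrix order, in contrast to the far cheaper determinantal tests mentioned above) and uses the classical totally nonnegative symmetric Pascal matrix as an example:

```python
import numpy as np
from itertools import combinations

def is_totally_nonnegative(M, tol=1e-12):
    """Brute-force test: check that every minor of M is nonnegative.
    Exponential cost; only practical for small matrices."""
    n, m = M.shape
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                if np.linalg.det(M[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

# The 3x3 symmetric Pascal matrix is a classical totally nonnegative example.
P = np.array([[1., 1., 1.],
              [1., 2., 3.],
              [1., 3., 6.]])
print(is_totally_nonnegative(P))                               # True
print(is_totally_nonnegative(np.array([[1., 2.],
                                       [3., 1.]])))            # det = -5, so False
```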
References and Literature for Further Reading:
[1] M. Adm and J. Garloff, Invariance of total nonnegativity of a tridiagonal matrix under element-wise perturbation, Oper. Matrices 8 (1) (2014) 129-137.
[2] M. Adm and J. Garloff, Intervals of totally nonnegative matrices, Linear Algebra Appl. 439 (2013) 3796-3806.
[3] S. M. Fallat and C. R. Johnson, Totally Nonnegative Matrices, Princeton Series in Applied Mathematics, Princeton University Press, Princeton and Oxford, 2011.
[4] J. Garloff, Criteria for sign regularity of sets of matrices, Linear Algebra Appl. 44 (1982) 153-160.
[5] K. R. Goodearl, S. Launois and T. H. Lenagan, Totally nonnegative cells and matrix Poisson varieties, Adv. Math. 226 (2011) 779-826.
[6] S. Launois and T. H. Lenagan, Efficient recognition of totally nonnegative matrix cells, Found. Comput. Math. 14 (2) (2014) 371-387.
[7] A. Pinkus, Totally Positive Matrices, Cambridge Tracts in Mathematics 181, Cambridge Univ. Press, Cambridge, UK, 2010.
Joint work with Mohammad Adm (University of Konstanz, Germany).
December 19, 14:30 ~ 15:00 - Room B11
We describe the shortcomings of the current algorithmic solutions for the eigenvalue problem and the basic ingredients for a homotopy method for this problem.
December 19, 15:05 ~ 15:55 - Room B11
We describe possible choices for the initial triple (matrix, eigenvalue, eigenvector) for the homotopy and the corresponding average complexity analyses in the Hermitian and general cases.
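A minimal continuation sketch (illustrative only, not the algorithm of the talk): take the diagonal part $D$ of $A$ as a hypothetical starting matrix, so the initial triple is $(D, d_{11}, e_1)$, and follow the eigenpair along $H(t) = (1-t)D + tA$ with Newton corrections on the bordered system:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # Hermitian (real symmetric) case
D = np.diag(np.diag(A))                # illustrative starting matrix

# Initial triple for H(t) = (1 - t) D + t A: the matrix D with eigenpair (e_1, d_11).
lam = A[0, 0]
v = np.zeros(n); v[0] = 1.0

for t in np.linspace(0.0, 1.0, 51)[1:]:
    H = (1 - t) * D + t * A
    for _ in range(5):                 # Newton on F(v, lam) = (Hv - lam v, (v'v - 1)/2)
        r = H @ v - lam * v
        J = np.block([[H - lam * np.eye(n), -v[:, None]],
                      [v[None, :],          np.zeros((1, 1))]])
        step = np.linalg.solve(J, np.concatenate([-r, [0.0]]))
        v += step[:n]; lam += step[n]
        v /= np.linalg.norm(v)

print(f"residual: {np.linalg.norm(A @ v - lam * v):.2e}")
```

The fixed step size and Newton count are crude stand-ins for the adaptive path-following and certified step-size control that a rigorous homotopy method requires.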
December 19, 16:00 ~ 16:30 - Room B11
In tensor completion, the goal is to fill in missing entries of a partially known tensor under low-rank constraints. We consider low-rank formats that form Riemannian manifolds, such as the Tucker or the tensor train (TT) format. This allows for the application of Riemannian optimization techniques for solving the tensor completion problem. In particular, the nonlinear CG algorithm can be implemented such that it scales linearly in the size of the tensor. We illustrate the use of this algorithm for approximating multivariate functions as well as for parameter-dependent and stochastic partial differential equations.
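As a simplified stand-in for the Riemannian approach (matrix rather than tensor case, and a truncated-SVD projection in place of a manifold retraction and CG direction; sizes, step size, and iteration count below are illustrative), low-rank completion by projected gradient iteration looks like:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 30, 30, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # hidden rank-3 matrix
mask = rng.random((m, n)) < 0.5                                # observed entries

def svd_truncate(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Gradient step on the observed entries, then projection back to rank r.
step = 1.0 / mask.mean()
X = np.zeros((m, n))
for _ in range(500):
    X = svd_truncate(X + step * mask * (M - X), r)

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative error: {rel_err:.2e}")
```

Note the contrast in cost: each iteration here pays for a full SVD, whereas the Riemannian algorithms of the talk exploit the Tucker/TT structure to scale linearly in the tensor size.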
December 19, 17:05 ~ 17:55 - Room B11
Let $A$ and $B$ be invertible, and let $C$ have low rank. Then it is trivial that the rank of the solution to the matrix equation $AXB^T = C$ is not larger than the rank of $C$. Formally speaking, this matrix equation admits a low-rank solution. There is some numerical evidence that the solution of frequently occurring linear equations like $A_1 XB_1^T + \cdots + A_R XB_R^T = C$, where the number of terms is still considerably smaller than the matrix dimension (which might be infinite in a Hilbert space setting), has good low-rank approximability, that is, fast decaying singular values. Exponential decay, as sometimes observed in practice, seems very challenging to prove. It is, however, rather easy to obtain a nontrivial algebraic decay rate by relating the convergence speed of some polynomial approximation to the Kronecker rank of the corresponding operator polynomial. For an eigenvalue problem $AXB^T = \lambda X$ a similar argument is possible, although with considerable restrictions. This simple method of estimating the low-rank approximation error for matrices has important consequences for the low-rank approximability of solutions to Kronecker-structured operator equations in higher-order tensor product spaces, as it provides estimates on the singular value decay of certain matrix unfoldings of the solution tensor, which in turn govern the error in higher-order SVD truncations. The talk is based on joint work with Daniel Kressner.
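The fast singular value decay can be observed numerically on a two-term instance, the Lyapunov equation $AX + XA^T = -bb^T$ with a rank-one right-hand side (an illustrative setup, not taken from the talk; here $A$ is the negative 1D finite-difference Laplacian):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 100
# Stable symmetric A: negative of the standard 1D finite-difference Laplacian.
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
b = np.ones((n, 1))

# Solve A X + X A^T = -b b^T, a two-term equation with a rank-one right-hand side.
X = solve_continuous_lyapunov(A, -b @ b.T)

s = np.linalg.svd(X, compute_uv=False)
print(f"sigma_20 / sigma_1 = {s[19] / s[0]:.2e}")   # rapid singular value decay
```

Although $X$ itself is a full dense matrix, its singular values fall off rapidly, which is exactly the approximability property discussed above.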
December 19, 18:00 ~ 18:30 - Room B11
We describe how cross-kernel matrices, that is, kernel matrices between the data and a custom-chosen set of `feature spanning points', can be used for learning. At the core is a Nyström-type relation which allows kernel matrices to be numerically approximated in terms of cross-kernel matrices. We demonstrate how cross-kernel matrices (A) provide a universal speed-up of kernel learning algorithms, by replacing kernel matrices with cross-kernel matrices that scale linearly in the data, and (B) allow learning of features which approximately vanish on the data. We further give an algorithm, called ideal principal component analysis (IPCA), which we derive as the cross-kernel variant of kernel PCA, serving as an exemplification of both the new features and the speed-up to linear runtime. We conclude with some open questions regarding the approximation properties of cross-kernel matrices.
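A minimal sketch of the Nyström-type relation (with an assumed RBF kernel and feature spanning points chosen by subsampling the data, both illustrative choices): the full kernel matrix is approximated as $K \approx C W^{+} C^T$ from the cross-kernel matrix $C$, at $O(nm^2)$ cost instead of $O(n^2)$:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 2))            # data
Z = X[rng.choice(500, 50, replace=False)]    # "feature spanning points" (a subsample)

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)      # full n x n kernel matrix (for comparison only)
C = rbf(X, Z)      # n x m cross-kernel matrix
W = rbf(Z, Z)      # m x m kernel matrix on the chosen points

# Nystrom-type relation: K ~ C W^+ C^T.
K_hat = C @ np.linalg.pinv(W, rcond=1e-10) @ C.T
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
print(f"relative error: {rel_err:.2e}")
```

A downstream method only ever needs products with $C$ and a small $m \times m$ solve, so the full $K$ never has to be formed.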
Joint work with Louis Theran, Martin Kreuzer.