**Organizers:** Stefan Heinrich (University of Kaiserslautern, Germany) - Aicke Hinrichs (Friedrich Schiller University Jena, Germany)

December 15, 14:30 ~ 15:00 - Room B23

In this work we extend our earlier work, motivated by path-dependent option pricing problems, in which we sought to understand why sparse grid and QMC methods can be applied successfully to option pricing problems even though the integrands do not live in any mixed-derivative smoothness class. That difficulty derives from the ``max function'' in the integrand, which reflects the fact that an option is considered worthless if the payoff falls below the strike price.

In a previous paper (Math. Comp. 82, 383-400, 2013) we showed that if the expected value is expressed as an integral over $\mathbb{R}^d$ then the classical ANOVA decomposition of the integrand for an arithmetic Asian option can have every term smooth except for the very highest term. That highest ANOVA term has many discontinuities in first partial derivatives, but in most cases is expected to be pointwise very small.

In the present work we consider the ANOVA decomposition of the corresponding continuous problem in the Brownian bridge (or L\'evy-Ciesielski) formulation, and show that in this case \textbf{every term in the (infinite) ANOVA decomposition is smooth.} With this result we prepare for an error analysis of the cubature problem for option pricing, in which the discrete-time problem is approximated by the continuous problem, and the error analysis is then applied to the truncated infinite ANOVA expansion, in which every term is smooth.
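As background for the Brownian bridge (L\'evy-Ciesielski) formulation, the following minimal Python sketch generates a discrete Brownian path by midpoint refinement; it illustrates only the standard textbook construction, not the error analysis of the talk, and all names are illustrative.

```python
import numpy as np

def brownian_bridge_path(levels, rng=None):
    """Construct a Brownian path on [0, 1] by Levy-Ciesielski-style
    midpoint refinement: draw W(1) first, then fill in midpoints
    level by level, each with its conditional mean and variance."""
    rng = np.random.default_rng() if rng is None else rng
    n = 2 ** levels
    t = np.linspace(0.0, 1.0, n + 1)
    w = np.zeros(n + 1)
    w[-1] = rng.standard_normal()  # W(1) ~ N(0, 1)
    step = n
    while step > 1:
        half = step // 2
        for i in range(half, n, step):
            # conditional mean of the midpoint plus properly scaled noise;
            # the interval has length step/n, so the variance is step/(4n)
            w[i] = 0.5 * (w[i - half] + w[i + half]) \
                   + np.sqrt(half / (2.0 * n)) * rng.standard_normal()
        step = half
    return t, w
```

Truncating the refinement after a few levels corresponds to truncating the (infinite) expansion, which is the setting in which the abstract's ANOVA terms are considered.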

Joint work with Frances Kuo (UNSW), Michael Griebel (Bonn).

December 15, 15:00 ~ 15:30 - Room B23

We consider anchored and ANOVA spaces of functions with mixed first order partial derivatives bounded in $L_1$ and $L_\infty$ norms. The spaces are weighted and we provide necessary and sufficient conditions so that the corresponding spaces have equivalent norms with constants independent of the number of variables. This includes the case of countably many variables.

Joint work with Grzegorz Wasilkowski.

December 15, 15:30 ~ 16:00 - Room B23

We extend the recent investigation of G. Wasilkowski on the equivalence of weighted anchored and ANOVA $L_1$ and $L_\infty$ norms of function spaces with mixed partial derivatives of order one. Among other norms, we consider $L_p$ norms for $1\le p\le \infty$ and confirm the conjecture that for product weights, summability of the weight sequence is necessary and sufficient.

Joint work with Jan Schneider (University of Rostock, Germany).

December 15, 16:00 ~ 16:30 - Room B23

We consider a quadrature problem on the sequence space $\mathbb R^{\mathbb N}$, where the underlying measure $\mu=N(0,1)^{\mathbb N}$ is given by the countable product of standard normal distributions and the integrands $f:\mathbb R^{\mathbb N}\to\mathbb R$ belong to a unit ball of a reproducing kernel Hilbert space. This space has tensor product form, and the building blocks are weighted Sobolev spaces of once differentiable functions where the function itself and the derivative have bounded $L^2$-norm with respect to the centered normal distribution $N(0,\sigma^2)$ with variance $\sigma^2$.

We consider deterministic algorithms in the worst-case setting, where the cost of evaluating a function at a point is the index of the highest nonzero component of that point. Upper and lower bounds for the complexity are derived.

December 15, 17:00 ~ 17:30 - Room B23

We study the approximation of high-dimensional rank-one tensors using point evaluations. We prove that for certain parameters (smoothness and norm of the $r$th derivative) this problem is intractable, while for other parameters the problem is tractable and the complexity is only polynomial in the dimension. We completely characterize the set of parameters that lead to easy or difficult problems, respectively. In the ``difficult'' case we modify the class to obtain a tractable problem: the problem becomes tractable, with complexity polynomial in the dimension, if the support of the function is not too small.

Joint work with Daniel Rudolf (Jena University, Germany).

December 15, 17:30 ~ 18:00 - Room B23

Within the framework of information-based complexity theory, we study the approximate solution of certain classes of scalar first order partial differential equations, both in the deterministic and the randomized setting. We consider standard information. The analysis is based on the classical method of characteristics and on recent results by Th. Daun and the author on the complexity of parametric ordinary differential equations. We also discuss the case of first order PDEs depending on parameters.

December 15, 18:00 ~ 18:30 - Room B23

We give an overview of recent results on the approximation of ridge functions $f(x)=g(a\cdot x)$. As the class of ridge functions is non-linear, a number of interesting questions arise. We present and analyse several algorithms for the recovery of such functions, and study their numerical performance and optimality. Surprisingly, nearly all sorts of tractability appear when trying to approximate ridge functions in high dimensions from a limited number of function values.

Joint work with Ingrid Daubechies (Duke), Massimo Fornasier (TU Munich), Anton Kolleck (TU Berlin), Sebastian Mayer (Uni Bonn), Karin Schnass (Uni Innsbruck) and Tino Ullrich (Uni Bonn).

December 15, 18:30 ~ 19:00 - Room B23

Calculating optimal weights for the construction of rank-1 lattice rules for high-dimensional integration problems in weighted spaces requires, in general, knowledge of the (weighted) norm of the functions at hand. Evaluating this norm usually requires mixed partial derivatives with respect to each variable at most once (cross-derivatives) in high dimensions.

Consider a function $f : \Omega \rightarrow \mathbb{R}$ whose cross-derivatives are continuous on some convex compact set $\Omega \subset \mathbb{R}^d$. For $d$ up to about $20$, evaluating all $2^d$ cross-derivatives at a single point $x \in \Omega$ is still quite feasible, but for much larger dimensions this task becomes prohibitively expensive. We present a new method for the automatic bounding of cross-derivatives at a relative cost of $O(d^2)$, where $d$ is the dimension of the problem. In the new method we are content with computing the $2\,d+1$ product and order dependent (POD) bounding coefficients $$\Lambda_{0}\,, \quad \overline{\Lambda} \;:=\; (\Lambda_{1}, \dots, \Lambda_{d}) \quad \text{and} \quad \overline{\lambda} \;:=\; (\lambda_{1}, \ldots, \lambda_{d})\,,$$ such that for every $\mathbf{i} \subseteq \{1,2,\ldots,d\}$ we have $$\left| f_\mathbf{i}(x) \right| \;\leq\; |\mathbf{i}|!\, \Lambda_{|\mathbf{i}|} \prod_{j \in \mathbf{i}} \lambda_j\,.$$ This method delivers an effective way of calculating the recently introduced POD weights by minimizing a (less sharp) upper bound on the integration error.
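To make the shape of the bound concrete, here is a minimal sketch (illustrative names and indexing conventions, not the authors' implementation) that evaluates the right-hand side of the POD bound for a given index set $\mathbf{i}$:

```python
from math import factorial

def pod_bound(i_subset, Lambda, lam):
    """Evaluate the POD-type bound |f_i(x)| <= |i|! * Lambda_{|i|}
    * prod_{j in i} lambda_j.  `Lambda` is indexed 0..d, while `lam`
    holds lambda_1..lambda_d stored 0-based (assumed conventions)."""
    k = len(i_subset)
    prod = 1.0
    for j in i_subset:
        prod *= lam[j - 1]
    return factorial(k) * Lambda[k] * prod
```

Only the $2d+1$ coefficients need to be stored, yet the bound is available for all $2^d$ subsets $\mathbf{i}$, which is the point of the method.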

Joint work with Andreas Griewank (Humboldt University of Berlin).

December 16, 14:35 ~ 15:25 - Room B23

In this talk we discuss various results on numerical integration for functions defined on a domain $D \subseteq \mathbb{R}^s$. In quasi-Monte Carlo theory one considers the domain $[0,1]^s$. In this case one wants quadrature points which are well distributed; if the dimension is large this can be a challenging problem. Another problem which often occurs is that the domain is not the unit cube (a standard example is $\mathbb{R}^s$). In this case one can either use some transformation to obtain an integral over the unit cube or construct quadrature rules directly on the given domain. Applications of such integration techniques include option pricing, the estimation of the expected value of solutions of PDEs with random coefficients, and some problems from machine learning.
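The transformation approach mentioned above can be sketched as follows: an integral over $\mathbb{R}^s$ against the standard normal density is mapped to the unit cube via the inverse normal CDF. This is a minimal illustration with plain Monte Carlo points (QMC points would be used in practice); the function names are illustrative and this is not any specific method from the talk.

```python
from statistics import NormalDist
import random

def normal_integral_via_cube(f, s, n, seed=0):
    """Approximate E[f(X)], X ~ N(0, I_s), by drawing points in
    (0,1)^s and mapping them to R^s with the inverse normal CDF."""
    inv = NormalDist().inv_cdf
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = [rng.random() for _ in range(s)]  # point in the unit cube
        x = [inv(ui) for ui in u]             # mapped to R^s
        total += f(x)
    return total / n

# E[x_1^2 + x_2^2 + x_3^2] = 3 for standard normal coordinates
est = normal_integral_via_cube(lambda x: sum(xi * xi for xi in x), s=3, n=20000)
```

Replacing the random points `u` by a low-discrepancy sequence turns this into the QMC version of the transformation approach.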

Joint work with Christoph Aistleitner (JKU Linz, Austria), Johann Brauchart (TU Graz, Austria), Takashi Goda (University of Tokyo, Japan), Peter Kritzer (JKU Linz, Austria), Frances Kuo (UNSW, Australia), Gunther Leobacher (JKU Linz, Austria), Quoc Thong Le Gia (UNSW, Australia), Dirk Nuyens (KU Leuven, Belgium), Friedrich Pillichshammer (JKU Linz, Austria), Christoph Schwab (ETH Z\"urich, Switzerland), Kosuke Suzuki (University of Tokyo, Japan), Takehito Yoshiki (University of Tokyo, Japan) and Houying Zhu (UNSW, Australia).

December 16, 15:30 ~ 16:00 - Room B23

Tractability properties of various notions of discrepancy have been intensively studied in the last decade. In this presentation we consider the so-called weighted star discrepancy which was introduced by Sloan and Wo\'{z}niakowski in 1998. We give an overview of existence results and constructions of point sets which can achieve polynomial or even strong polynomial tractability for suitable choices of weights. Then, for prime numbers $p$, we consider Korobov's $p$-sets:

$P_{p,s}=\{\boldsymbol{x}_0,\ldots,\boldsymbol{x}_{p-1}\}$ with $$\boldsymbol{x}_n=\left(\left\{\frac{n}{p}\right\},\left\{\frac{n^2}{p}\right\},\ldots,\left\{\frac{n^s}{p}\right\}\right)\ \ \ \mbox{ for }\ n=0,1,\ldots,p-1,$$

$Q_{p^2,s}=\{\boldsymbol{x}_0,\ldots,\boldsymbol{x}_{p^2-1}\}$ with $$\boldsymbol{x}_n=\left(\left\{\frac{n}{p^2}\right\},\left\{\frac{n^2}{p^2}\right\},\ldots,\left\{\frac{n^s}{p^2}\right\}\right)\ \ \ \mbox{ for }\ n=0,1,\ldots,p^2-1,$$

and $R_{p^2,s}=\{\boldsymbol{x}_{a,k}\ : \ a,k \in \{0,\ldots,p-1\}\}$ with $$\boldsymbol{x}_{a,k}=\left(\left\{\frac{k}{p}\right\},\left\{\frac{a k}{p}\right\},\ldots,\left\{\frac{a^{s-1} k}{p}\right\}\right)\ \ \ \mbox{ for }\ a,k=0,1,\ldots,p-1,$$

and prove bounds on their weighted star discrepancy which hold for any choice of weights. For product weights we give conditions under which the discrepancy bounds are independent of the dimension $s$. This implies strong polynomial tractability for the weighted star discrepancy. We also show that a very weak condition on the product weights suffices to achieve polynomial tractability.
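The set $P_{p,s}$ defined above is straightforward to generate; the following sketch implements the definition directly (illustrative code, not from the talk), using modular arithmetic to avoid huge integer powers:

```python
def korobov_p_set(p, s):
    """Korobov p-set P_{p,s}: x_n = ({n/p}, {n^2/p}, ..., {n^s/p})
    for n = 0, ..., p-1, where {.} denotes the fractional part."""
    points = []
    for n in range(p):
        point, power = [], 1
        for _ in range(s):
            power = (power * n) % p  # n^k mod p, so {n^k/p} = power/p
            point.append(power / p)
        points.append(tuple(point))
    return points
```

The sets $Q_{p^2,s}$ and $R_{p^2,s}$ can be generated analogously from their definitions, with $p^2$ in place of $p$ or with the double loop over $a$ and $k$.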

Joint work with Josef Dick (UNSW Sydney, Australia).

December 16, 16:00 ~ 16:30 - Room B23

We consider numerical integration of $d$-dimensional functions in (Besov and Triebel-Lizorkin) spaces $A^s_{p,\theta}$ of dominating mixed smoothness on the unit cube. For functions with zero boundary condition we prove that an (equally weighted) cubature rule, as proposed by Frolov in 1976, has the optimal order of convergence if $p,\theta\in (0,\infty)$ and $s > \max\{1/p,1/\theta\}$, where $s$ is the smoothness parameter. In particular, this implies the optimal order of convergence in Sobolev spaces of mixed smoothness in the same range of the parameters, which generalizes the previously known results to $p<2$ and non-integer smoothness. In the region of ``small smoothness'', $s\in (1/p, 1/\theta]$, we prove an upper bound that is expected to be optimal. For Triebel-Lizorkin spaces this bound was previously known only for $d=2$. By standard modifications of the algorithm we obtain the same results for functions without boundary conditions.

Joint work with Tino Ullrich (Hausdorff Center for Mathematics, Germany).

December 16, 17:00 ~ 17:30 - Room B23

The talk is concerned with optimal linear approximation of functions in isotropic periodic Sobolev spaces $H^s(\mathbb{T}^d)$ of fractional smoothness $s>0$ on the $d$-dimensional torus, where the error is measured in the $L_2$-norm. The asymptotic rate -- up to multiplicative constants -- of the approximation numbers is well known. For any fixed dimension $d\in\mathbb{N}$ and smoothness $s>0$ one has $$ (\star)\qquad a_n(I_d: H^s(\mathbb{T}^d)\to L_2(\mathbb{T}^d))\sim n^{-s/d}\qquad\text{as}\quad n\to \infty\,. $$ In the language of IBC, the $n$-th approximation number $a_n(I_d)$ is nothing but the worst-case error of linear algorithms that use at most $n$ arbitrary pieces of linear information. Clearly, for numerical issues and questions of tractability one needs precise information on the constants hidden in $(\star)$, in particular their dependence on $d$.

For any fixed smoothness $s>0$, the exact asymptotic behavior of the constants as $d\to\infty$ will be given in the talk. Moreover, I will present sharp two-sided estimates in the preasymptotic range, that is, for `small' $n$. Here an interesting connection to entropy numbers in finite-dimensional $\ell_p$-spaces turns out to be very useful.

Joint work with Sebastian Mayer (Bonn), Winfried Sickel (Jena) and Tino Ullrich (Bonn).

December 16, 17:30 ~ 18:00 - Room B23

Using tools taken from the theory of operator ideals and $s$-numbers, we develop a general approach to transferring estimates for $L_2$-approximation of Sobolev functions into results for $L_\infty$-approximation, under detailed control of all involved constants. As an illustration, we derive results for isotropic Sobolev spaces $H^s (\mathbb{T}^d)$ and Sobolev spaces of dominating mixed smoothness $H^s _{\text{mix}} (\mathbb{T}^d)$, always equipped with natural norms. Some comments on related questions for Besov spaces will also be given.

Joint work with Fernando Cobos (Universidad Complutense de Madrid, Spain) and Thomas Kuehn (University of Leipzig, Germany).

December 16, 18:00 ~ 18:30 - Room B23

We generalize the results of Bilyk et al. on discrepancy in spaces with bounded mean oscillation and in exponential Orlicz spaces to arbitrary dimension. In particular, we use dyadic harmonic analysis to prove upper bounds of the BMO and exponential Orlicz space norms of the discrepancy function for the so-called order 2 digital nets. Such estimates play an important role as an intermediate step between the well-understood $L_p$ bounds and the still elusive $L_\infty$ asymptotics of the discrepancy function in arbitrary dimensions.

Joint work with Dmitriy Bilyk (University of Minnesota, USA).

December 17, 14:30 ~ 15:00 - Room B23

For analytic multivariate problems we modify the usual concepts of tractability by replacing the pair $(d,\varepsilon)$ by $(d,1+\log\,\varepsilon^{-1})$, where $d$ denotes the number of variables and $\varepsilon$ is an error threshold. It turns out that for some analytic multivariate problems we can get positive tractability results for this more demanding setting. We survey current results for multivariate integration and approximation defined over reproducing kernel Hilbert spaces. These results were obtained by J. Dick, G. Larcher, P. Kritzer, F. Kuo, F. Pillichshammer, I. Sloan, and the author.

December 17, 15:00 ~ 15:30 - Room B23

We consider integration and approximation of functions in a class of Hilbert spaces of analytic functions defined on $\mathbb{R}^s$. The functions are characterized by the property that their Hermite coefficients decay exponentially fast. The function spaces are weighted by two weight sequences. For numerical integration, we use Gauss-Hermite quadrature rules and show that the errors of our algorithms decay exponentially fast. Furthermore, we consider $L_2$-approximation where the algorithms use information based on either arbitrary linear functionals or function evaluations. Also in the case of approximation we obtain exponential error convergence. For given $\varepsilon>0$, we study tractability in terms of $s$ and $\log\varepsilon^{-1}$ and give necessary and sufficient conditions under which we achieve exponential convergence with various types of tractability.
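For readers unfamiliar with Gauss-Hermite rules, here is a minimal one-dimensional sketch (illustrative, not the talk's algorithm): the classical nodes and weights for the weight $e^{-x^2}$ are rescaled so that the rule integrates against the standard normal density.

```python
import numpy as np

def gauss_hermite_normal(f, n):
    """Approximate E[f(X)] for X ~ N(0, 1) with an n-point
    Gauss-Hermite rule, rescaled from the weight exp(-x^2)
    to the standard normal density via x -> x*sqrt(2)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    x = nodes * np.sqrt(2.0)      # change of variables
    w = weights / np.sqrt(np.pi)  # weights now sum to 1
    return float(np.sum(w * f(x)))
```

An $n$-point rule of this kind is exact for polynomials up to degree $2n-1$; in the spaces of the abstract, with exponentially decaying Hermite coefficients, this is the mechanism behind the exponential error decay.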

Joint work with Christian Irrgeher (Johannes Kepler University Linz, Austria), Gunther Leobacher (Johannes Kepler University Linz, Austria), Friedrich Pillichshammer (Johannes Kepler University Linz, Austria) and Henryk Wozniakowski (Columbia University, USA; University of Warsaw, Poland).

December 17, 15:30 ~ 16:00 - Room B23

In the last 20 years a whole hierarchy of notions of tractability was proposed and analyzed by several authors. These notions are used to classify the computational hardness of continuous numerical problems in terms of the behavior of their information complexity $n(\varepsilon, d)$ as a function of the accuracy $\varepsilon$ and the dimension $d$. By now a lot of effort has been spent on either proving quantitative positive results (such as, e.g., the concrete dependence on $\varepsilon$ and $d$ within the well-established framework of polynomial tractability), or on qualitative negative results (which, e.g., state that a given problem suffers from the so-called curse of dimensionality). Although several weaker types of tractability were introduced recently, the theory of information-based complexity still lacks a notion which allows one to quantify the exact (sub-/super-)exponential dependence of $n(\varepsilon,d)$ on both parameters $\varepsilon$ and $d$.

In this talk we present the notion of $(s,t)$-weakly tractable problems which attempts to fill this gap. Within this framework the parameters $s$ and $t$ are used to quantitatively refine the huge class of polynomially intractable problems. For compact operators between Hilbert spaces we provide characterizations of $(s,t)$-weak tractability (w.r.t. various settings) in terms of singular values. In addition, our new notion will be illustrated by examples such as embedding problems of Sobolev spaces (as studied recently by Kühn, Sickel, and Ullrich). In particular, we complete the characterization of weak tractability for these problems.

Joint work with Pawel Siedlecki (University of Warsaw, Poland).

December 17, 16:00 ~ 16:30 - Room B23

We study the information complexity $n(\varepsilon,d)$ of linear tensor product problems in the worst and average case settings in terms of two recently introduced notions of tractability: uniform weak tractability and $(s,t)$-weak tractability. Here, $d$ denotes the number of variables and $\varepsilon$ is an error threshold. Uniform weak tractability means that the information complexity $n(\varepsilon,d)$ is not an exponential function of any positive power of $\varepsilon^{-1}$ and/or $d$, whereas $(s,t)$-weak tractability means that the information complexity $n(\varepsilon,d)$ is not an exponential function of $\varepsilon^{-s}$ and/or $d^{\,t}$. These notions allow us to refine the tractability hierarchy of multivariate problems. We give necessary and sufficient conditions for these notions of tractability for homogeneous linear tensor product problems. This is also done for some nonhomogeneous linear tensor product problems in terms of their intrinsic nonhomogeneity parameters, such as their regularity with respect to successive variables or the weights associated with variables and groups of variables. The work on $(s,t)$-weak tractability has been done jointly with Markus Weimar.

December 17, 17:05 ~ 17:55 - Room B23

The evaluation of integrals of functions with a large or even infinite number of variables is an important task in physics, quantum chemistry and quantitative finance. If one wants, for instance, to compute the expectation of a functional of a stochastic process, then by employing a suitable series expansion of the process the problem can be transformed into the problem of calculating the integral of a function of infinitely many variables.

For the moment let us focus on infinite-dimensional integration. A valuable technique to tackle the integration problem is to use a function decomposition -- the integrand is represented as an infinite sum of functions of finitely many variables. Cleverly designed algorithms, such as multilevel algorithms or multivariate decomposition methods, deal with a finite number of important summands. They balance the computational burden spent on the corresponding finite-dimensional integration sub-problems in order to minimize the computational cost for achieving a given error tolerance.

Important function decompositions are, e.g., the well-known ANOVA (``Analysis of Variance'') decomposition or anchored decompositions.

It turns out that one function decomposition may be more convenient for error analysis while another is more convenient for the actual design of competitive algorithms. An example is the approximation of certain classes of integrands of infinitely many variables by randomized algorithms: on the one hand, the ANOVA decomposition is a valuable tool for the error analysis, but even the explicit computation of the ANOVA components of low order is infeasible (in the sense that it is more expensive than the original integration problem). On the other hand, the components of anchored decompositions can be computed efficiently and can therefore be used to design algorithms, but the direct error analysis may be complicated.

In this talk we present optimal results for high- and infinite-dimensional integration. In some settings (depending on the class of integrands we consider, the class of algorithms we admit, and the way we account for the computational cost) one can derive these results directly, and we will see situations where the results can be transferred to other settings.

Joint work with Jan Baldeaux (UTS Sydney, Australia), Josef Dick (UNSW Sydney, Australia), Mario Hefter (TU Kaiserslautern, Germany), Aicke Hinrichs (JKU Linz, Austria), and Klaus Ritter (TU Kaiserslautern, Germany).

December 17, 18:00 ~ 18:30 - Room B23

We study algorithms for $L^p$-approximation ($1\le p<\infty$) of functions $f:[a,b]\to{\mathbb R}$ that are piecewise $r$-times continuously differentiable and whose $r$th derivatives are piecewise Hölder continuous with exponent $\rho\in(0,1]$. The singular points of $f$ are unknown. The available information is blurred, given as $y_i=f(x_i)+e_i$, $1\le i\le n$, where $|e_i|\le\delta$. We show that a necessary and sufficient condition for the error of approximation to be at most $\varepsilon$ is that $n\asymp\varepsilon^{-1/(r+\rho)}$ and $\delta\asymp\varepsilon$. The optimal algorithms use adaptive information. This generalizes previous results, where information was exact and $\rho=1$.

Joint work with Pawel Morkisz (AGH Krakow, Poland).

December 17, 18:30 ~ 19:00 - Room B23

Problems defined on spaces of piecewise smooth functions are known to be difficult, as nonadaptive algorithms that work well for globally smooth functions usually fail. Such problems are even more difficult when the information is in addition corrupted by noise. Fundamental questions are then: what is the acceptable noise level that allows one to solve the problem within a given error, and which algorithms should be used? In this talk we consider the problem of detecting the singular points of piecewise Hölder functions based on noisy function evaluations. The noise is assumed to be bounded. This problem is important since for many other problems, like function approximation or integration, one has to localize the singular points in the first place. We also provide a numerical illustration.

Joint work with Leszek Plaskota (University of Warsaw, Poland).