**Organizers:** Annalisa Buffa (IMATI, Italy) - Angela Kunoth (University of Cologne, Germany) - Pedro Morín (Universidad Nacional del Litoral, Argentina)

December 11, 14:30 ~ 15:10 - Room B22

In this talk I will discuss recent progress in constructing adaptive algorithms for controlling errors in the maximum norm. The main focus of the talk will be the construction of robust a posteriori error estimators for singularly perturbed elliptic reaction-diffusion problems. Time permitting, I may also discuss the sharpness of logarithmic factors that typically arise in maximum norm a posteriori estimates.

Joint work with Natalia Kopteva (University of Limerick).

December 11, 15:35 ~ 16:25 - Room B22

I will review the literature on the weak convergence analysis of numerical methods for stochastic evolution problems driven by noise. Then I will present a new method of proof, which is based on refined Sobolev-Malliavin spaces from the Malliavin calculus. It does not rely on the use of the Kolmogorov equation or the Ito formula and is therefore applicable also to non-Markovian equations, where these are not available. We use it to prove weak convergence of fully discrete approximations of the solution of the semilinear stochastic parabolic evolution equation with additive noise as well as a semilinear stochastic Volterra integro-differential equation.

Joint work with Adam Andersson (Chalmers University of Technology and University of Gothenburg, Sweden), Mihaly Kovacs (University of Otago, New Zealand) and Raphael Kruse (TU Berlin, Germany).

December 11, 17:00 ~ 17:20 - Room B22

We consider the heat equation in a weak space-time formulation with random right-hand side and random spatial operator. The existence and uniqueness of a solution can be proven by the Banach-Necas-Babuska theorem. In this work we allow the spatial operator $A$ to have lower and upper bounds depending on a stochastic parameter $\omega$, i.e., we consider random variables $A_{\rm \min }(\omega)$ and $A_{\rm max}(\omega)$ as lower and upper bounds. The $L_p$-regularity of the solution and its connection to the random variables bounding $A$ can be proven. A similar approach is applied to a full space-time Petrov-Galerkin discretization. The stability of such an approach requires a lower bound for the discrete inf-sup condition independent of the grid spacing. Using similar ideas, we can prove stability when allowing a finer discretization for the test space than for the solution space.

Joint work with Stig Larsson (Chalmers University of Technology) and Matteo Molteni (Chalmers University of Technology).

December 11, 17:30 ~ 17:50 - Room B22

We study regularity properties of solutions to operator equations on patchwise smooth manifolds $\partial\Omega$ such as, e.g., boundaries of polyhedral domains $\Omega \subset \mathbb{R}^3$. Using suitable biorthogonal wavelet bases $\Psi$, we introduce a new class of Besov-type spaces $B_{\Psi,q}^\alpha(L_p(\partial \Omega))$ of functions $u\colon\partial\Omega\rightarrow\mathbb{C}$. Special attention is paid to the rate of convergence for best $n$-term wavelet approximation to functions in these scales, since this determines the performance of adaptive numerical schemes. We show embeddings of (weighted) Sobolev spaces on $\partial\Omega$ into $B_{\Psi,\tau}^\alpha(L_\tau(\partial \Omega))$, $1/\tau=\alpha/2 + 1/2$, which lead us to regularity assertions for the equations under consideration. Finally, we apply our results to a boundary integral equation of the second kind which arises from the double layer ansatz for Dirichlet problems for Laplace's equation in $\Omega$. The talk is based on two recent papers which arose from the DFG-Project ''BIOTOP: Adaptive Wavelet and Frame Techniques for Acoustic BEM'' (DA 360/19-1):

Dahlke, S. and Weimar, M.: Besov regularity for operator equations on patchwise smooth manifolds. Preprint 2013-03, Fachbereich Mathematik und Informatik, Philipps-Universität Marburg. To appear in Found. Comput. Math.

Weimar, M.: Almost diagonal matrices and Besov-type spaces based on wavelet expansions. Preprint 2014-06, Fachbereich Mathematik und Informatik, Philipps-Universität Marburg. Submitted.

Joint work with Stephan Dahlke (Philipps-University Marburg, Germany).

December 12, 14:30 ~ 15:10 - Room B22

The hp-adaptive numerical methods for PDEs combine domain partitioning with the assignment of a polynomial degree to each element of the resulting refinement. The main objective of this talk is to introduce a framework that streamlines the process of making adaptive decisions.

We consider domain partitioning based on a fixed binary refinement scheme and a coarse-to-fine routine for making adaptive decisions about the elements to be split and the polynomial orders to be assigned. The problem of finding near-optimal results is managed by using greedy algorithms on binary trees and a modification of the local errors that takes into account the local complexity of the adaptive approximation. We prove that the algorithm provides a near-best approximation to any given function.
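The coarse-to-fine greedy decision process can be illustrated in a toy one-dimensional setting. The following sketch is ours, not the algorithm of the talk: each leaf of the binary tree carries an interval and a polynomial degree, and each greedy step takes the worst element and refines it in $h$ or in $p$, whichever is predicted to reduce its local error more (here measured by a least-squares polynomial fit).

```python
import numpy as np

def local_error(f, a, b, p, n=64):
    # L2-type error of the degree-p least-squares polynomial fit of f on [a, b]
    x = np.linspace(a, b, n)
    y = f(x)
    c = np.polyfit(x, y, p)
    return float(np.sqrt(np.mean((y - np.polyval(c, x))**2) * (b - a)))

def greedy_hp(f, steps=20):
    # Leaves of the binary tree: (a, b, degree).  Each greedy step pops the
    # element with the largest local error and either bisects it (h-refinement)
    # or raises its polynomial degree (p-refinement), whichever is predicted
    # to give the smaller local error.
    leaves = [(0.0, 1.0, 1)]
    for _ in range(steps):
        errs = [local_error(f, a, b, p) for (a, b, p) in leaves]
        k = int(np.argmax(errs))
        a, b, p = leaves.pop(k)
        m = 0.5 * (a + b)
        e_h = max(local_error(f, a, m, p), local_error(f, m, b, p))
        e_p = local_error(f, a, b, p + 1)
        if e_h < e_p:
            leaves += [(a, m, p), (m, b, p)]   # split the element
        else:
            leaves.append((a, b, p + 1))       # raise the polynomial order
    return leaves
```

Run on a function with a boundary singularity such as $\sqrt{x}$, the tree typically bisects near the singularity and raises degrees where the function is smooth, mimicking the hp-adaptive behaviour described above.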

December 12, 15:30 ~ 16:10 - Room B22

We consider the $h$-version of the boundary element method (BEM) in 2D and 3D on shape regular meshes. We show convergence and prove quasi-optimality of an adaptive BEM (ABEM), taking Symm's integral equation as our model problem for a first kind integral equation. Optimality here means that the algorithm realizes the best convergence rate achievable for the solution within an approximation class that is characterized by the best possible decay rate of the error indicator under the admissible mesh refinements (here: newest vertex bisection). The error indicators that drive the adaptive algorithm are of residual type and hark back to (Carstensen and Stephan 1995; Carstensen, Maischak and Stephan 2001). For the FEM on shape regular meshes, similar convergence and optimality results are available (Stevenson 2007; Cascon, Kreuzer, Nochetto, Siebert 2008). The BEM setting is more involved and requires different mathematical tools since the operators and pertinent norms are non-local. This mandates in particular the use of non-standard inverse estimates for integral operators, which we present in this talk. We will also discuss extensions of the algorithm to account for data approximation and applications to FEM-BEM coupling.

Joint work with Michael Feischl (Vienna University of Technology, Austria), Michael Karkulik (Pontificia Universidad Catolica de Chile, Chile) and Dirk Praetorius (Vienna University of Technology, Austria).

December 12, 17:05 ~ 17:55 - Room B22

Problems in high spatial dimensions are typically subject to the "curse of dimensionality" which roughly means that the computational work, needed to approximate a given function within a desired target accuracy, increases exponentially in the spatial dimension. A possible remedy is to seek problem dependent dictionaries with respect to which the function possesses sparse approximations. Employing linear combinations of particularly adapted rank-one tensors falls into this category. In this talk we highlight some recent developments centering on the adaptive solution of high dimensional operator equations in terms of stable tensor expansions. Some new concepts related to tensor contractions, tensor recompression, coarsening, and rescaling operators are outlined. Some essential issues are addressed that arise in the convergence and complexity analysis but have been largely ignored when working in a fully discrete setting. In particular, when dealing with high-dimensional diffusion problems, a central obstruction is related to the spectral properties of the underlying operator which is an isomorphism only when acting between spaces that are not endowed with tensor product norms. The theoretical results are illustrated by numerical experiments.
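The recompression step mentioned above can be illustrated in the simplest two-way (matrix) case, where it reduces to an SVD truncation; the function below is our illustrative sketch, not part of the talk, and truncates to the smallest rank whose Frobenius-norm error stays below a given tolerance.

```python
import numpy as np

def recompress(C, eps):
    # Recompress a two-way coefficient array C to the smallest rank r whose
    # truncation error in the Frobenius norm is below eps (SVD truncation).
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    # tail[i] = Frobenius error committed by keeping only the first i terms
    tail = np.sqrt(np.cumsum(s[::-1] ** 2)[::-1])
    r = len(s)
    for i in range(len(s)):
        if tail[i] <= eps:
            r = i
            break
    # return the rank-r factors (columns of U scaled by singular values)
    return U[:, :r] * s[:r], Vt[:r, :]
```

In higher dimensions the same idea is applied mode by mode to the matricizations of the coefficient tensor, which is where the interplay with tensor product norms discussed in the talk becomes delicate.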

Joint work with Markus Bachmayr (RWTH-Aachen, Germany).

December 12, 18:00 ~ 18:40 - Room B22

We present equilibrated flux a posteriori error estimates in a unified setting for conforming, nonconforming, discontinuous Galerkin, and mixed finite element discretizations of the two-dimensional Poisson problem. Relying on the equilibration by mixed finite element solution of patchwise Neumann problems, the estimates are guaranteed, locally computable, locally efficient, and robust with respect to polynomial degree. Maximal local overestimation is guaranteed as well. Numerical experiments suggest asymptotic exactness for the incomplete interior penalty discontinuous Galerkin scheme.

Joint work with Alexandre Ern (CERMICS, Universite Paris-Est, France).

December 13, 14:30 ~ 15:10 - Room B23

This talk is concerned with developing numerical techniques for the adaptive application of global operators of potential type in wavelet coordinates. This is a core ingredient for a new type of adaptive solvers that has so far been explored primarily for partial differential equations. We shall show how to realize asymptotically optimal complexity in the present context of global operators. Asymptotically optimal means here that any target accuracy can be achieved at a computational expense that stays proportional to the number of degrees of freedom (within the setting determined by an underlying wavelet basis) that would ideally be necessary for realizing that target accuracy if full knowledge about the unknown solution were given. The theoretical findings are supported and quantified by numerical experiments.

Joint work with Manuela Utzinger (University of Basel).

December 13, 15:30 ~ 16:10 - Room B23

Adaptive finite element methods (AFEM) usually rely on an iterative procedure that consists of four fundamental modules. At each iterative step, the adaptive loop starts with the approximation of the solution with respect to the current computational mesh. A posteriori error estimates are then computed in terms of local indicators associated to each single mesh element. Subsequently, a marking strategy selects the elements with higher values of the local error indicator. Finally, a refinement procedure constructs the refined mesh starting from the set of marked elements. In general, the refinement procedure identifies the mesh with an increased level of resolution for the next iteration by refining not only the marked elements, but also a suitable set of elements in their neighbourhood. This makes it possible to guarantee certain properties of the resulting mesh that preserve the error estimates previously computed. In particular, specific bounds for the number of non-zero basis functions on any mesh element play a key role in the development of an adaptivity theory. In order to extend recent results obtained in the AFEM context to the isogeometric setting, we rely on local refinement techniques based on adaptive spline spaces. In particular, by exploiting the truncated basis for hierarchical B-spline spaces together with suitable mesh configurations, we will provide simple residual-type error estimates and prove the convergence of the adaptive procedure.

Joint work with Annalisa Buffa (IMATI “E. Magenes” - CNR, Italy).

December 13, 17:00 ~ 17:40 - Room B23

The computation of singular phenomena (shocks, defects, dislocations, interfaces, cracks) arises in many complex systems. For computing such phenomena, it is natural to seek methods that are able to detect them and to devote the necessary computational recourses to their accurate resolution. At the same time, we would like to have mathematical guarantees that our computational methods approximate physically relevant solutions. Our purpose in this talk is to review results and discuss related computational challenges for such nonlinear problems modeled by PDEs. In addition we shall briefly discuss issues related to Micro / Macro adaptive modeling and methods, in particular related to atomistic/continuum coupling in crystalline materials.

December 13, 17:50 ~ 18:30 - Room B23

We study the adaptive finite element approximation of the Dirichlet problem $-\Delta u = f$ with zero boundary values using newest vertex bisection. Our approach is based on the minimization of the corresponding Dirichlet energy and works for lower and higher order elements. We show that the maximum strategy attains every energy level with a number of degrees of freedom that is proportional to the optimal number. As a consequence we achieve instance optimality of the error.

Joint work with Christian Kreuzer (Bochum, Germany) and Rob Stevenson (Amsterdam, Netherlands).

In this work we develop a posteriori error estimates for second order linear elliptic problems with point sources in two- and three-dimensional domains. We prove a global upper bound and a local lower bound for the error measured in a weighted Sobolev space. The weight considered is a (positive) power of the distance to the support of the Dirac delta source term, and belongs to the Muckenhoupt class $A_2$. The theory hinges on local approximation properties of either Cl\'ement or Scott-Zhang interpolation operators, without the need for modifications, and makes use of weighted estimates for fractional integrals and maximal functions. Numerical experiments with an adaptive algorithm yield optimal meshes and very good effectivity indices.

Joint work with Eduardo M. Garau (Universidad Nacional del Litoral and IMAL-CONICET, Argentina) and Pedro Morin (Universidad Nacional del Litoral and IMAL-CONICET, Argentina).

We develop an algorithm to approximate minimal surface problems using isogeometric spaces. In particular, we take advantage of the higher regularity of these spaces. The scheme is implemented in the software library IGATOOLS (www.igatools.org), which allows a seamless coding of the so-called direct tensor notation. The latter permits us to implement a Newton method, for which we find an explicit formula for the second derivative of the area functional.
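For orientation, in the nonparametric (graph) case the derivatives entering such a Newton step can be written down explicitly; the following standard formulas, with $Q(u)=\sqrt{1+|\nabla u|^2}$, are included for illustration only (the isogeometric setting of the talk may differ):

\begin{equation} A(u) = \int_\Omega \sqrt{1+|\nabla u|^2}\,dx, \qquad A'(u)\,v = \int_\Omega \frac{\nabla u\cdot\nabla v}{Q(u)}\,dx, \end{equation}

\begin{equation} A''(u)(v,w) = \int_\Omega \left[ \frac{\nabla v\cdot\nabla w}{Q(u)} - \frac{(\nabla u\cdot\nabla v)(\nabla u\cdot\nabla w)}{Q(u)^{3}} \right] dx. \end{equation}

Each Newton iteration then solves the linear problem $A''(u_k)(v,w) = -A'(u_k)\,w$ for the update $v$ and sets $u_{k+1}=u_k+v$.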

Joint work with Sebastián Pauletti (UNL, IMAL, Argentina) and Diego Sklar (UNL, IMAL, Argentina).

{\bf QUATERNION VORTEX METHODS}

We show the advantages of using the quaternionic expression of some PDEs, in this case the Navier-Stokes equations, to obtain a faster and simpler numerical solution via vortex methods.

The Navier-Stokes equations for incompressible viscous flow are:

\begin{eqnarray} \frac{Du}{Dt}=-\nabla P+\frac{1}{R}\nabla^2u\;\;{\rm in}\; D\\ \nabla\cdot u=0\;\;{\rm in}\; D\\ u=0\;\;{\rm on}\;\partial D \end{eqnarray}

Here $u$ is the velocity, $P$ the pressure and $R$ the Reynolds number. Defining the vorticity $\xi$ as \begin{equation} \xi=\nabla\times u, \end{equation} we obtain the vorticity transport equation \begin{equation} \frac{D\xi}{Dt}=(\xi\cdot \nabla )u+\frac{1}{R}\nabla^2\xi. \end{equation}

Since $\nabla\cdot u=0$ and $\xi =\nabla\times u$, there exists a vector function $\psi (x)$ such that $u=\nabla\times\psi$, and then \begin{equation} \nabla^2\psi=-\xi. \end{equation}

In 3D $\psi$ is the vector potential; in 2D it is the stream function.

Applying ideas of quaternionic and Clifford analysis, we can transform the system into a single non-linear equation for the vorticity $\xi$ alone. To this end we use the higher-dimensional version of the Borel-Pompeiu formula: \begin{equation} TDu(x)=u(x)-Fu(x), \end{equation}

where $T$ is the $T$-operator (Teodorescu transform), $D$ the Dirac operator and $F$ the Cauchy integral. The Cauchy integral depends only on the boundary values of $u$. That means that if $u=0$ on the boundary, this part can be dropped from the formula.

Moreover, for a quaternion-valued function $(0,u)$ ($u$ being the velocity vector), \begin{equation} Du=(-{\rm div}\, u, \ {\rm rot}\, u). \end{equation}

Since we are working with divergence-free vectors, $$Du=(0,\ {\rm rot}\, u),$$ where $${\rm rot}\, u= \nabla\times u.$$

Thus $u=TDu$, and with $Du={\rm rot}\, u= \xi$ it follows that \begin{equation} u=T\xi. \end{equation} This is an expression describing the velocity $u$ explicitly in terms of the vorticity $\xi$. If the boundary values of $u$ are not zero but some known quantity, we have the additional known summand $F$(boundary values of $u$). The operators $T$ and $F$ are defined as:

$$(T_Gu)(x)=-\int_Ge(x-y)u(y)dG_y$$

$$(F_{\gamma}u)(x)=\int_{\gamma}e(x-y)\alpha (y)u(y)d\gamma_y$$

Here $\alpha$ is the outer normal to $\gamma$ at the point $y$ and $e(x)$ is the fundamental solution (generalized Cauchy kernel) of the Dirac operator.

In this way, substituting in the above equations, we obtain a nonlinear equation in $\xi$ instead of a system in $u$ and $\xi$. To find representation formulas and numerical methods for $\xi$ is one of the goals of the project. Because we have to evaluate only the vorticity (and not, in addition, the velocity), a better efficiency of this approach is expected.

To apply numerical methods to these equations we only need discrete expressions for the main operators. The discrete Laplacian is built from forward and backward difference operators, \begin{equation} \Delta_h u= \sum_{k} D^{+}_{k}D^{-}_{k}u, \qquad D^{+}_{k}D^{-}_{k}u=\frac{1}{h^{2}}\left[ u_{k+1,j}-2u_{k,j}+u_{k-1,j}\right]. \end{equation} The Teodorescu transform also has a discrete expression: \begin{equation} \left( T_{h}f\right)(x) =\sum_{y\in G_{h}}e_{h}(x-y)f(y)\,h^{3}, \end{equation} where $e_h$ is the discrete fundamental solution of $D$. We also have \begin{equation} \left( F_{h}f\right)(x) =\sum_{y\in \gamma_{h}}e_{h}(x-y)\alpha (y)f(y)\,h^{2}. \end{equation}

So now we only have to apply the above in the classical vortex method as follows:

1) Create an initial particle field that approximates $\xi_{i}$ by placing uniformly spaced particles into the support of $\xi_{i}$ and by setting their vectorial circulation to the local value of $\xi_{i}$.

2) Create a triangulation $S=\{S_{i}\}$ of the boundaries and place immovable vortex particles at the triangles' centres.

3) Use a standard time-stepping technique, e.g., a Runge-Kutta method or a multistep method, for advancing the ODEs in time. In order to evaluate the vorticities and their gradients in each (sub-)step, do the following:

a) Compute $\xi_{i}$ at each particle location as well as at the quadrature points on the boundaries with the help of the Biot-Savart law.

b) Compute $\nabla\xi_{i}$ at each particle location, again by using the Biot-Savart law.

c) Compute the unknown vortex sheet strength on the boundaries.

d) Now treat the particles on the boundaries as ordinary particles and use the Biot-Savart law to compute $\xi_{i}$ and $\nabla\xi_{i}$ at each particle inside the domain.

4) Remove any particles that might have escaped the fluid domain. This should only rarely happen if the surface triangulation is sufficiently refined.

5) Repeat steps 3 and 4 until the requested termination time is reached.
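The core of step 3, advancing particles with velocities induced via the Biot-Savart law, can be sketched in a boundary-free 2D setting. This is our deliberately simplified illustration (regularized kernel, no boundary particles, no viscous term), not the full quaternionic scheme:

```python
import numpy as np

def biot_savart(pos, gamma, targets, delta=0.1):
    # Velocity induced at `targets` by 2D point vortices at `pos` with
    # circulations `gamma`, using a regularized (desingularized) kernel:
    # u(x) = sum_j gamma_j * (-(x-x_j)_y, (x-x_j)_x) / (2 pi (|x-x_j|^2 + delta^2))
    dx = targets[:, None, :] - pos[None, :, :]
    r2 = np.sum(dx ** 2, axis=-1) + delta ** 2
    kernel = np.stack([-dx[..., 1], dx[..., 0]], axis=-1) / (2 * np.pi * r2[..., None])
    return np.sum(gamma[None, :, None] * kernel, axis=1)

def advance(pos, gamma, dt, nsteps):
    # Step 3 of the method: advance particle positions in time with a
    # second-order (midpoint) Runge-Kutta scheme
    for _ in range(nsteps):
        k1 = biot_savart(pos, gamma, pos)
        mid = pos + 0.5 * dt * k1
        k2 = biot_savart(mid, gamma, mid)
        pos = pos + dt * k2
    return pos
```

With the regularized kernel a single vortex induces no self-velocity, and a pair of equal-strength vortices rotates about its fixed centroid, which gives simple sanity checks for the implementation.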