Joint Fudan - RICAM Seminar on Inverse Problems
Numerical simulation of high-frequency wave fields is critical in fields such as electromagnetics and geophysics. In this work, we develop novel Hadamard integrators for self-adjoint wave equations in both the time and frequency domains in an inhomogeneous medium. Based on an asymptotic ansatz for the Green's function and on butterfly algorithms, the proposed methods construct high-frequency wave fields beyond caustics implicitly and naturally advance spatially overturning waves in time, with quasi-optimal computational complexity and memory usage. Numerical examples illustrate the accuracy and efficiency of the proposed methods.
Bioluminescence tomography (BLT), a form of optical imaging, involves reconstructing bioluminescence signals (an internal light source) from external measurements of Cauchy data. The method boasts numerous advantages, including a high signal-to-noise ratio, non-destructiveness, and cost-effectiveness. Potential applications span cancer diagnosis, drug discovery and development, as well as gene therapies. In the literature, BLT has been extensively studied using the diffusion approximation equation, with the focus on reconstructing the distribution of peak sources. However, without adequate a priori information, solution uniqueness cannot be guaranteed. In this talk, motivated by the uniqueness issue, we revisit the BLT problem and explore several theoretical results. For the numerical inversion, we introduce the coupled complex boundary method (CCBM), followed by an improved version. By incorporating an imaginary unit and a positive parameter, we make the choice of the regularization parameter non-critical. Our findings are supported by numerous problem-oriented numerical experiments.
Classical results for quadratic regularisation in Hilbert spaces show convergence rates with respect to the norm under the assumption that the true solution satisfies a "source condition", in the sense that it is an element of the range of some fractional power of the operator to be inverted. These results were later generalised to non-quadratic regularisation terms, though with two significant changes: First, the norm had to be replaced by the Bregman distance of the regularisation term, in particular to be able to deal with terms that fail to be strictly convex. Second, although the first results were derived using basic source conditions, these were later replaced by "variational inequalities" in order to obtain sharper and more general estimates, and also to include Banach space settings.
In this talk we return to the study of source conditions for non-quadratic problems in Hilbert spaces. We will derive convergence rates with respect to the Bregman distance under the assumptions of uniform convexity of either the regularisation term or its convex conjugate, together with a Hölder-type source condition. In the quadratic case, both the conditions and the convergence rates turn out to be identical to the classical results.
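For orientation, the two central objects can be written as follows (the notation is mine and only schematic; the precise assumptions are as in the talk). The Bregman distance of the regularisation term $R$ at the exact solution $u^\dagger$ and a Hölder-type source condition for a linear forward operator $A$ read
\[
  D_R^{\xi}(u,u^\dagger) = R(u) - R(u^\dagger) - \langle \xi,\, u - u^\dagger\rangle,
  \qquad \xi \in \partial R(u^\dagger),
\]
\[
  u^\dagger \in \mathcal{R}\bigl((A^*A)^{\nu}\bigr), \qquad \nu > 0.
\]
In the quadratic case $R = \tfrac12\|\cdot\|^2$, the Bregman distance reduces to $\tfrac12\|u-u^\dagger\|^2$, which is why rates in the Bregman distance then coincide with the classical norm rates.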
Inverse techniques aim to reconstruct unknown quantities from observed data. Traditional variational methods rely on iterative approaches. While these data-agnostic methods have achieved significant success, they face computational challenges such as ill-posedness, uncertainty quantification, and scalability. In contrast, modern data-driven approaches directly learn reconstructions from training data, offering computational advantages but potentially compromising physical properties and interpretability. This talk will explore recent advancements that combine the strengths of both approaches, striving for a balance between computational efficiency and interpretability.
The principle of developing structure-conforming numerical algorithms is widespread in scientific computing. In this work, following this principle, we propose an operator learning method for solving a class of geometric inverse problems. The architecture is inspired by direct sampling methods and is also closely related to convolutional networks and the Transformer, the latter being a state-of-the-art architecture for many scientific computing tasks. To obtain the optimal hyperparameters of this method, we propose an FEM and OpL joint-training framework and a Learning-Automated FEM package. Numerical examples demonstrate that the proposed architecture outperforms many existing operator learning methods in the literature.
Runge approximation is significant both for inverse problems and for learning-based numerical methods. In this presentation, we first delve into the qualitative and quantitative Runge approximation properties for the Lamé system under optimal regularity conditions on the Lamé coefficients. This relies on the stability of the Cauchy problem for the Lamé system. In contrast to the qualitative Runge approximation, the quantitative Runge approximation requires a rather strong condition on the Lamé coefficients to guarantee $L^p$-integrability of the gradient of the solution. We propose a learning-based numerical method and apply the quantitative Runge approximation to derive the error analysis. Numerical examples verify the theoretical results.
In this talk, we present new results on the degree and on the interval of ill-posedness from a series of recently published studies. We distinguish ill-posedness of type I and type II in the sense of Nashed for linear inverse problems in Hilbert spaces with compact and non-compact forward operators. Since the strength of ill-posedness can be simply characterized by the decay rates of singular values only for compact operators, we also discuss concepts for handling the non-compact case. In particular, the potential of non-compact operators, in composition with compact ones, to change the degree of ill-posedness is of interest. The Hausdorff moment operator and the Cesàro operator, as non-compact examples, provide considerable insight in combination with compact integration operators. Moreover, the specifics of the singular value decay rates of compact operators in higher dimensions are considered by means of the multivariate integration operator, which occurs, for example, in copula density reconstruction.
The talk partially presents results of joint work with Stefan Kindermann (Linz), Frank Werner (Würzburg), Robert Plato (Siegen), Hans-Jürgen Fischer (Dresden), Peter Mathé (Berlin) and Yu Deng (Chemnitz). Research is supported by the Deutsche Forschungsgemeinschaft (DFG) under grant HO 1454/13-1.
In ground-based astronomy, images of objects captured by ground-based telescopes often suffer from atmospheric turbulence, leading to degraded image quality. To address this challenge, adaptive optics techniques are commonly employed to correct distortions in the wavefront. In this presentation, we introduce an algorithm designed for phase super-resolution using a sequence of measurements of the wavefront gradient obtained from wavefront sensors in adaptive optics systems. Our approach considers tomography and incorporates the Taylor frozen flow hypothesis, leveraging knowledge of the wind velocities within the standard ESO 3-layer atmospheric model. Our research emphasizes understanding turbulence characteristics in the $H^{11/6}$ space, based on Kolmogorov's theorem. We propose the H2L2 model, incorporating the $H^2$ norm as a regularization term, and provide an analysis of approximating $H^{11/6}$ by $H^2$ in the context of ground-based astronomy. Numerical simulations and visualizations demonstrate the effectiveness of our approach.
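Schematically, and with notation that is mine rather than the talk's, the H2L2 model can be thought of as a Tikhonov-type problem
\[
  \min_{\varphi}\; \|G\varphi - s\|_{L^2}^2 + \alpha\,\|\varphi\|_{H^2}^2,
\]
where $s$ denotes the measured wavefront gradients, $G$ the tomographic forward operator, and the $H^2$ penalty acts as a computationally convenient surrogate for the natural $H^{11/6}$ regularity of Kolmogorov turbulence.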
An inpainting method based on mechanism learning is proposed from the perspective of inverse problems. The underlying data mechanisms, characterized by linear differential equations, are identified from the data on the known area and then exploited to infer the data on the missing part. Special attention is paid to the incorporation of historical or prior information as a higher-order mechanism. Numerical examples show the effectiveness, robustness, and flexibility of the method, which performs particularly well on mechanistic/scientific data. Additionally, in this talk, I will also introduce the application of the proposed inpainting method to data compression. This is a joint work with Prof. Jin Cheng (Fudan University) and Dr. Yu Chen (Shanghai University of Finance and Economics).
In this talk, I will present optimisation algorithms for imaging inverse problems in non-standard Banach spaces. The talk is divided into two parts: in the former, the setting of Lebesgue spaces with a variable exponent map $L^{p(\cdot)}$ is considered to improve the adaptivity of the solution with respect to standard Hilbert-space reconstructions; in the latter, a modelling in the space of Radon measures is used to avoid the biases observed in sparse regularisation methods due to discretisation.
In more detail, the first part explores both smooth and non-smooth optimisation algorithms in reflexive $L^{p(\cdot)}$ spaces, which are Banach spaces endowed
with the so-called Luxemburg norm. As a first result, we provide an expression of the duality maps in those spaces, which are an essential ingredient for the
design of effective iterative algorithms.
To overcome the non-separability of the underlying norm and the consequent heavy computation times, we then study the class of modular functionals, which directly extend the (non-homogeneous) $p$-th power of the $L^p$-norm to the general $L^{p(\cdot)}$ setting. In terms of these modulars, we formulate handy analogues of the duality maps, which, thanks to their separability, are amenable to both smooth and non-smooth optimisation algorithms. We thus study modular-based gradient descent (in both a deterministic and a stochastic setting) and modular-based proximal gradient algorithms in $L^{p(\cdot)}$, and prove their convergence in function values. The spatial flexibility of such spaces proves particularly advantageous in addressing sparsity, edge preservation, and heterogeneous signal/noise statistics, while remaining efficient and stable from an optimisation perspective. We numerically validate this extensively on 1D/2D exemplar inverse problems (deconvolution, mixed denoising, CT reconstruction).
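For reference, the modular and the Luxemburg norm it induces on $L^{p(\cdot)}(\Omega)$ are given by
\[
  \rho(f) = \int_{\Omega} |f(x)|^{p(x)}\,\mathrm{d}x,
  \qquad
  \|f\|_{L^{p(\cdot)}} = \inf\{\lambda > 0 : \rho(f/\lambda) \le 1\},
\]
which makes the computational point explicit: the modular $\rho$ separates across points of $\Omega$, while the norm couples all points through the infimum over $\lambda$.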
The second part focuses on off-the-grid Poisson inverse problems formulated in the space of Radon measures. We consider a variational model which couples a Kullback-Leibler data term with the total variation regularisation of the desired measure (that is, a weighted sum of Diracs), together with a non-negativity constraint. We study the optimality conditions, derive the corresponding dual problem, and use an improved version of the Sliding Frank-Wolfe algorithm to compute the numerical solution efficiently. To mitigate the dependence of the results on the choice of the regularisation parameter, a homotopy strategy is proposed for its automatic tuning: at each iteration, the algorithm checks whether an informed stopping criterion defined in terms of the noise level is satisfied, and updates the regularisation parameter accordingly. Several numerical experiments are reported on both simulated 2D and real 3D fluorescence microscopy data.
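In schematic form (again with my notation), the variational model of this second part reads
\[
  \min_{\mu \in \mathcal{M}^+(\Omega)}\; \mathrm{KL}\bigl(f,\, A\mu + b\bigr) + \alpha\,|\mu|(\Omega),
\]
where $\mathcal{M}^+(\Omega)$ encodes the non-negativity constraint, $A$ is the linear forward operator, $b$ a background intensity, $f$ the Poisson-distributed data, and $|\mu|(\Omega)$ the total variation norm of the measure $\mu$.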
In certain imaging methods the orientation of the object of interest is unknown and has to be recovered in order to
reconstruct the object itself. This is the case in optical microscopy of optically or acoustically trapped particles,
that undergo a continuous motion during the imaging process. As opposed to standard microscopic imaging, where the probe is fixated, this technique allows imaging in a more natural environment.
We consider two different models for recovering the rigid motion of a single particle: parallel beam tomography and
diffraction tomography. Based on the Fourier slice theorem and the Fourier diffraction theorem, we develop infinitesimal versions of the well-known common-line and common-circle methods, respectively. These methods assume a smooth motion over time
and allow us to calculate the angular velocity of the rotational motion.
In this talk, we place special emphasis on uniqueness of the reconstruction that is closely linked to asymmetry properties
of the object of interest.
Information reconstruction from local interior observations has wide applications in many indirect and remote measurement problems. Theoretically, such problems are justified by the unique continuation property for elliptic equations. However, the ill-posedness makes numerical computations difficult due to the lack of continuous dependence on the data. It is observed that if some bound is imposed on the solution, a conditional stability estimate holds. In this talk, we will discuss the conditional stability for unique continuation of harmonic functions along analytic sub-manifolds in three dimensions. A stable numerical algorithm can be constructed based on Tikhonov regularization in accordance with the conditional stability. An error estimate for the algorithm will be provided and demonstrated by numerical examples. The present estimate provides a way to quantitatively evaluate the numerical result, which further enables us to define a reliable subdomain of the reconstruction. This is a joint work with Prof. Jin Cheng (Fudan University).
Optical imaging uses visible or near-infrared light to interrogate internal properties of biological tissues based on endogenous (for example, oxygenated and deoxygenated haemoglobin) or exogenous (contrast agents) contrast. Several optical imaging modalities have been developed, and these techniques are of great interest due to the special contrast that arises from the physiological nature of light-absorbing molecules. One example of a tomographic imaging technique based on light is diffuse optical tomography, in which distributions of optical parameters inside an imaged target are estimated from light transport measurements made on its boundary. This is a highly ill-posed inverse problem, and the technique can be used to provide images with a unique contrast. However, it suffers from low resolution due to the diffuse behaviour of light in biological tissues. Utilising so-called coupled-physics imaging can overcome the limitations of diffuse imaging modalities. Perhaps the most developed of these coupled techniques is photoacoustic imaging, which combines the contrast of light with the resolution of ultrasound by utilising the photoacoustic effect. In this talk, I will discuss tomography using light and ultrasound with a focus on modelling and inverse problems. The principles of diffuse optical tomography and photoacoustic tomography, together with their applications, are reviewed. Furthermore, the problem of quantitative tomography is discussed.
We consider wave equations on Lorentzian manifolds whose coefficients depend on both the space and time variables and prove global Lipschitz stability for an inverse source problem. Global Lipschitz stability for hyperbolic equations has been proved starting from Euclidean wave equations with constant coefficients, and also for wave equations on Riemannian manifolds whose coefficients depend on the space variables. The basic tool is the Bukhgeim--Klibanov method, which is based on a Carleman estimate with boundary terms and an energy estimate, and which is compatible with analysis on manifolds. However, applying this method to Lorentzian wave equations requires a more detailed analysis than in the above cases, where the coefficients are independent of time. These analytical methods will be presented in this talk.
The first RICAM-Fudan Seminar will take place on November 9, 2023, 14:00-16:00 Vienna time. This seminar, in honour of the 70th birthday of Prof. Bernd Hofmann, will consist of a live broadcast of four talks given at the Annual Meeting of the German Inverse Problems Society. The speakers are
We are looking forward to meeting you either in Würzburg or online!
O. Scherzer, S. Lu and R. Ramlau
The scattering problems of marine acoustics have attracted great attention in recent years, since they have wide applications in the identification of submarines, mineral deposits, wreckage, reefs, and other submerged scatterers. In this talk, we will present the direct problems, the inverse problems and the direct reconstruction methods (DRMs) for the ocean waveguide. The direct reconstruction methods can be viewed as simple and efficient numerical techniques for providing reliable initial approximate locations of marine sources and scatterers to any existing, more refined and advanced, but computationally more demanding algorithms that recover the accurate physical profiles. Moreover, as corroborated by extensive numerical experiments, the DRMs are computationally efficient and highly robust against noise, and they can identify multiple sources and scatterers of different shapes and scales from a few incident fields and receivers.
Cardiac pulsations in the human brain have recently garnered interest due to their potential involvement in the pathogenesis of neurodegenerative diseases. The (pulse) wave, which describes the velocity of blood flow along an intracranial artery, consists of a forward (anterograde) and a backward (retrograde, reflected) part, but the measurement usually consists of a superposition of these components. In this talk, we provide a mathematical framework for the inverse problem of estimating the pulse wave velocity as well as the forward and backward components of the pulse wave separately, using MRI measurements on the middle cerebral artery. Additionally, we provide an analysis of the problem, which is necessary for the application of a solution method based on an alternating direction approach. The proposed method's applicability is demonstrated through numerical experiments using simulated data. This is a joint work with S. Hubmer, R. Ramlau (RICAM) and H. Voss (Cornell University).
We aim to find the time-dependent source term in the diffusion equation from the boundary measurement, which allows for the possibility of tracing back the source of pollutants in the environment. Based on the idea of dynamic complex geometrical optics (CGO) solutions, we analyze a variational formulation of the inverse source problem and prove the uniqueness and stability result. A two-step reconstruction algorithm is proposed, which first recovers the locations of the point sources, and then the Fourier components of the emission concentration functions are reconstructed. Numerical experiments on simulated data are conducted. The results demonstrate that our proposed two-step reconstruction algorithm can reliably reconstruct multiple point sources and accurately reconstruct the emission concentration functions. In addition, we decompose the algorithm into two parts: online and offline computation, with most of the work done offline. This paves the way towards real-time traceability of pollution. The proposed method can be used in many fields, particularly those related to water pollution, to identify the source of a contaminant in the environment and can be a valuable tool in protecting the environment.
Optical Coherence Tomography (OCT), an imaging modality based on the interferometric measurement of back-scattered light,
is known for its high-resolution images of biological tissues and its versatility in medical imaging. Especially in its
main field of application, ophthalmology, the continuously increasing interest in OCT has, aside from improving image quality, also driven the need for quantitative information, such as physical properties, which is considered complementary information
in medical diagnosis. In this talk, we discuss the inverse problem of quantifying the refractive index, an optical property
describing the change of wavelength between different materials, from OCT data.
The analysis of this inverse problem is based on a Gaussian beam forward model, which is considered the best match for the strongly focused laser light typically used in an OCT setup. Samples with a layered structure are considered, meaning that
the refractive index as a function of depth is well approximated by a piece-wise constant function. For the reconstruction,
a layer-by-layer method is presented where in every step the refractive index is obtained via discretized least-squares
minimization. The applicability of this method is then verified by reconstructing refractive indices of layered media from
both simulated and experimental OCT data.
This is joint work with Peter Elbau (University of Vienna) and Leonidas Mindrinos (Agricultural University of Athens).
Due to the rapid growth of data sizes in practical applications, stochastic optimization methods have received tremendous attention in recent years and have proved to be efficient in various applications of science and technology, including, in particular, machine learning. In this talk we propose a novel stochastic mirror descent method for solving linear ill-posed inverse problems. Convergence and convergence rates are provided. Several numerical examples validate the efficiency of the proposed algorithms.
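As a toy illustration of the idea (a minimal sketch of one standard variant, not the speaker's implementation), consider the linear system $Ax = y$ with the sparsity-promoting mirror map $\psi(x) = \alpha\|x\|_1 + \tfrac12\|x\|_2^2$, whose conjugate gradient is soft-thresholding; the iteration samples one datum per step:

import numpy as np

def soft(xi, alpha):
    # x = grad psi*(xi): componentwise soft-thresholding
    return np.sign(xi) * np.maximum(np.abs(xi) - alpha, 0.0)

def smd(A, y, alpha=1.0, step=1.0, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    xi = np.zeros(n)                            # dual (mirror) variable
    for _ in range(n_iter):
        i = rng.integers(m)                     # sample a single equation
        x = soft(xi, alpha)                     # primal iterate
        xi -= step * (A[i] @ x - y[i]) * A[i]   # stochastic dual update
    return soft(xi, alpha)

For ill-posed problems, the number of iterations then plays the role of the regularisation parameter (early stopping).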
The mathematical imaging problem of diffraction tomography is an inverse scattering technique used to find the material properties of an object. The object is exposed to a certain form of radiation and the scattered wave is recorded. In conventional diffraction tomography, the incident wave is assumed to be a monochromatic plane wave arriving from a fixed direction of propagation. However, this plane wave assumption is violated in all realistic illumination applications: the size of the emitting device is limited and therefore cannot produce plane waves. Besides, it is common to emit focused beams to achieve a better resolution in the far field. In this talk, I will present our recent results that allow diffraction tomography to be applied to these realistic illumination scenarios. We use a new forward model that incorporates individually generated incident fields, and based on it, a new reconstruction algorithm is developed.
Optical flow is the apparent motion in a sequence of images. Its estimation is a classical problem in computer vision with a wide variety of applications such as robot navigation, video compression, surveillance or biomedical image analysis. This talk is about a Horn-Schunck-type spatiotemporal regularization functional for image sequences that have a non-Euclidean, time-varying image domain. Volumetric microscopy images depicting a live zebrafish embryo serve as both biological motivation and test data.
Consider an inverse problem of reconstructing a source term from boundary measurements for the wave equation. We propose a novel approach to recover the unknown source by measuring the wave fields after injecting small particles, which enjoy a high contrast, into the medium. For this purpose, we first derive the asymptotic expansion of the wave field, based on the time-domain Lippmann-Schwinger equation. The dominant term in the asymptotic expansion is expressed as an infinite series in terms of the eigenvalues of the Newtonian operator (for the pure Laplacian). Such expansions are useful under a certain scaling between the size of the particles and their contrast. Second, we observe that the relevant eigenvalues appearing in the expansion have non-zero averaged eigenfunctions. By introducing a Riesz basis, we reconstruct the wave field, generated before injecting the particles, at the centers of the particles. Finally, from these last fields, we reconstruct the source term. A significant advantage of our approach is that we only need measurements at a single point away from the support of the source. This is a joint work with Prof. Mourad Sini from RICAM.
In a recent effort to push modern tools from machine learning into several areas of science and engineering, deep learning based methods have emerged as a promising alternative to classical numerical schemes for solving problems in the computational sciences – example applications include fluid dynamics, computational finance, or computational chemistry.
This talk seeks to illuminate the limitations and opportunities of this approach, on both a mathematical and an empirical level. In the first part, we present computational hardness results for deep learning based algorithms and find that the computational hardness of a deep learning problem depends strongly on the specific norm in which the error is measured. In the second part, we present a deep learning based numerical algorithm that outperforms the previous state of the art in solving the multi-electron Schrödinger equation, one of the key challenges in computational chemistry.
We investigate the use of two-layer networks with the rectified power unit, the so-called $\mathrm{ReLU}^k$ activation function, for function and derivative approximation. By extending and calibrating the corresponding Barron space, we show that two-layer networks with the $\mathrm{ReLU}^k$ activation function are well suited to simultaneously approximate an unknown function and its derivatives. When the measurements are noisy, we propose a Tikhonov-type regularization method and provide error bounds when the regularization parameter is chosen appropriately. Several numerical examples support the efficiency of the proposed approach.
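Concretely, the networks in question have the form
\[
  f_\theta(x) = \sum_{i=1}^{n} a_i\,\sigma_k(w_i \cdot x + b_i),
  \qquad \sigma_k(t) = \max(t,0)^k,
\]
and the Tikhonov-type method fits noisy samples $(x_j, y_j^\delta)$ via
\[
  \min_\theta\; \frac{1}{m}\sum_{j=1}^{m} \bigl(f_\theta(x_j) - y_j^\delta\bigr)^2 + \gamma\,\Theta(\theta),
\]
where the penalty $\Theta$ is tied to the Barron norm of the network (the functional is stated here only schematically; the precise penalty is as in the talk).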
This talk concerns the source reconstruction problem for a transport problem through an absorbing and scattering medium from measurements of the boundary fluxes. I will focus on full boundary data in the scattering case and on partial data (measurements on an arc) in the non-scattering case, and explain how a combination of these two cases solves the reconstruction problem with partial data. The method, specific to two-dimensional domains, relies on Bukhgeim's theory of A-analytic maps, and it is joint work with A. Tamasan (UCF) and H. Fujiwara (Kyoto U).
We present and discuss numerical algorithms for the solution of dynamic inverse problems in which, for each time point, a time-dependent linear forward operator mapping the space of measures to a time-dependent Hilbert space has to be inverted. These problems are regularized with dynamic optimal-transport energies that are based on the continuity equation as well as convex functionals of Benamou-Brenier type [ESAIM:M2AN 54(6):2351-2382, 2020]. For the purpose of deriving properties of the solutions as well as numerical algorithms, we present sparsity results for general inverse problems that are connected with the extremal points of the Benamou-Brenier energy subject to the continuity equation. For the latter, it is proven that the extremal points are realized by point masses moving along curves with Sobolev regularity [Bull. LMS 53(5):1436-1452, 2021]. This result is employed in numerical optimization algorithms of generalized conditional gradient type. We present instances of this algorithm that are tailored towards dynamic inverse problems associated with point tracking. Finally, the application and numerical performance of the method is demonstrated for sparse dynamic superresolution [FOCM, 2022].
Joint work with Marcello Carioni, Silvio Fanzon and Francisco Romero
Image Quality Transfer (IQT) is a seminal machine-learning framework used to enhance low-quality clinical images with the more abundant information contained in high-quality images. However, imprecise biophysical modelling and ill-posedness constitute the intrinsic mathematical challenges when utilising IQT in practice. In this talk, we present Stochastic Image Quality Transfer (SIQT), which incorporates into IQT (i) a stochastic decimation simulator as the forward model, to capture uncertainty and variation in the contrast of low-field images corresponding to a particular high-field image, and (ii) an anisotropic U-Net variant specifically designed for the IQT inverse problem. We experimentally show the efficacy of SIQT in improving the contrast and resolution of low-field magnetic resonance images, as well as in quantifying uncertainty to address the reliability of IQT.
In Extremely Large Telescopes the impact of the turbulent atmosphere is corrected by different Adaptive Optics (AO) systems. Even
though AO correction is used, the quality of astronomical images is still degraded. The resulting blur can be described by a convolution
of the original image with the time and field dependent point spread function (PSF).
We propose to use a super-resolution approach to obtain more detailed information on the atmospheric turbulence above the telescope,
allowing for much better PSF reconstruction in guide star direction(s) when using dedicated algorithms. Combined with fast methods
for atmospheric tomography, precise PSFs for many directions of view can be reconstructed.
The obtained PSFs can be used by astronomers to assess the image quality and for further image improvement, e.g., using a blind
deconvolution scheme.
Our results in simulation and using on-sky data from the LBT suggest a good qualitative performance and reasonable computational effort.
On a Riemannian manifold with boundary, the X-ray transform integrates a function or a tensor field along all geodesics through the manifold. The reconstruction of the integrand of interest from its X-ray transform is the basis of important inverse problems with applications to seismology and medical imaging.
The inversion of the X-ray transform is often done by inverting the normal operator (the composition of the X-ray transform and its adjoint, the "backprojection" operator). The inversion problem includes the design of appropriate function spaces in which to formulate forward and backward mapping properties of the X-ray transform, the backprojection operator, and their composites. Such spaces need to incorporate boundary behavior, and include Fréchet spaces of 'polyhomogeneous conormal' type, or non-standard Sobolev scales (e.g., transmission spaces à la Hörmander, or spaces modeled after degenerate elliptic operators of Kimura type).
In this talk, I will survey recent results attempting to shed additional light on the (forward and backward) mapping properties of the X-ray transform and its normal operator(s) on convex, non-trapping manifolds. I will discuss recent joint works with Gabriel Paternain and Richard Nickl; Rafe Mazzeo; Rohit Mishra and Joey Zou; Joey Zou.
We study an inverse problem for the wave equation, concerned with estimating the wave speed from data gathered by an array of sources and receivers that emit probing signals and measure the resulting waves. The typical mathematical formulation of velocity estimation is a nonlinear least squares minimization of the data misfit, over a search velocity space. There are two main impediments to this approach, which manifest as multiple local minima of the objective function: The nonlinearity of the mapping from the velocity to the data, which accounts for multiple scattering effects, and poor knowledge of the kinematics (smooth part of the wave speed) which causes cycle-skipping. We show that the nonlinearity can be mitigated using a data driven estimate of the internal wave field. This leads to improved performance of the inversion for a reasonable initial guess of the kinematics.
The inverse Schrödinger potential problem concerns the recovery of the potential function in the Schrödinger equation in a bounded domain through the Dirichlet-to-Neumann (DtN) map. In this talk, we introduce the linearized local DtN map in the partial-data setting and prove a stability estimate with explicit dependence on the wavenumber. This is an increasing stability result, in the sense that the logarithmically stable term decays as the wavenumber increases. Furthermore, a heuristic DNN algorithm based on a regularization scheme is applied, and the stability improvement with respect to the wavenumber is observed. It is a joint work with Shuai Lu (Fudan) and Boxi Xu (SUFE).
In this work, we consider an elastic diffraction tomography problem for propagating and evanescent waves. More precisely, we are interested in reconstructing, quantitatively, the elastic properties (i.e., mass density and elastic Lamé parameters) of a weakly scattering object embedded in a full-space. First, the elastic inverse scattering problem under consideration is linearized using the first-order Born approximation. Then, the Fourier diffraction theorem is proved in the distributional sense for transmission and reflection acquisitions. Wave mode separation is performed using the properties of the P- and S-waves and specific filters defined by the propagation vectors. A new multi-parameter inversion process is developed in the Fourier domain. Backpropagation formulae are established for the different modes (PP, PS, SP, SS) with measurements in a rotated observation space. Different coverages of k-space are obtained in terms of angular diversity by varying the illumination direction at a fixed frequency. This is a joint work with Otmar Scherzer.
Ensemble Kalman inversion (EKI) is a technique for the numerical solution of inverse problems. A great advantage of EKI's ensemble approach is that derivatives are not required in its implementation. Theoretically speaking, however, EKI's ensemble size needs to surpass the dimension of the problem. This is because of EKI's "subspace property", i.e., that the EKI solution is a linear combination of the initial ensemble it starts off with. We show that the ensemble can break out of this initial subspace when "localization" is applied. In essence, localization enforces an assumed correlation structure onto the problem, and is heavily used in ensemble Kalman filtering and data assimilation. We describe and analyze how to apply localization to EKI, and how localization helps the EKI ensemble break out of the initial subspace. Specifically, we show that the localized EKI (LEKI) ensemble will collapse to a single point (as intended) and that the LEKI ensemble mean will converge to the global optimum at a sublinear rate. Under strict assumptions on the localization procedure and observation process, we further show that the data misfit decays uniformly. We illustrate our ideas and theoretical developments with numerical examples on simplified toy problems, a Lorenz model, and an inversion of electromagnetic data, where some of our mathematical assumptions may only be approximately valid.
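A toy sketch of one localized EKI step may help fix ideas (my own illustration under simplifying assumptions, not the speakers' code): the empirical cross-covariance is Schur-multiplied entrywise with a localization matrix L, which is exactly what lets the ensemble leave the span of its initial members.

import numpy as np

def leki_step(X, G, y, Gamma, L, seed=0):
    # X: (n, J) parameter ensemble; G: forward map R^n -> R^m;
    # y: (m,) data; Gamma: (m, m) noise covariance; L: (n, m) localization weights.
    rng = np.random.default_rng(seed)
    GX = np.stack([G(x) for x in X.T], axis=1)     # (m, J) predicted data
    dX = X - X.mean(axis=1, keepdims=True)         # parameter anomalies
    dG = GX - GX.mean(axis=1, keepdims=True)       # data anomalies
    J = X.shape[1]
    C_ug = L * (dX @ dG.T) / (J - 1)               # localized cross-covariance
    C_gg = (dG @ dG.T) / (J - 1)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), Gamma, J).T
    return X + C_ug @ np.linalg.solve(C_gg + Gamma, Y - GX)

Without localization (L a matrix of ones), the update increments lie in the column span of dX, which is precisely the subspace property.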
Advanced Adaptive Optics (AO) instruments have applications in ophthalmic imaging, Free-Space Optical Communications (FSOC)
and the future generation of Extremely Large Telescopes (ELTs). These AO systems are designed to perform real-time corrections
of dynamic wavefront aberrations. Many current and upcoming AO systems include nonlinear Fourier-type Wavefront Sensors (WFSs)
in their design. Wavefront reconstruction for AO systems with Fourier-type WFSs is an inverse problem.
This talk looks at overcoming nonlinear wavefront sensing regimes by introducing nonlinear, iterative algorithms for
Fourier-type wavefront reconstruction. The underlying mathematical theory for modeling Fourier-type WFSs is provided, along
with how these models can be used to perform nonlinear wavefront reconstruction. A significant advantage of the analysis presented
is its generalised applicability to any Fourier-type sensor. The only input required is the mathematical expression for the optical
element transfer function.
The generalised and full mathematical model of Fourier-type WFSs is introduced in a Sobolev space setting. The generalised study is
then expanded to solve the inverse problem of wavefront reconstruction for all Fourier-type WFSs. The developed theory is verified
using a numerical example for an ELT-scale instrument.
Dataset shift is a common issue in machine learning: medical diagnostic systems should be applicable to data from physically different patient populations; industrial quality inspection systems should be accurate for new products; self-driving cars should work under different geographical environments and weather conditions. However, the corresponding learning setups violate the classical assumption of statistical learning theory that the source (training) and target (test) distributions are the same. Even though various methods for correcting dataset shift exist, they are often purely heuristically driven or limited in their assumptions. This presentation deals with the analysis and development of principled methods for correcting dataset shift in learning algorithms.
In this talk, we show the stability of the inverse source problem for the three-dimensional Helmholtz equation in an inhomogeneous background medium. The stability estimate consists of the Lipschitz-type data discrepancy and the high-frequency tail of the source function, where the latter decreases as the upper bound of the frequency increases. The analysis employs scattering theory to obtain the domain of holomorphy and an upper bound for the resolvent of the elliptic operator.
Recently, mapping a signal/image into a low-rank Hankel/Toeplitz matrix has become an emerging alternative to traditional sparse regularization, due to its ability to alleviate the basis mismatch between the true support in the continuous domain and the discrete grid. In this talk, we introduce a novel interpretation of the structured low-rank matrix framework for image restoration. We observe that the SVD of a low-rank Hankel matrix corresponds to a tight wavelet frame system which can represent the image with sparse coefficients. Based on this observation, we introduce two image restoration models: a balanced approach based on the data-driven tight frame, and a two-stage approach based on the analysis approach.
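The observation is easy to reproduce numerically; here is a small self-contained sketch (mine, not the speaker's code): the singular vectors of the Hankel matrix of a signal act as data-driven filters, and for structured signals the matrix is numerically low-rank.

import numpy as np

def hankel(f, p):
    # Hankel matrix with window length p: H[i, j] = f[i + j]
    return np.array([f[i:i + p] for i in range(len(f) - p + 1)])

f = np.cos(2 * np.pi * 0.05 * np.arange(128))   # a single sinusoid
H = hankel(f, 16)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
print(np.round(s[:4], 3))   # only two significant singular values: rank ~ 2

The rows of Vt associated with negligible singular values annihilate the signal, which is the mechanism behind the sparse (tight-frame) representation.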
Choosing models from a hypothesis space is a frequent task in approximation theory and inverse problems. Cross-validation is a
classical tool in the learner’s repertoire to compare the goodness of fit for different reconstruction models. Much work has
been dedicated to computing this quantity in a fast manner, but tackling its theoretical properties turns out to be difficult. So far, most optimality results are stated in an asymptotic fashion. In this talk we propose a concentration inequality for the difference between the cross-validation score and the risk functional with respect to the squared error. This gives a preasymptotic bound which holds with high probability. For the assumptions, we rely on bounds on the uniform error of the model, which allow
for a broadly applicable framework.
We support our claims by applying this machinery to Shepard's model, where we are able to determine precise constants of the
concentration inequality. Numerical experiments in combination with fast algorithms indicate the applicability of our results.
Inverse medium problems involve the reconstruction of a spatially varying unknown medium from available
observations of a scattered wave field.
Typically, they are formulated as PDE-constrained optimization problems and solved by an inexact Newton-like
iteration.
Clearly, standard grid-based representations are very general but often too expensive due to the resulting
high-dimensional search space. Adaptive spectral inversion (ASI) instead expands the unknown medium in a
basis of eigenfunctions of a judicious elliptic operator, which itself depends on the current iterate.
Thus, instead of a grid-based discrete representation combined with standard Tikhonov regularization, the
unknown medium is projected to a small finite-dimensional subspace, which is iteratively adapted using dynamic
thresholding.
Rigorous error estimates of the adaptive spectral decomposition (ASD) are proved for an arbitrary piecewise
constant medium.
I will discuss some recent developments providing theoretical guarantees for Bayesian inversion with Gaussian
priors for a class of nonlinear inverse problems. These developments will be illustrated in a specific inverse
problem: the non-abelian X-ray transform. This problem underpins new experiments designed to measure magnetic
fields inside materials by shooting them with neutron beams from different directions, like in a CT scan.
The theoretical guarantees will be of two types: first I will discuss consistency and then a suitable Bernstein
von-Mises theorem.
This is joint work with F. Monard and R. Nickl.
In this talk, our recent progress will be discussed on the inverse random source and potential problems for the wave equations. I will present a new model for the random source and potential. The well-posedness and regularity of the solutions will be addressed for the direct problems. Some uniqueness results will be shown for the inverse problems. I will also highlight some ongoing and future projects in the inverse random medium problems.
In this talk, we propose a randomized low-rank approach to approximate the ensembles at each assimilation step of the Ensemble Kalman Filter (EnKF). The advantages of this randomized low-rank method are twofold. First, the number of forward model simulations at each time step is reduced from $N_{en}$ to $r_k$, where $N_{en}$ is the ensemble size of the standard EnKF, $r_k$ is the reduced rank, and $r_k \ll N_{en}$. Second, our randomized low-rank approach further reduces the computational cost of the EnKF update step. To further reduce the computational complexity, we make use of a multi-fidelity framework in which coarse-fidelity ensembles help to reduce the rank of the high-fidelity ensembles. Numerical experiments show that the performance of our multilevel randomized low-rank EnKF is comparable with that of an EnKF of ensemble size $N_{en}$, while the computational cost is reduced dramatically.
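The compression ingredient can be illustrated with the standard randomized range finder of Halko, Martinsson and Tropp, applied to the matrix of ensemble anomalies; the code below is only my sketch of this generic building block, not the authors' implementation.

import numpy as np

def randomized_basis(A, r, oversample=5, seed=0):
    # Approximate orthonormal basis for the range of A, at target rank r.
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], r + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                             # orthonormalize the sketch
    return Q[:, :r]

Propagating only $r_k$ basis vectors, rather than all $N_{en}$ members, through the forward model is the source of the claimed reduction in the number of simulations.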
Astronomical imaging with ground-based telescopes suffers from quickly varying optical distortions, which cause blurring and loss of contrast. Since the contrast and sharpness of images are essential for astronomical observations, a method that compensates for the aberrations introduced by the Earth's atmosphere is required. This technique is called Adaptive Optics (AO). It uses a combination of wavefront sensors, which measure the deformations of wavefronts emitted by guide stars, and deformable mirrors to correct for them. AO systems that achieve a good correction over a large field involve a tomographic estimation of the 3D atmospheric wavefront disturbance. Mathematically, the reconstruction of turbulent layers in the atmosphere is severely ill-posed, which limits the achievable solution accuracy. Moreover, the reconstruction has to be performed in real time at frame rates of a few hundred to a thousand Hertz. This leads to a computational challenge, especially for the AO systems of future Extremely Large Telescopes (ELTs). In this talk, we present a conjugate gradient based solver for atmospheric tomography called the augmented Finite Element Wavelet Hybrid Algorithm (augmented FEWHA). Due to a dual-domain discretization, the algorithm allows a matrix-free representation of all operators involved. This leads to a significant reduction in the computational load and memory consumption. Moreover, the method is highly parallelizable. A crucial indicator for the run-time of iterative solvers is the number of iterations. Our algorithm utilizes an augmented Krylov subspace method in order to reduce the number of CG iterations. We analyze the performance of the solver in terms of quality and run-time via numerical simulations for ELT-sized test configurations.
The approximation of functions with shallow neural networks is a classical topic in machine learning and approximation theory. For single-layer neural networks, the approximating function takes the form of a linear combination of evaluations of a given activation function on affine linear functions (neurons) of the variable.
In this talk I will present the joint work with Otmar Scherzer and Cong Shi, where we study neural network functions where the linear neurons are replaced by quadratic ones.
First, I will describe that a shallow quadratic neural network is competitive with a deep affine linear neural network structure, by providing convergence rate results based on the theory of wavelets and statistical learning. Then, I will describe numerical experiments that support these theoretical results.
Conductivity imaging represents one of the most important tasks in medical imaging. In this talk, we discuss a neural network based technique for imaging the conductivity from the magnitude of the internal current density. This is achieved by formulating the problem as the relaxed weighted least-gradient problem and then approximating its minimizer by standard feedforward neural networks. We derive bounds on two components of the generalization error, i.e., the approximation error and the statistical error, explicitly in terms of properties of the neural networks (depth, total number of parameters, and the bound on the network parameters). We illustrate the performance and distinct features of the proposed approach on several numerical experiments.
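A common formulation in this line of work (stated here only schematically) is the weighted least-gradient problem
\[
  \min_{u\,:\,u|_{\partial\Omega} = g}\; \int_{\Omega} a\,|\nabla u|\,\mathrm{d}x,
  \qquad a = |J|,
\]
where $a$ is the measured magnitude of the internal current density; since $|J| = \sigma|\nabla u|$, the conductivity is recovered a posteriori as $\sigma = a/|\nabla u|$. In the talk, the minimizer of a relaxed version of this functional is approximated by feedforward neural networks.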
Primal-dual splitting algorithms are widely adopted for composite optimization problems arising in imaging. In this talk, I will present stochastic extensions of some popular composite optimization algorithms in both convex and nonconvex settings, and their applications to image restoration. The first class of algorithms is designed for convex linearly composite problems by combining stochastic gradients with the so-called primal-dual fixed-point method (PDFP). As natural generalizations of proximal stochastic gradient type methods, we propose stochastic PDFP (SPDFP) and its variance-reduced version SVRG-PDFP, neither of which requires subproblem solving. Convergence and convergence rates are established for the proposed algorithms under standard assumptions. Numerical examples on the graphical Lasso, logistic regression and image reconstruction are provided to demonstrate the effectiveness of the algorithms. In particular, we observe that for large-scale image reconstruction problems, SVRG-PDFP exhibits advantages in terms of accuracy and computation speed, especially when high-performance computing resources are relatively limited. The second class of algorithms is based on the alternating direction method of multipliers (ADMM) for nonconvex composite problems. In particular, we study ADMM combined with a class of variance-reduced gradient estimators and establish the global convergence of the sequence and its convergence rate under the Kurdyka-Łojasiewicz (KL) assumption. Moreover, we show that the popular SAGA and SARAH gradient estimators satisfy the variance reduction property. Finally, the efficiency of the algorithms is verified through statistical learning examples and $L^0$-based sparse regularization for 3D image reconstruction.
In this talk we report on recent work concerning tomographic imaging of objects undergoing irregular motion, the application in mind being imaging of microscopic biological particles that are moved using optical tweezers. As the wavelength of the imaging beam typically is comparable to the object size in such experiments, the wave nature of light should be taken into account meaning that classical projection tomography has limited applicability. Instead diffraction tomography within the Born approximation will be considered. Working in frequency space and assuming plane wave illumination, we introduce a Fourier diffraction theorem relating measurements of the scattered waves to the refractive index distribution of the object. Based on the Fourier diffraction theorem, reconstruction formulae covering a large class of rotations will be presented.
This talk will discuss some recent progress on the mathematical analysis and numerical computation of composite scattering and inverse scattering for rough surfaces and obstacles. Based on the boundary integral equation and variational methods, we prove the well-posedness of the composite scattering problem. For the inverse problem, we show that any two obstacles and unbounded rough surfaces are identical if they generate the same data. Furthermore, we study numerical methods for the direct and inverse problems and show some results for the inverse problems obtained by reverse time migration (RTM). The impact of different parameters on the inversion algorithm is also discussed. Finally, we will introduce our ongoing research work. It is a joint work with Gang Bao (Zhejiang University), Peijun Li (Purdue University), Huayan Liu (Zhejiang University), and Jue Wang (Harbin Engineering University).
In this work we propose a two-step method for the numerical solution of parabolic and hyperbolic Cauchy problems in two dimensions. The method can be applied to both direct and inverse problems. It combines a semi-discretization with respect to the time variable with a boundary integral equation method for the spatial variables. The time discretization results in a sequence of stationary elliptic problems, whose solutions we represent using a single-layer ansatz with unknown boundary density functions. We solve the discretized problem on the boundary with the collocation method, applying quadrature rules to handle the singularities. Numerical results are presented for the wave, Navier and heat equations. This is a joint work with R. Chapko (Ivan Franko University of Lviv, Ukraine) and B. T. Johansson (Linköping University, Sweden).
Wavefront sensors encode phase information of an incoming wavefront into intensity patterns measured by a camera. We consider Fourier based wavefront sensors which use Fourier filtering with an optical element located in the focal plane. These sensors are used in astronomical adaptive optics to correct for atmospheric turbulence which degrades the quality of observations from ground-based telescopes. They can also be utilised in retinal imaging for medical diagnostics, especially early detection of anomalies and diseases. In ophthalmic AO, the distortions of the laser beam are not caused by turbulence of the air, but mainly by the patient’s eye itself.
In this talk we investigate underlying mathematical models of Fourier based wavefront sensors which are,
e.g., variations of the Hilbert transform. We additionally present wavefront reconstruction algorithms
based on a thorough analysis of the nonlinear models like a singular value type reconstructor or iterative
methods.
As an example of a Fourier based wavefront sensor we particularly study the pyramid wavefront sensor
which is the baseline for future instruments of extremely large telescopes.
In this talk, we will discuss some recent progress on numerical algorithms for inverse spectral problems for the Sturm-Liouville, Euler-Bernoulli and damped wave operators. Instead of inverting the map from spectral data to unknown coefficients directly, we propose a novel method to reconstruct the coefficients by inverting a sequence of trace formulas, which clearly bridge the spectral and geometric information in terms of a series of nonlinear Fredholm integral equations. Numerical experiments are presented to verify the validity and effectiveness of the proposed numerical algorithm. The impact of different parameters involved in the algorithm is also discussed. This is a joint work with Gang Bao (Zhejiang U) and Jian Zhai (HKUST).
We consider whether minimizers for total variation regularization of linear inverse problems belong to $L^\infty$ even if the measured data does not. We present a simple proof of boundedness of the minimizer for fixed regularization parameter, and derive the existence of uniform bounds for small enough noise under a source condition and adequate a priori parameter choices. To show that such a result cannot be expected for every fidelity term and dimension we compute an explicit radial unbounded minimizer, which is accomplished by proving the equivalence of weighted one-dimensional denoising with a generalized taut string problem. This is a joint work with K. Bredies (Graz) and J.A. Iglesias (RICAM, Linz).
The Yang-Mills equations describe the fields of the electroweak and strong interactions of gauge bosons. We shall discuss the unique recovery of the fields of gauge bosons from local measurements. In terms of differential geometry, the fields of gauge bosons are modelled by connections on a principal G-bundle, the parallel transport of the Yang-Mills connection is viewed as a broken light ray transform, and the detection of gauge bosons amounts to the reconstruction of the connections via the light ray transform. This is a joint work with M. Lassas (Helsinki), L. Oksanen (Helsinki) and G. Paternain (Cambridge).
We propose and analyze a projected two-point gradient method for solving nonlinear inverse problems. The approach is based on the Bregman projection onto stripes whose width is controlled by both the noise level and the structure of the operator, while the two-point gradient step provides efficient acceleration. The method allows the use of $L^1$-like penalty terms, which is significant for sparsity reconstructions. We present a proof of the regularizing properties of the method, and some parameter identification examples illustrate its effectiveness. It is a joint work with Wei Wang (Jiaxing University).
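Schematically, and in my notation, one step of a two-point gradient method combines an extrapolation with a projected gradient step:
\[
  z_k = x_k + \lambda_k\,(x_k - x_{k-1}), \qquad
  x_{k+1} = P_k\bigl(z_k - \mu_k\,F'(z_k)^*(F(z_k) - y^\delta)\bigr),
\]
where $F$ denotes the forward operator, $y^\delta$ the noisy data, and $P_k$ the Bregman projection onto the stripe determined by the noise level and the structure of the operator; the extrapolation in $z_k$ provides the acceleration.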
Single molecule localization microscopy has the potential to resolve structural details of biological samples at the nanometer length scale. However, to fully exploit the resolution, it is crucial to account for the anisotropic emission characteristics of fluorescent dipole emitters. In the case of slight residual defocus, localization estimates may well be biased by tens of nanometers. We show that astigmatic imaging, in combination with information about the dipole orientation, allows one to extract the position of the dipole emitters without localization bias and down to a precision of 1 nm, thereby reaching the corresponding Cramér-Rao bound. The approach is showcased with simulated data for various dipole orientations and parameter settings realistic for real-life experiments.
In this talk we are going to discuss the problem of hyperparameter tuning in the context of learning from different domains, also known as domain adaptation. The domain adaptation scenario arises when one studies two input-output relationships governed by probabilistic laws with respect to different probability measures, and uses the data drawn from one of them to minimize the expected prediction risk over the other measure.
The problem of domain adaptation has been tackled by many approaches, and most domain adaptation algorithms depend on so-called hyperparameters that change the performance of the algorithm and need to be tuned. Usually, variations in algorithm performance can be attributed to just a few hyperparameters, such as a regularization parameter in kernel ridge regression, or the batch size and number of iterations in stochastic gradient descent training. In spite of its importance, the question of selecting these parameters has not been much studied in the context of domain adaptation. In this talk, we are going to shed light on this issue. In particular, we discuss how a regularization of the numerical differentiation problem of estimating the Radon-Nikodym derivative of two measures from their samples can be employed in hyperparameter tuning.
Theoretical results will be illustrated by application to stenosis detection in different types of arteries.
The presentation is based on the research performed within FFG COMET project S3AI in cooperation with Nguyen Duc Hoan (RICAM), Bernhard Moser (SCCH), Sergiy Pereverzyev Jr. (Medical University of Innsbruck), Werner Zellinger (SCCH).
We consider an inverse boundary value problem for a nonlinear model of elastic waves. We show that all the material parameters appearing in the equation can be uniquely determined from boundary measurements under certain geometric conditions. The proof is based on the construction of Gaussian beam solutions.
Dr. Jian Zhai is currently a postdoctoral fellow at the Institute for Advanced Study, the Hong Kong University of Science and Technology. He received his PhD in Computational and Applied Mathematics from Rice University in 2018. His research interests include inverse problems, partial differential equations, microlocal analysis and scientific computing.
This talk is about the heuristic (or data-driven) choice of the regularization parameter in the regularization theory of ill-posed problems. Here, heuristic means that the parameter is chosen independently of the knowledge of the noise level (or any other supplementary information).
Recently, a convergence theory for several heuristic parameter choice methods (for linear regularization) has been developed on the basis of so-called noise-restricted convergence analysis. Within this framework, one can circumvent the restrictions of the so-called Bakushinskii veto.
We outline the corresponding theory and present theoretical results for the most important
examples of heuristic parameter choice rules.
We furthermore discuss some recent results in this direction for convex, nonlinear Tikhonov
regularization, together with some open research questions.
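As a concrete example of such a rule, the classical quasi-optimality criterion chooses, from a geometric grid of regularization parameters, the one minimizing the distance between consecutive regularized solutions; a minimal sketch for linear Tikhonov regularization (illustrative parameter values):

```python
import numpy as np

def tikhonov(A, y, alpha):
    # Standard Tikhonov-regularized solution of A x = y.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def quasi_optimality(A, y, alpha_max=1.0, q=0.8, n_grid=40):
    # Heuristic (noise-level-free) choice: minimize the jump
    # ||x_{alpha_{k+1}} - x_{alpha_k}|| over a geometric grid
    # alpha_k = alpha_max * q**k.
    alphas = alpha_max * q ** np.arange(n_grid)
    xs = [tikhonov(A, y, a) for a in alphas]
    jumps = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(n_grid - 1)]
    k_star = int(np.argmin(jumps))
    return alphas[k_star], xs[k_star]
```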
Fast, high-accuracy algorithms for acoustic and electromagnetic scattering from axisymmetric objects are of great importance when modeling physical phenomena in optics, materials science (e.g. meta-materials), and many other fields of applied science. In this talk, we develop an FFT-accelerated separation of variables solver that can be used to efficiently invert integral equation formulations of Maxwell's equations for scattering from axisymmetric bodies. The solver is also extended to geometries with non-smooth generating curves and to scattering from large cavities. We then discuss applications of the solver to inverse scattering problems. In particular, based on multi-frequency data and a recursive linearization method, we are able to accurately recover the location and shape of an unknown scatterer.
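The key structural fact behind a separation-of-variables solver of this kind is that, for an axisymmetric scatterer, the integral operator commutes with rotations about the symmetry axis, so an FFT in the azimuthal angle block-diagonalizes it into independent modal systems. A schematic sketch, in which the modal matrices are placeholders for the actual discretized modal integral operators:

```python
import numpy as np

def solve_axisymmetric(modal_matrices, f_grid):
    # Schematic azimuthal separation of variables. f_grid holds boundary
    # data on an (n_curve, n_phi) grid; modal_matrices[m] is a placeholder
    # for the discretized integral operator of the m-th FFT mode.
    f_hat = np.fft.fft(f_grid, axis=1)          # FFT in the azimuthal angle
    u_hat = np.empty_like(f_hat)
    for m in range(f_grid.shape[1]):            # the modes decouple
        u_hat[:, m] = np.linalg.solve(modal_matrices[m], f_hat[:, m])
    return np.fft.ifft(u_hat, axis=1)           # back to physical angle
```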
Jun Lai received his B.S. degree in mathematics from Nanjing University, China, in 2008, and his Ph.D. degree in applied mathematics from Michigan State University, USA, in 2013. He was then a postdoctoral fellow at the Courant Institute of Mathematical Sciences at New York University and later became a Courant Instructor. In 2016, he joined Zhejiang University, where he is currently an assistant professor in the Department of Mathematical Sciences. His main research interests are computational electromagnetics and inverse problems, including scattering and inverse scattering, boundary integral equations, and the fast multipole method.
The total variation (of the gradient) is widely applied in the regularization of inverse problems. It is most useful when the true data is expected to be nearly piecewise constant, for example in the recovery of relatively simple images consisting of well-defined objects with limited texture, or in the identification of physical parameters expected to contain inclusions or discontinuities. A basic question for any regularization method is consistency in the low noise regime. For total variation regularization, basic compactness considerations yield convergence in $L^p$ norms, while adding a source condition involving the subgradient at the least-energy exact solution allows for convergence rates in the Bregman distance. However, these distances do not provide much information in the setting of nearly piecewise constant functions that motivates the use of the total variation in the first place.
A different, perhaps more adequate choice is convergence of the boundaries of level sets with respect to Hausdorff distance, which can be loosely interpreted as uniform convergence of the objects to be recovered. Such a result requires an adequate choice of (possibly Banach) spaces for the measurements, dual stability estimates to account for the noise, and uniform density estimates for quasi-minimizers of the perimeter. We present some recent results obtaining this type of convergence for regularization of linear inverse problems under the same type of source condition, and for denoising of simple data without source condition.
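To illustrate the geometric viewpoint numerically, one can denoise a piecewise constant image by total variation and compare level set boundaries of the result with those of the clean object; a minimal sketch using an off-the-shelf ROF-type solver (all parameter values arbitrary):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)

# Piecewise constant "object": a disk on a dark background, plus noise.
n = 128
yy, xx = np.mgrid[:n, :n]
clean = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)
noisy = clean + 0.3 * rng.standard_normal((n, n))

# ROF-type total variation denoising (regularization weight chosen by hand).
denoised = denoise_tv_chambolle(noisy, weight=0.2)

# Compare 1/2-level sets: Hausdorff-type convergence asks how far the
# recovered boundary can stray from the true one as the noise vanishes.
mismatch = np.logical_xor(denoised > 0.5, clean > 0.5).mean()
print(f"fraction of pixels with differing level set membership: {mismatch:.4f}")
```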
A Riemannian gradient descent algorithm and a truncated variant will be presented for solving systems of phaseless equations $|Ax|^2 = y$. The algorithms are developed by exploiting the inherent low-rank structure of the problem, based on the embedded manifold of rank-one positive semidefinite matrices. A theoretical recovery guarantee has been established for the truncated variant, showing that the algorithm achieves successful recovery when the number of equations is proportional to the number of unknowns. In addition, we will present a loss function without spurious local minima when the sampling complexity is optimal.
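For orientation, a plain (Euclidean, non-truncated) gradient descent for the smooth loss $f(x) = \frac{1}{4m}\sum_i (|a_i^\top x|^2 - y_i)^2$ with spectral initialization looks as follows; the Riemannian and truncated variants of the talk refine this template on the manifold of rank-one positive semidefinite matrices. A sketch for the real-valued case:

```python
import numpy as np

def phase_retrieval_gd(A, y, n_iter=500, mu=0.1):
    # Gradient descent on f(x) = 1/(4m) * sum((|Ax|^2 - y)^2), real case.
    m, n = A.shape
    # Spectral initialization: leading eigenvector of (1/m) sum y_i a_i a_i^T,
    # rescaled to the estimated signal norm sqrt(mean(y)).
    Y = (A.T * y) @ A / m
    x = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(np.mean(y))
    norm0_sq = max(np.mean(y), 1e-12)   # ~ ||x_true||^2, used to scale the step
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ ((Ax ** 2 - y) * Ax) / m
        x = x - (mu / norm0_sq) * grad
    return x

# With enough Gaussian measurements, the iterate should approach the true
# signal up to a global sign.
rng = np.random.default_rng(1)
n, m = 32, 256
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
x_hat = phase_retrieval_gd(A, (A @ x_true) ** 2)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print(f"relative error up to sign: {err / np.linalg.norm(x_true):.3e}")
```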
Ke Wei is currently a tenure-track professor at the School of Data Science, Fudan University. He obtained his DPhil degree from the University of Oxford, followed by three years of postdoctoral research at the Hong Kong University of Science and Technology and the University of California, Davis. His research interests are mainly in signal and image processing, mathematical data science, and nonconvex optimization.
Two-Point Gradient (TPG) methods are a class of gradient-based iterative regularization methods for solving nonlinear ill-posed problems, inspired by Landweber iteration and Nesterov's acceleration scheme. Simple to implement and numerically efficient in practice, these methods have the potential to become useful alternatives to second-order iterative schemes. In this talk, we present our initial convergence analysis and numerical experience with TPG methods, and give a short overview of further work by other researchers that it inspired.
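In their basic form (a sketch, with the combination parameters $\lambda_k$ and step sizes $\alpha_k$ left unspecified), TPG iterations for $F(x) = y$ with noisy data $y^\delta$ read
$$z_k = x_k^\delta + \lambda_k \,(x_k^\delta - x_{k-1}^\delta), \qquad x_{k+1}^\delta = z_k - \alpha_k\, F'(z_k)^* \big(F(z_k) - y^\delta\big),$$
i.e., a Landweber step taken from a Nesterov-type extrapolation of the last two iterates, combined with a discrepancy-type stopping rule.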
The novel coronavirus pneumonia (COVID-19) outbreak is a major global event. Whether we can establish mathematical models that describe the characteristics of the epidemic's spread and evaluate the effectiveness of the control measures taken is a question of broad concern. On January 26, 2020, our team began research on modeling the COVID-19 epidemic. We propose a linear nonlocal dynamical system model with time delay to describe the development of the epidemic. Based on public data published by the government, information such as the transmission rate and isolation rate, which may not be directly observable during the epidemic's development, is obtained through an inversion method; on this basis, a "reasonable" prediction of the development of the epidemic is made, providing data support for government decision-making and the various needs of the public.
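The talk's specific model is not reproduced here; as a generic illustration of a linear transmission model with time delay, consider a single delayed balance equation for the active infections $I(t)$ with transmission rate, isolation rate, and incubation delay as placeholder parameters, integrated by Euler's method:

```python
import numpy as np

# Placeholder parameters: transmission rate, isolation rate, incubation delay.
beta, kappa, tau = 0.35, 0.15, 5.0
dt, T = 0.1, 60.0
n_steps = int(T / dt)
lag = int(tau / dt)

# I[k] ~ active infections at time k*dt; constant history before t = 0.
I = np.empty(n_steps + 1)
I[0] = 1.0

for k in range(n_steps):
    I_delayed = I[k - lag] if k >= lag else I[0]   # nonlocal-in-time term
    # New infections are driven by the delayed state; isolation removes cases.
    I[k + 1] = I[k] + dt * (beta * I_delayed - kappa * I[k])

print(f"active infections at t = {T:.0f}: {I[-1]:.1f}")
```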
In this talk, we report some recent results on inverse problems associated with randomness. The first part focuses on continuous asymptotical regularization for statistical inverse problems in the presence of white noise, where infinite-dimensional stochastic integration must be treated carefully. The second part considers the convergence analysis of random projections of discrete inverse problems, and we briefly explain how to handle the randomness there.
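For context, in the deterministic linear setting, continuous asymptotical (Showalter) regularization evolves $x'(t) = A^*(y^\delta - A x(t))$ with $x(0) = 0$, the stopping time playing the role of the inverse regularization parameter; the white-noise setting of the talk replaces the data by a stochastic forcing and requires the stochastic integration mentioned above. A minimal deterministic sketch:

```python
import numpy as np

def showalter_flow(A, y_delta, t_final, dt=1e-3):
    # Explicit Euler discretization of x'(t) = A^T (y - A x), x(0) = 0.
    # The stopping time t_final acts as the inverse regularization
    # parameter; stability requires dt * ||A||_2^2 < 2.
    x = np.zeros(A.shape[1])
    for _ in range(int(t_final / dt)):
        x = x + dt * (A.T @ (y_delta - A @ x))
    return x
```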
In this talk I will present some recent results on linear inverse problems under the premise that the forward operator is not at hand but is given only indirectly through some input-output training pairs. We show that regularisation by projection and variational regularisation can be formulated using the training data only, without making use of the forward operator. I will provide some information regarding convergence and stability of the regularised solutions. Moreover, we show, analytically and numerically, that regularisation by projection is indeed capable of learning linear operators such as the Radon transform. This is joint work with Yury Korolev (University of Cambridge) and Otmar Scherzer (University of Vienna and RICAM).
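A minimal sketch of the operator-free idea, with the actual construction greatly simplified: given training pairs $(x_i, y_i)$ with $y_i = A x_i$, seek a reconstruction in the span of the $x_i$ by matching coefficients against the observed data, so that the forward operator itself is never formed.

```python
import numpy as np

def reconstruct_from_pairs(X_train, Y_train, y_obs):
    # Columns of X_train are training inputs x_i; columns of Y_train the
    # corresponding outputs y_i = A x_i; y_obs is the new measurement.
    # Ansatz x = X_train @ c implies A x = Y_train @ c, so fit c by least
    # squares on the data side and map back: A itself is never used.
    c, *_ = np.linalg.lstsq(Y_train, y_obs, rcond=None)
    return X_train @ c
```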