Joint Fudan - RICAM Seminar on Inverse Problems
Two-Point Gradient (TPG) methods are a class of gradient-based iterative regularization methods for solving nonlinear ill-posed problems, inspired by Landweber iteration and Nesterov's acceleration scheme. Simple to implement and numerically efficient in practice, these methods have the potential to become useful alternatives to second-order iterative schemes. In this talk, we present our initial convergence analysis and numerical experience with TPG methods, and give a short overview of subsequent work by other researchers that it has inspired.
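The basic structure of a TPG iteration can be sketched as follows: a Nesterov-type extrapolation between the two most recent iterates, followed by a Landweber (gradient) step taken at the extrapolated point. This is a minimal illustration only; the operator, step size, extrapolation weight, and stopping rule below are assumptions for the sketch, not the talk's exact scheme.

```python
import numpy as np

def tpg(F, dF_adj, y, x0, mu=0.2, n_iter=200):
    """Two-point gradient sketch for F(x) = y:
       z_k     = x_k + lam_k * (x_k - x_{k-1})       (Nesterov-type combination)
       x_{k+1} = z_k - mu * F'(z_k)^* (F(z_k) - y)   (Landweber step at z_k)
    """
    x_prev, x = x0.copy(), x0.copy()
    for k in range(n_iter):
        lam = k / (k + 3.0)           # common illustrative choice of extrapolation weight
        z = x + lam * (x - x_prev)
        x_prev, x = x, z - mu * dF_adj(z, F(z) - y)
    return x

# Toy linear example: F(x) = A x, with adjoint A^T r
A = np.array([[2.0, 0.0], [0.0, 0.5]])
y = A @ np.array([1.0, -1.0])
x = tpg(lambda v: A @ v, lambda v, r: A.T @ r, y, np.zeros(2))
```

For genuinely nonlinear, noisy problems one would stop the iteration early via a discrepancy principle rather than run a fixed number of steps.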
The novel coronavirus pneumonia (COVID-19) outbreak is a major global event. A question of broad concern is whether we can establish mathematical models that describe the characteristics of the epidemic's spread and evaluate the effectiveness of the control measures that have been taken. On January 26, 2020, our team began research on modeling the COVID-19 epidemic. We propose a linear nonlocal dynamical system model with time delay to describe the development of the COVID-19 epidemic. Based on public data published by the government, information such as the transmission rate and isolation rate, which may not be directly observable during the epidemic's development, is obtained through an inversion method; on that basis, a "reasonable" prediction of the development of the epidemic is made, providing data support for government decision-making and the various needs of the public.
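To convey the flavor of a linear delay dynamical system in this setting, here is a minimal sketch: new cases at day t are driven by individuals infected tau days earlier, while an isolation rate removes active cases. This toy recursion and all parameter values are our own illustrative assumptions, not the authors' actual model or fitted rates.

```python
import numpy as np

def simulate(beta, kappa, tau, I0, days):
    """Toy linear delay model: I[t+1] = I[t] + beta * I[t - tau] - kappa * I[t],
    where beta is a transmission rate, kappa an isolation rate, tau the delay."""
    I = np.zeros(days)
    I[0] = I0
    for t in range(days - 1):
        delayed = I[t - tau] if t >= tau else I0   # constant history before day 0
        I[t + 1] = I[t] + beta * delayed - kappa * I[t]
    return I

# Illustrative parameters: transmission 0.3/day, isolation 0.1/day, 5-day delay
traj = simulate(beta=0.3, kappa=0.1, tau=5, I0=10.0, days=60)
```

In the inversion step described in the abstract, parameters such as beta and kappa would be recovered by fitting a model of this kind to the reported case data, after which the calibrated model is run forward to predict the epidemic's development.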
In this talk, we report some recent results on inverse problems associated with randomness. The first part focuses on continuous asymptotic regularization for statistical inverse problems in the presence of white noise, where infinite-dimensional stochastic integration must be treated carefully. The second part considers the convergence analysis of random projections of discrete inverse problems, and we briefly explain how the randomness is handled there.
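The second part's idea can be illustrated in its simplest sketch-and-solve form: compress a discrete linear system A x = b with a random matrix S and solve the smaller least-squares problem. The Gaussian sketch and all dimensions below are illustrative assumptions, not the talk's exact construction or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 200, 20, 60                 # tall system, sketched down to k rows
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                        # consistent (noise-free) data for the sketch

# Random projection: replace A x = b by the smaller system (S A) x = S b
S = rng.standard_normal((k, m)) / np.sqrt(k)   # Gaussian sketching matrix
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
```

For noisy or ill-conditioned problems the projected system would itself be regularized; the convergence analysis then has to account for the randomness of S, which is the point addressed in the talk.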
In this talk I will present some recent results on linear inverse problems under the premise that the forward operator is not at hand but is given only indirectly through input-output training pairs. We show that regularisation by projection and variational regularisation can be formulated using the training data alone, without making use of the forward operator. I will discuss convergence and stability of the regularised solutions. Moreover, we show, analytically and numerically, that regularisation by projection is indeed capable of learning linear operators, such as the Radon transform. This is joint work with Yury Korolev (University of Cambridge) and Otmar Scherzer (University of Vienna and RICAM).
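The core idea can be sketched in finite dimensions: given training pairs (x_i, y_i = A x_i) but no access to A, seek a reconstruction in the span of the training inputs by matching the corresponding outputs in a least-squares sense. This toy setup is our own illustration of the training-data-only formulation, not the paper's infinite-dimensional construction.

```python
import numpy as np

def project_solve(X_train, Y_train, y):
    """Given columns x_i of X_train and y_i = A x_i of Y_train (A unknown),
    find coefficients c minimizing ||Y_train @ c - y|| and return X_train @ c,
    i.e. the reconstruction in the span of the training inputs."""
    c, *_ = np.linalg.lstsq(Y_train, y, rcond=None)
    return X_train @ c

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))        # the forward operator; never seen by the method
X_train = rng.standard_normal((5, 5))  # training inputs as columns
Y_train = A @ X_train                  # observed training outputs
x_true = rng.standard_normal(5)
x_rec = project_solve(X_train, Y_train, A @ x_true)
```

Here the five training inputs span the whole space, so the reconstruction is exact; with fewer or noisy pairs one only recovers the projection onto the training span, which is where the convergence and stability questions of the talk arise.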
Please join my meeting from your computer, tablet or smartphone.
You can also dial in using your phone.
Austria: +43 7 2081 5337
Access Code: 244-187-445