Many data processing techniques treat the given data as a union of its coefficients. Sparse transforms such as wavelets, curvelets, and shearlets map the data from one coefficient space into another. Inverse transforms used in MRI, CT, and inverse scattering map the measured data back onto a pixel image. Only in a second step is the obtained result analyzed to extract the desired information. Although the methods mentioned above perform excellently for their designed task, the overall process is still a two-step method. Moreover, the intermediate step often seems to artificially increase the problem size. Do we really need to solve the complete inverse problem when, in the end, only a small part contains the information of interest? For example, do we need to reconstruct the full velocity field of seismic waves when we are only interested in detecting subsurface material boundaries?
We think that instead of viewing data as a union of its coefficients, we should see data as a union of its information. As the amount of information required is often much smaller than the data size, this already gives implicit sparsity. In many cases the information is directly bound to an object contained in the data. For example, each car in a video recorded by a traffic camera carries information about the traffic status. A seismic wave in geophysical data carries information about the subsurface conditions. We want to give those physical objects a mathematical model such that the data can be mapped into the model space, where the information can be extracted directly. Some techniques already use this or a similar approach. For example, a neural network can be trained as a classifier to extract information directly out of the data. This technique requires a lot of training data and is not feasible for all applications. Singular value decomposition (SVD) or principal component analysis (PCA) can also be interpreted as such an object-oriented method. Applying an SVD to video data will return the video background as the largest singular vector, as long as the camera is not moving. However, the SVD struggles whenever objects in the video are moving. We present two extensions of the SVD that are designed to recover moving objects in the data (not only in video data, but also in other applications). The first approach is the so-called shifted rank-1 model, which allows object movement. The second approach, ORKA, extends this model by allowing the objects to also change shape and restricting their movement to a smooth path. (Joint work with Jianwei Ma.)
References:
[1] F. Boßmann, J. Ma, Enhanced image approximation using shifted rank-1 reconstruction. Inverse Problems and Imaging, 14 (2), 267-290, 2020.
[2] F. Boßmann, J. Ma, ORKA: Object reconstruction using a K-approximation graph, submitted, available on arXiv, 2022.
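The background-extraction idea mentioned in the abstract can be illustrated with a minimal sketch: for a static camera, stacking vectorized frames as columns of a matrix makes the background the dominant rank-1 component of the SVD. The data here is synthetic (a random background plus a single moving pixel standing in for an object), and all variable names are our own illustration, not code from the referenced papers.

```python
# Sketch: background extraction from a static-camera "video" via a
# rank-1 SVD approximation. Synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
h, w, n_frames = 8, 8, 20
background = rng.random((h, w))          # static scene

frames = []
for t in range(n_frames):
    frame = background.copy()
    # a small "object" moving across the scene
    frame[t % h, (2 * t) % w] += 1.0
    frames.append(frame.ravel())

# Stack vectorized frames as columns: shape (h*w, n_frames)
M = np.stack(frames, axis=1)

U, s, Vt = np.linalg.svd(M, full_matrices=False)
# Rank-1 term: largest singular value times outer product of the
# first left and right singular vectors -- the (near-)static part.
rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])
est_background = rank1.mean(axis=1).reshape(h, w)
```

Because the moving object occupies a different pixel in each frame, it is poorly captured by a single rank-1 term, which is exactly the limitation the shifted rank-1 model and ORKA are designed to overcome.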

We study the regularity of a conjugacy $H$ between a hyperbolic toral automorphism $A$ and its smooth perturbation $f$. We show that if $H$ is weakly differentiable then it is $C^{1+\text{Hölder}}$ and, if $A$ is also weakly irreducible, then $H$ is $C^\infty$. As a part of the proof, we establish results of independent interest on Hölder continuity of a measurable conjugacy between linear cocycles over a hyperbolic system. As a corollary, we improve the regularity of the conjugacy to $C^\infty$ in prior local rigidity results. This is joint work with B. Kalinin and V. Sadovskaya.

Modern machine learning has uncovered an interesting observation: large overparameterized models can achieve good generalization performance despite interpolating noisy training data. In this talk, we study high-dimensional linear models and show how interpolators can achieve fast statistical rates when their structural bias is moderate. More concretely, while minimum-$\ell_2$-norm interpolators cannot recover the signal in high dimensions, minimum-$\ell_1$-norm interpolators with a strong sparsity bias are much more sensitive to noise. In fact, we show that even though they are asymptotically consistent, minimum-$\ell_1$-norm interpolators converge at a logarithmic rate, much slower than the $O(1/n)$ rate of regularized estimators. In contrast, minimum-$\ell_p$-norm interpolators with $1<p<2$ can trade off these two competing trends to yield polynomial rates close to $O(1/n)$.
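The basic object of the abstract, a minimum-norm interpolator in an overparameterized linear model, can be sketched as follows for the $\ell_2$ case, where it has a closed form via the pseudoinverse. The data is synthetic and the setup (dimensions, noise level, sparse ground truth) is our own illustrative choice, not the configuration analyzed in the talk.

```python
# Sketch: minimum-l2-norm interpolator in an overparameterized
# linear model (more features than samples). Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 100                        # n samples, d >> n features
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:3] = [2.0, -1.0, 0.5]     # sparse ground truth
y = X @ theta_star + 0.1 * rng.standard_normal(n)

# Among all theta with X @ theta = y, the minimum-l2-norm solution
# is theta = X^T (X X^T)^{-1} y, computed here via the pseudoinverse.
theta_hat = np.linalg.pinv(X) @ y

# It fits the noisy training data exactly (interpolation) ...
train_residual = np.linalg.norm(X @ theta_hat - y)
```

Minimum-$\ell_1$- or $\ell_p$-norm interpolators have no such closed form and are typically computed by convex optimization over the affine set $\{\theta : X\theta = y\}$; the abstract's point is how the choice of norm in that problem governs the resulting statistical rate.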

A pointwise partially hyperbolic diffeomorphism differs from a partially hyperbolic one in that the expansion and contraction rates depend on the point. If the system is defined on an open set, then the hyperbolicity may not be uniform. We show that under certain conditions such a system has unstable and stable manifolds, and admits a finite or an infinite u-Gibbs measure. If the system is pointwise hyperbolic, then the u-Gibbs measure $\mu$ is a Sinai-Ruelle-Bowen (SRB) measure or an infinite SRB measure. As applications, we show that some almost Anosov diffeomorphisms and gentle perturbations of Katok's map have these properties.