Faraway Fourier Talks 2021-2022
The Norbert Wiener Center Online Seminar on Harmonic Analysis and Applications
When: Mondays at 2 pm EDT.
Where: Zoom. For access to the Zoom link, please join the mailing list by entering your information here.
Recorded talks will become available on our YouTube channel.
Our speaker for May 23rd is Professor Soledad Villar of JHU, presenting on the topic of 'Enforcing exact group equivariances in machine learning'.
Scheduled and completed talks:
Sept. 27th, 2021 | Professor David Walnut (GMU, UMD) | Exponential bases for partitions of intervals
For a partition of [0,1] into intervals I₁, ..., Iₙ we prove the existence of a partition of ℤ into Λ₁, ..., Λₙ such that the complex exponential functions with frequencies in Λₖ form a Riesz basis for L²(Iₖ), and furthermore, that for any J ⊆ {1, 2, ..., n}, the exponential functions with frequencies in ⋃_{j∈J} Λⱼ form a Riesz basis for L²(I) for any interval I with length |I| = Σ_{j∈J} |Iⱼ|. The construction extends to infinite partitions of [0,1], but with size limitations on the subsets J ⊆ ℤ. The construction utilizes an interesting assortment of tools from analysis, probability, and number theory.
This is joint work with Shauna Revay (GMU and Novetta) and Goetz Pfander (Catholic University of Eichstaett-Ingolstadt).
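For readers unfamiliar with the terminology, here is the standard Riesz basis condition being asserted (our annotation, not part of the abstract):

```latex
% A family of exponentials e_\lambda(x) = e^{2\pi i \lambda x}, \lambda \in \Lambda,
% is a Riesz basis for L^2(I) if it is complete and there are constants
% 0 < A \le B such that, for every square-summable sequence (c_\lambda),
\[
  A \sum_{\lambda \in \Lambda} |c_\lambda|^2
  \;\le\;
  \Big\| \sum_{\lambda \in \Lambda} c_\lambda e_\lambda \Big\|_{L^2(I)}^2
  \;\le\;
  B \sum_{\lambda \in \Lambda} |c_\lambda|^2 .
\]
```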
Concluded | Recording | Slides
Oct. 4th, 2021 | Professor Anne Gelb (Dartmouth) | Empirical Bayesian inference using joint sparsity
We develop a new empirical Bayesian inference algorithm for solving a linear inverse problem given multiple measurement vectors (MMV) of under-sampled and noisy observable data. Specifically, by exploiting the joint sparsity across the multiple measurements in the sparse domain of the underlying signal or image, we construct a new support informed sparsity promoting prior. While a variety of applications can be modeled using this framework, our prototypical example comes from synthetic aperture radar (SAR), for which data are acquired from neighboring aperture windows. Hence a good test case is to consider the observations modeled as noisy Fourier samples. Our numerical experiments demonstrate that using the support informed sparse prior not only improves the accuracy of the recovery but also reduces the uncertainty in the posterior when compared to standard sparsity promoting priors.
This is joint work with Theresa Scarnati, formerly of the Air Force Research Laboratory at Wright-Patterson and now at Qualis Corporation in Huntsville, AL, and Jack Zhang, a recent bachelor's degree recipient at Dartmouth College now enrolled in the University of Minnesota's PhD program in mathematics.
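As a rough illustration of the joint-sparsity idea (a toy sketch of ours, not the authors' algorithm): signals sharing one sparse support can have that support estimated far more reliably by pooling energy across the measurement vectors.

```python
import numpy as np

# Toy sketch (ours): J signals share a sparse support; pooling row energies
# across the multiple measurement vectors exposes the common support.
rng = np.random.default_rng(0)
n, J, k = 128, 8, 5
support = rng.choice(n, size=k, replace=False)
X = np.zeros((n, J))
X[support] = rng.normal(size=(k, J))

# Noisy Fourier samples of every signal (the SAR-inspired test case)
noise = 0.5 * (rng.normal(size=(n, J)) + 1j * rng.normal(size=(n, J)))
Y = np.fft.fft(X, axis=0) + noise

# Crude inversion, then joint-sparsity pooling via row-wise energies
Xhat = np.real(np.fft.ifft(Y, axis=0))
row_energy = np.linalg.norm(Xhat, axis=1)
est_support = np.sort(np.argsort(row_energy)[-k:])
print(est_support, np.sort(support))  # the pooled estimate should match the truth
```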
Concluded | Recording
Oct. 11th, 2021 | Professor Zuowei Shen (National University of Singapore) | Deep Approximation via Deep Learning
The primary task of many applications is approximating/estimating a function through samples drawn from a probability distribution on the input space. Deep approximation approximates a function by compositions of many layers of simple functions, which can be viewed as a series of nested feature extractors. The key idea of a deep learning network is to convert the layers of compositions into layers of tunable parameters that can be adjusted through a learning process, so that the network achieves a good approximation with respect to the input data. In this talk, we shall discuss the mathematical theory behind this new approach and the approximation rate of deep networks; we will also discuss how this new approach differs from classic approximation theory, and how the new theory can be used to understand and design deep learning networks.
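A classical toy computation (our sketch, not from the talk) shows why composition is so powerful for approximation: composing a single two-piece linear "hat" unit with itself n times produces 2^n linear pieces from only linearly many parameters.

```python
import numpy as np

# One "layer": a two-piece linear hat map on [0, 1].
def hat(x):
    return np.where(x < 0.5, 2 * x, 2 * (1 - x))

x = np.linspace(0.0, 1.0, 1025)
y = x
for _ in range(4):                  # compose 4 layers: hat(hat(hat(hat(x))))
    y = hat(y)

# The composition is a sawtooth with 2^4 = 16 linear pieces; counting slope
# changes confirms the exponential growth in complexity with depth.
slope_changes = np.sum(np.abs(np.diff(np.sign(np.diff(y)))) > 0)
print(slope_changes)                # 15 slope changes: 16 pieces from 4 layers
```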
Concluded | Recording
Oct. 18th, 2021 | Professor Amit Singer (Princeton) | Wilson Statistics: Derivation, Generalization, and Applications to Cryo-EM
The power spectrum of proteins at high frequencies is remarkably well described by the flat Wilson statistics. Wilson statistics therefore plays a significant role in X-ray crystallography and more recently in cryo-EM. Specifically, modern computational methods for three-dimensional map sharpening and atomic modeling of macromolecules by single particle cryo-EM are based on Wilson statistics. In this talk we use certain results about the decay rate of the Fourier transform to provide the first rigorous mathematical derivation of Wilson statistics. The derivation pinpoints the regime of validity of Wilson statistics in terms of the size of the macromolecule. Moreover, the analysis naturally leads to generalizations of the statistics to covariance and higher order spectra. These in turn provide theoretical foundation for assumptions underlying the widespread Bayesian inference framework for three-dimensional refinement and for explaining the limitations of autocorrelation based methods in cryo-EM.
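For orientation, here is the standard crystallographic form of Wilson's approximation (our sketch of the background; the talk derives when and why it holds):

```latex
% For a molecule with N atoms at positions x_j and scattering factors f_j,
% the structure factor is
\[
  F(\mathbf{s}) \;=\; \sum_{j=1}^{N} f_j(\mathbf{s})\,
  e^{2\pi i \langle \mathbf{s}, \mathbf{x}_j \rangle},
\]
% and Wilson's observation is that at high frequency the cross terms
% average out, leaving the approximately flat power spectrum
\[
  \mathbb{E}\, |F(\mathbf{s})|^{2} \;\approx\; \sum_{j=1}^{N} f_j(\mathbf{s})^{2}.
\]
```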
Concluded | Recording | Slides
Oct. 25th, 2021 | Professor John Klauder | Expanding Quantum Field Theory Using Affine Quantization
Quantum field theory uses canonical quantization (CQ), which often fails, e.g., for φ⁴₄. Affine quantization (AQ), which will be introduced, can solve a variety of problems that CQ cannot; AQ can even be used to solve certain models regarded as nonrenormalizable. The specific procedures of AQ lead to a novel Fourier transformation that illustrates how AQ can make a generous contribution to quantum field theory.
Concluded | Recording | Slides
Nov. 1st, 2021 | Professor Thomas Strohmer (UC Davis) | Fighting Surveillance Capitalism with Mathematics
'Sharing is Caring', we are taught. However, in the Age of Surveillance Capitalism, a new economic system that pushes for relentless data capture and analysis, we had better think twice about what we share. As data sharing increasingly locks horns with data-privacy concerns, synthetic data are gaining traction as a potential solution to the aporetic conflict between privacy and utility. The goal of synthetic data is to create an as-realistic-as-possible dataset, one that not only maintains the nuances of the original data, but does so without risk of exposing sensitive information. As such, synthetic data can be instrumental in reestablishing the balance between the need for data that drives AI advances and the fundamental right to data protection for citizens and consumers. However, the road to privacy is paved with NP-hard problems!
In this talk I will present three recent mathematical breakthroughs in the NP-hard challenge of computationally efficiently creating synthetic data that come with provable privacy and utility guarantees. We draw from a wide range of mathematical concepts, including Boolean Fourier analysis, duality, empirical processes, and microaggregation. For instance, we will see some surprising connections between theoretical probability and anonymization. I will also present the first noise-free method to achieve differential privacy and discuss applications of our approach to data analysis tasks arising in the Intensive Care Unit.
This is joint work with March Boedihardjo and Roman Vershynin.
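For contrast with the noise-free method mentioned above, here is the textbook noise-adding route to differential privacy, the Laplace mechanism (a minimal sketch of ours, not the method of the talk):

```python
import numpy as np

# Minimal sketch (ours) of the classical Laplace mechanism. The talk's
# noise-free method is precisely an alternative to this kind of added noise.
rng = np.random.default_rng(1)

def laplace_mechanism(query_value, sensitivity, epsilon):
    """Release query_value with epsilon-differential privacy."""
    return query_value + rng.laplace(scale=sensitivity / epsilon)

ages = np.array([34, 45, 29, 61, 50])
# A counting query has sensitivity 1: one person changes the count by at most 1.
private_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)
print(private_count)
```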
Concluded | Recording
Nov. 8th, 2021 | Professor Robert Calderbank (Duke) | Climbing the Diagonal Clifford Hierarchy
Quantum computers are moving out of physics labs and becoming generally programmable. In this talk, we start from quantum algorithms like magic state distillation and Shor factoring that make essential use of diagonal logical gates. The difficulty of reliably implementing these gates in some quantum error correcting code (QECC) is measured by their level in the Clifford hierarchy, a mathematical framework defined by Gottesman and Chuang when introducing the teleportation model of quantum computation. We describe a method of working backwards from a target diagonal logical gate at some level of the Clifford hierarchy to a CSS quantum error correcting code in which the target logical gate can be implemented reliably.
This talk describes joint work with my graduate students Jingzhen Hu and Qingzhong Liang.
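For reference, the Gottesman–Chuang definition of the hierarchy (our annotation; standard notation assumed):

```latex
% The Clifford hierarchy is defined recursively: the first level is the
% Pauli group, and each subsequent level consists of the unitaries that
% conjugate Paulis into the previous level.
\[
  \mathcal{C}_1 = \text{the Pauli group}, \qquad
  \mathcal{C}_{k+1} = \{\, U : U P U^{\dagger} \in \mathcal{C}_k
  \ \text{for all}\ P \in \mathcal{C}_1 \,\}.
\]
% The second level is the Clifford group; diagonal gates such as
% T = diag(1, e^{i\pi/4}) sit at the third level.
```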
Concluded | Recording | Slides
Nov. 15th, 2021 | Professor Hans Feichtinger (Vienna) | Conceptual Harmonic Analysis: Tools and Goals. The Ubiquitous Role of BUPUs
For almost 14 years the speaker has tried to promote the idea of ``CONCEPTUAL HARMONIC ANALYSIS'' as a way to combine, or rather reconcile, Abstract Harmonic Analysis (AHA) with Computational Harmonic Analysis (CHA) and much more. In particular, the long history of Fourier Analysis (by now 200 years!) has contributed to a diversification of methods and standards. This has led to the unpleasant situation that mathematicians, engineers, and physicists have their own notations, their own settings and habits, and numerical work is often seen only as a way to illustrate the continuous theory, or to simulate a problem in order to improve the heuristic basis for the proper development of a mathematical theory.
Going back to Andre Weil and Hans Reiter, one can say that the natural domain for Fourier Analysis is the setting of LCA groups. The same is true for time-frequency analysis and Gabor Analysis. In the world of AHA we can discuss the analogy between different groups G. Once the dual group G^ has been identified, we can define the forward and inverse Fourier transform, define time-frequency shifts and the STFT, and discuss reconstruction from samples (for band-limited functions, or from the STFT).
Obviously one expects that the FFT should be useful in computing, at least approximately, the Fourier transform of a nice function, or the convolution of two functions, or perhaps even of measures. We should motivate the approaches and ideally provide a guarantee (in the spirit of numerical integration methods) that the computations deliver good quantitative results. Ideally the approach should avoid unnecessary technicalities (such as Lebesgue integration or Frechet spaces such as S(R)), at least for the problems relevant for digital signal processing. Of course, suitable function spaces are required in order to express properly that computations deliver a good approximation of a given signal.
In the talk the speaker will report on attempts to rebuild Fourier Analysis over LCA groups (including R^d) from scratch. First, convolution of bounded measures is introduced via translation invariant systems, and then the Fourier-Stieltjes transform is introduced, up to the convolution theorem. BUPUs (bounded uniform partitions of unity) play an important role here. As an intermediate goal the space S_0(G) is introduced, and finally the Banach Gelfand Triple (S_0, L_2, S_0*). Most spaces relevant for classical Fourier Analysis are then sandwiched between S_0 and S_0* and are isometrically invariant under time-frequency shifts.
Overall, the focus of the talk will be on alternative ways to provide a proper foundation for AHA; it will discuss non-standard function spaces (avoiding Lebesgue spaces as a starting point) and suggest an interpretation of signals as ``mild distributions'' (members of S_0*), which have a bounded STFT. On the other hand, we need computational tools plus quantitative and constructive approximations of guaranteed quality.
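A BUPU is easy to exhibit concretely (a minimal numerical sketch of ours): integer translates of the triangle function form a uniform partition of unity on the real line.

```python
import numpy as np

# Minimal sketch (ours): integer translates of the triangle function form a
# (bounded) uniform partition of unity, the prototype of the BUPUs in the talk.
def triangle(x):
    return np.maximum(0.0, 1.0 - np.abs(x))

x = np.linspace(-5.0, 5.0, 2001)
total = sum(triangle(x - k) for k in range(-7, 8))
print(np.allclose(total, 1.0))   # True: the translates sum to 1 on this grid
```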
Concluded | Recording
Nov. 22nd, 2021 | Professor Rama Chellappa (Johns Hopkins) | Design of Unbiased, Adaptive and Robust AI Systems
Over the last decade, algorithms and systems based on deep learning and other data-driven methods have contributed to the reemergence of Artificial Intelligence-based systems with applications in national security, defense, medicine, intelligent transportation, and many other domains. However, another AI winter may be lurking around the corner if challenges due to bias, domain shift and lack of robustness to adversarial attacks are not considered while designing the AI systems. In this talk, I will present our approach to bias mitigation and designing AI systems that are robust to domain shift and a variety of adversarial attacks.
Concluded | Recording | Slides
Dec. 6th, 2021 | Professor Dustin Mixon (OSU) | Three proofs of the Benedetto–Fickus theorem
In 2003, Benedetto and Fickus introduced a vivid intuition for an objective function called the frame potential, whose global minimizers are fundamental objects known today as unit norm tight frames. Their main result was that the frame potential exhibits no spurious local minimizers, suggesting local optimization as an approach to construct these objects. Local optimization has since become the workhorse of cutting-edge signal processing and machine learning, and accordingly, the community has identified a variety of techniques to study optimization landscapes. This talk applies some of these techniques to obtain three modern proofs of the Benedetto–Fickus theorem. Joint work with Tom Needham, Clayton Shonkwiler, and Soledad Villar.
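The objective in question is simple to write down and optimize numerically (a toy sketch of ours illustrating the theorem's content, not a proof):

```python
import numpy as np

# Toy sketch (ours): minimize the frame potential
#   FP(Phi) = sum_{i,j} |<phi_i, phi_j>|^2
# over N unit-norm vectors in R^d by projected gradient descent.
rng = np.random.default_rng(0)
d, N = 3, 5
Phi = rng.normal(size=(d, N))
Phi /= np.linalg.norm(Phi, axis=0)

for _ in range(2000):
    G = Phi.T @ Phi                      # Gram matrix
    Phi -= 0.01 * (4 * Phi @ G)          # gradient of FP with respect to Phi
    Phi /= np.linalg.norm(Phi, axis=0)   # project back onto unit vectors

# With no spurious local minimizers, the iterates approach a unit norm
# tight frame, where FP attains its global minimum N^2 / d.
print(np.sum((Phi.T @ Phi) ** 2), N ** 2 / d)
```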
Concluded | Recording
Dec. 13th, 2021 | Professor Hrushikesh Mhaskar (Claremont Graduate University) | Learning without training
The fundamental problem of machine learning is often formulated as a problem of function approximation as follows. Starting with data of the form {(x_i, y_i)} sampled from an unknown joint distribution t, approximate f(x) = E_t(y|x). Since t is unknown, it is impossible to give a constructive method to find the minimizer of the generalization error, defined as the deviation of the model from the target function f in L^2(t). Instead, the estimation of this error is made independently of the actual construction, known as training, which is based on the minimization of another objective function. In this talk, we will point out some pitfalls of this paradigm and describe our efforts to bypass this procedure and construct a “good” approximation to f directly from the data. While our construction is universal in the sense that it does not involve any assumptions on the target function, we will obtain probabilistic estimates on the pointwise deviation between our approximation and f under some smoothness assumption on f. The talk is mostly theoretical, but some proof-of-concept applications are discussed.
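In symbols (our transcription of the abstract's setup; the identification of the integration measure as the input marginal is our annotation):

```latex
% t is the joint distribution of (x, y); t* denotes its marginal on the
% input space. The target function and the generalization error of a
% model g are
\[
  f(x) = \mathbb{E}_{t}(y \mid x), \qquad
  \mathrm{err}(g) = \| g - f \|_{L^2(t)}^2
  = \int |g(x) - f(x)|^2 \, dt^{*}(x).
\]
```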
Concluded | Recording
Dec. 20th, 2021 | Professor Weilin Li (Courant Institute) | Function approximation with one-bit Bernstein and neural networks
The celebrated universal approximation theorems for neural networks typically state that every sufficiently nice function can be arbitrarily well approximated by a neural network with carefully chosen parameters. Motivated by recent questions regarding compression and overparameterization, we ask whether it is possible to represent any reasonable function with a neural network whose parameters are restricted to a small set of values, the extreme case being one-bit {+1, -1} neural networks. We answer this question in the affirmative. One of our main innovations is a novel approximation result for linear combinations of multivariate Bernstein polynomials with only +1 and -1 coefficients. Joint work with Sinan Gunturk.
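For context, the classical Bernstein building block is easy to compute (a sketch of ours; the talk's one-bit coefficient restriction is not shown here):

```python
import numpy as np
from math import comb

# Sketch (ours): the Bernstein polynomial
#   B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)
# approximates f uniformly on [0,1]. The talk further restricts the
# coefficients to the one-bit set {+1, -1}; that step is not shown.
def bernstein(f, n, x):
    k = np.arange(n + 1)
    binom = np.array([comb(n, int(j)) for j in k], dtype=float)
    basis = binom * x[:, None] ** k * (1 - x[:, None]) ** (n - k)
    return basis @ f(k / n)

f = lambda t: np.sin(2 * np.pi * t)
x = np.linspace(0.0, 1.0, 200)
print(np.max(np.abs(bernstein(f, 50, x) - f(x))))  # uniform error; shrinks with n
```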
Concluded | Recording
Feb. 28th, 2022 | Professor Richard Baraniuk (Rice University) | A Spline Perspective of Deep Learning
We study the geometry of deep learning through the lens of approximation theory via spline functions and operators. Our key result is that a large class of deep networks (DNs) can be written as a composition of continuous piecewise affine (CPA) splines, which provide a powerful portal through which to interpret and analyze their inner workings. We explore links to the classical theory of optimal classification via matched filters, the effects of data memorization, vector quantization (VQ), and K-means clustering, which open up new geometric avenues for studying how DNs organize signals in a hierarchical and multiscale fashion.
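The CPA property is easy to verify numerically (a sketch of ours): freezing the activation pattern of a ReLU network at a point yields the affine map the network computes on the whole region containing that point.

```python
import numpy as np

# Sketch (ours): a ReLU network is continuous piecewise affine. On the
# region where the activation pattern is fixed, it is exactly affine.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x0 = rng.normal(size=4)
D = np.diag((W1 @ x0 + b1 > 0).astype(float))   # activation pattern at x0
A, c = W2 @ D @ W1, W2 @ (D @ b1) + b2          # local affine parameters

x1 = x0 + 1e-4 * rng.normal(size=4)             # nearby point, same region
print(np.allclose(net(x1), A @ x1 + c))         # True: the network is affine there
```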
Concluded | Recording
Mar. 7th, 2022 | Professor Diego Maldonado (Kansas State University) | A Harnack inequality for certain degenerate/singular elliptic PDEs shaped by convex functions
Certain convex functions on ℝⁿ produce quasi-distances and Borel measures on ℝⁿ. Furthermore, their Hessians can shape degenerate/singular elliptic operators. We will describe geometric and measure-theoretic conditions on convex functions that allow one to prove a Harnack inequality for nonnegative solutions to their associated elliptic operators.
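For context, the conclusion sought has the classical form below (our sketch; in the talk's setting the Euclidean balls are, roughly, replaced by the geometry induced by the convex function):

```latex
% Harnack inequality: for a nonnegative solution u of Lu = 0 in a
% neighborhood of the ball B_{2r}(x_0),
\[
  \sup_{B_r(x_0)} u \;\le\; C \inf_{B_r(x_0)} u,
\]
% with C independent of u, so nonnegative solutions cannot oscillate
% wildly at small scales.
```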
Concluded | Recording
Mar. 14th, 2022 | Professor Margaret Cheney (Colorado State University) | Passive Source Localization
This talk introduces the problem of localizing electromagnetic sources from measurements of their radiated fields at two moving sensors.
Two approaches are discussed, the first based on measuring quantities known as "time difference of arrivals" and "frequency difference of arrivals". This approach leads to some interesting geometrical problems. The second approach is a synthetic-aperture approach that also involves some unsolved problems.
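The geometry enters already in the simplest case (our annotation): a measured time difference of arrival between two sensors constrains the source to one branch of a hyperbola.

```latex
% With sensors at s_1, s_2, propagation speed c, and measured time
% difference \Delta t, the source position x satisfies
\[
  \| x - s_1 \| \;-\; \| x - s_2 \| \;=\; c\,\Delta t,
\]
% whose locus is one sheet of a hyperboloid (a hyperbola branch in the
% plane) with foci s_1 and s_2; intersecting several such loci localizes
% the source.
```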
Concluded | Recording
Mar. 28th, 2022 | Professor Deanna Needell (UCLA) | Online nonnegative matrix factorization and applications
Online Matrix Factorization (OMF) is a fundamental tool for dictionary learning problems, giving an approximate representation of complex data sets in terms of a reduced number of extracted features. Convergence guarantees for most of the OMF algorithms in the literature assume independence between data matrices, and the case of dependent data streams remains largely unexplored. In this talk, we present results showing that a non-convex generalization of the well-known OMF algorithm for i.i.d. data converges almost surely to the set of critical points of the expected loss function, even when the data matrices are functions of some underlying Markov chain satisfying a mild mixing condition. As the main application, by combining online nonnegative matrix factorization and a recent MCMC algorithm for sampling motifs from networks, we propose a novel framework of Network Dictionary Learning that extracts 'network dictionary patches' from a given network in an online manner, encoding the main features of the network. We demonstrate this technique on real-world data and discuss recent extensions and variations.
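To fix ideas, here is a bare-bones online NMF update in the spirit of classical online dictionary learning (a sketch of ours, not the talk's generalized algorithm):

```python
import numpy as np

# Sketch (ours): each arriving sample x is coded against the current
# dictionary W; W is then refreshed from running sufficient statistics (A, B).
rng = np.random.default_rng(0)
d, r = 20, 4
W = np.abs(rng.normal(size=(d, r)))
A, B = np.zeros((r, r)), np.zeros((d, r))

for t in range(500):
    x = np.abs(rng.normal(size=d))        # stand-in for one streamed sample
    h = np.abs(rng.normal(size=r))        # nonnegative code by projected gradient
    for _ in range(25):
        h = np.maximum(h - 0.01 * (W.T @ (W @ h - x)), 0.0)
    A += np.outer(h, h)
    B += np.outer(x, h)
    for j in range(r):                    # block coordinate dictionary update
        if A[j, j] > 0:
            W[:, j] = np.maximum(W[:, j] + (B[:, j] - W @ A[:, j]) / A[j, j], 0.0)

print(np.linalg.norm(x - W @ h))          # reconstruction residual, last sample
```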
Concluded | Recording
Apr. 4th, 2022 | Professor Alex Cloninger (UCSD) | Data Representation Learning from a Single Pass of the Data
In many applications, constructing kernel matrices or pairwise distance matrices can be prohibitively expensive. This can be due to the expense of storing data in a streaming setting, or the expense of accessing multiple large data sets to compute some statistical distance between them. In this talk, I highlight several settings in which we can compute representations of data points (or entire point clouds) on a single pass. This includes Linearized Optimal Transport for computing Wasserstein distances between distributions and performing supervised learning, boosted kernel regression on streaming data, and streaming quantization of translation invariant kernel feature spaces.
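In one dimension, the linearized optimal transport idea reduces to something very concrete (a sketch of ours): embed each sample set by its quantile function, computed in a single pass per set.

```python
import numpy as np

# Sketch (ours): 1-D linearized optimal transport. In 1-D the L2 distance
# between quantile-function embeddings is the 2-Wasserstein distance.
rng = np.random.default_rng(0)
q = np.linspace(0.005, 0.995, 100)            # shared quantile grid

def lot_embed(samples):
    return np.quantile(samples, q)

X = rng.normal(0.0, 1.0, size=500)
Y = rng.normal(2.0, 1.0, size=500)
W2 = np.sqrt(np.mean((lot_embed(X) - lot_embed(Y)) ** 2))
print(W2)   # approximately 2.0, the mean shift between the two Gaussians
```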
Concluded | Recording
Apr. 11th, 2022 | Professor Ozgur Yilmaz (UBC) | Inverting generative models and applications in inverse problems
Obtaining accurate signal models is a fundamental problem when solving inverse problems. This is because the typical inverse problem, be it denoising, inpainting, compressed sensing, phase retrieval, medical or seismic imaging, is ill-posed. To find an accurate solution tractably, we need a "good" model of the underlying signal class that can be "imposed" on the solution. Well known, now classical, examples of such models include sparsity with respect to a known basis (compressed sensing), having small total variation (image denoising), and having low rank (matrix and tensor completion). In the Bayesian framework, which provides an alternative approach, signals in a particular class are modeled as realizations of a random vector. The objective is then to model the (prior) distribution of this random vector and use it to identify the posterior distribution, which leads to, for example, the MAP estimator. Recent advances in training deep neural networks have shown that various signal classes can be modeled using generative models, leading to generator networks that map a low-dimensional latent space to the high-dimensional signal space and provide a straightforward way of sampling the signal space according to the prior distribution. Such models have been successfully incorporated into the Bayesian framework in the literature to empirically solve certain inverse problems involving, e.g., handwritten digits and faces.
In this talk, we present an algorithm, which we call PLUGIn (Partially Linearized Update for Generative Inversion), for recovering an unknown latent (code) vector from (noisy) observations of a signal, under a known generative model, with provable guarantees. Furthermore, while requiring overall expansivity, our convergence guarantees hold for networks with some contractive layers. We will also show that PLUGIn is versatile and can be adapted to various inverse problems such as compressed sensing and reconstruction from quantized measurements.
This is joint work with Babhru Joshi, Xiaowei Li, and Yaniv Plan.
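To see what "inverting a generative model" means in the simplest case, here is a toy latent-recovery loop (a sketch of ours using plain gradient descent, not the PLUGIn algorithm):

```python
import numpy as np

# Toy sketch (ours): invert a random expansive one-layer ReLU generator G
# by gradient descent on the latent code z, minimizing ||G(z) - y||^2.
rng = np.random.default_rng(0)
k, n = 5, 100
W = rng.normal(size=(n, k)) / np.sqrt(n)
G = lambda z: np.maximum(W @ z, 0.0)

z_true = rng.normal(size=k)
y = G(z_true)                                # noiseless observation
z = 0.1 * rng.normal(size=k)                 # small random initialization

for _ in range(2000):
    r = G(z) - y
    z -= 0.5 * (W.T @ ((W @ z > 0) * r))     # chain rule through the ReLU

print(np.linalg.norm(G(z) - y))              # typically tiny for expansive W
```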
Concluded | Recording
Apr. 18th, 2022 | Professor Justin Romberg (Georgia Tech) | Streaming Solutions for Time-Varying Optimization Problems
We will discuss streaming optimization problems where the goal is to minimize a sum of functions indexed by time. At each time step, a new function is introduced along with a new set of optimization variables; the function depends on the optimization variables at the current time step and the previous one. Problems of this type arise in state estimation problems, including simultaneous localization and mapping (SLAM) in robotics, and in streaming reconstruction problems in signal processing.
We will give structural conditions under which solutions to these problems converge quickly in time. This leads directly to algorithms that have guaranteed performance while using limited memory. We will also show how our framework can be generalized to optimization programs organized on general graphs, where the vertices are associated with functions and the edges are associated with shared variables.
This is joint work with Tomer Hamam.
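A tiny worked instance of the chain-coupled structure (a sketch of ours, not the talk's framework): for quadratic terms, the cost-to-go in the newest variable stays quadratic, so the whole past is summarized by two scalars and the stream is solved in O(1) memory.

```python
import numpy as np

# Sketch (ours): streaming minimization of
#   sum_t (x_t - a_t)^2 + lam * sum_t (x_t - x_{t-1})^2
# via forward dynamic programming; the cost-to-go is c * (x_t - m)^2.
rng = np.random.default_rng(0)
lam = 4.0
a = np.cumsum(rng.normal(size=200))      # streamed observations

c, m = 1.0, a[0]                         # message after t = 0
estimates = [m]
for t in range(1, len(a)):
    p = c * lam / (c + lam)              # minimize out x_{t-1}
    c = p + 1.0                          # absorb the new data term
    m = (p * m + a[t]) / c               # streaming minimizer for x_t
    estimates.append(m)

print(estimates[-1], a[-1])              # filtered estimate vs. raw sample
```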
Concluded | Recording
Apr. 25th, 2022 | Professor Hau-Tieng Wu (Duke) | Nonstationary time series analysis through landmark diffusion with clinical applications
Compared with commonly collected health information, long-term and high-frequency physiological time series, which are nonstationary in nature, contain rich complementary information. However, extracting useful information from these time series for clinical use is challenging on several fronts. Motivated by unmet clinical needs, physiological knowledge, and clinical observations, we introduce a latent diffusion geometry based model for the dynamics, and an algorithm called Robust and Scalable Embedding via Landmark Diffusion (ROSELAND), designed to efficiently handle big and noisy datasets. Theoretically, we show that the algorithm is robust to colored and heterogeneous noise and is suitable for recovering the latent space under the manifold setup, thanks to an L∞ spectral convergence guarantee. A clinical challenge will be discussed as an application.
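To convey the computational point, here is a generic landmark-based spectral embedding (a sketch of ours in the spirit of landmark diffusion, not the exact ROSELAND algorithm): affinities are computed only against m landmarks, so the cost scales like n·m rather than n².

```python
import numpy as np

# Sketch (ours): embed noisy circle data using affinities to m landmarks only.
rng = np.random.default_rng(0)
n, m = 2000, 50
theta = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(n, 2))

landmarks = X[rng.choice(n, size=m, replace=False)]
D2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-D2 / 0.1)                    # n x m affinities to the landmarks
W /= W.sum(axis=1, keepdims=True)        # row-stochastic normalization

U, s, Vt = np.linalg.svd(W, full_matrices=False)
embedding = U[:, 1:3] * s[1:3]           # nontrivial diffusion-like coordinates
print(embedding.shape)                   # (2000, 2); should trace out the circle
```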
Concluded | Recording
May 9th, 2022 | Professor Nate Strawn (Georgetown) | Filament Plots for Data Visualization
We introduce filament plots for lossless visualization of high-dimensional datasets, where each data point in a dataset is mapped isometrically to a 2D-valued curve in L2, which is then identified with a 3D "filament" via integration of solutions to a "Bishop frame". The isometric embedding into the space of symmetric curvature functions is chosen to minimize the mean quadratic variation over L2 curves, and we characterize the solutions of this minimization problem subject to some constraints (constraints which attempt to remove biases in the visualization). In particular, the solution set recovers a form of 3D Andrews plots. This characterization indicates that there are many degrees of freedom for the solution set, and (using recent bounds for quadratic Gauss sums) we demonstrate that there is a particular minimizer which also satisfies an asymptotic projective "tour" property (which is useful for removing "local" visual biases). Metric comparisons are considered, and we discuss how the finite-dimensional analogue of our approach leads to "hyper frames" which enjoy properties of tight super frames and fusion frames.
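For readers who have not seen the classical construction the talk generalizes, here is the standard Andrews plot (a sketch of ours), which turns each data point into a curve:

```python
import numpy as np

# Sketch (ours) of the classical Andrews plot, the planar ancestor of the
# 3-D filament plots: a point x in R^d becomes the curve
#   f_x(t) = x_1/sqrt(2) + x_2 sin(t) + x_3 cos(t) + x_4 sin(2t) + ...
def andrews_curve(x, t):
    out = np.full_like(t, x[0] / np.sqrt(2.0))
    for j in range(1, len(x)):
        k = (j + 1) // 2
        out += x[j] * (np.sin(k * t) if j % 2 == 1 else np.cos(k * t))
    return out

t = np.linspace(-np.pi, np.pi, 400)
x = np.array([1.0, -0.5, 0.3, 2.0])
print(andrews_curve(x, t).shape)   # one 1-D curve per data point
```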
Concluded | Recording
May 23rd, 2022 | Professor Soledad Villar (JHU) | Enforcing exact group equivariances in machine learning
In this talk we will discuss the different approaches people use to implement group equivariances in machine learning. We will focus on ideas based on group equivariant convolutions, irreducible representations, and invariants. We will see applications to image processing, graph neural networks, and particle simulations. Time permitting, we will show different ways to quantify how much better an equivariant model generalizes in comparison to a non-equivariant model.
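The prototypical exact equivariance mentioned above is easy to verify numerically (a sketch of ours): circular convolution commutes with the translation group.

```python
import numpy as np

# Sketch (ours): translation equivariance of circular convolution,
#   conv(shift(x)) == shift(conv(x)).
rng = np.random.default_rng(0)
x = rng.normal(size=32)
w = rng.normal(size=32)

conv = lambda v: np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(w)))
shift = lambda v: np.roll(v, 5)

print(np.allclose(conv(shift(x)), shift(conv(x))))  # True
```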
Concluded | Recording
Organizing Committee:
Radu Balan, Jacob Bedrossian, John Benedetto, Maria Cameron, Wojciech Czaja, Tom Goldstein, Vince Lyzinski