Data-Enabled Science Seminar Department of Mathematics, University of Houston

Below you can find the current schedule for the Data-Enabled Science Seminar at the Department of Mathematics of the University of Houston. For further information or to subscribe to the Data-Enabled Science Seminar mailing list, please send an email to Andreas Mang (andreas at math dot uh dot edu).

Fall 2024

Aug 30 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Ming Zhong (Department of Mathematics, University of Houston)
Title: Recent Advances in Structured Learning from High Dimensional Data
Abstract
We present some new mathematical methodologies aimed at extracting accurate scientific insights from high-dimensional observational data. Developed to enhance the understanding and simulation of intriguing behaviors of complex systems, our approaches effectively address challenges including the curse of dimensionality and information bottlenecks in scientific machine learning. Supported by rigorous theoretical underpinnings, we provide well-maintained software packages with efficient implementations across a spectrum of applications such as medical imaging, social issue modeling, and generative AI. Future research directions, encompassing topics such as learning topological averaging, geometric structure, and stochastic data, will be discussed.
Sep 06 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Jonathan Siegel (Mathematics Department, Texas A&M University)
Title: Convergence and error control of consistent PINNs for elliptic PDEs
Abstract
We study the convergence rate, in terms of the number of collocation points, of Physics-Informed Neural Networks (PINNs) for the solution of elliptic PDEs. Specifically, given Sobolev or Besov space assumptions on the right-hand side of the PDE and on the boundary values, we determine the minimal number of collocation points required to achieve a given accuracy. These results apply more generally to any collocation method which only makes use of point values. Next, we introduce a novel PINNs loss function based upon elliptic PDE regularity theory, which we call consistent PINNs. We prove an error bound for consistent PINNs in terms of the number of collocation points and the final loss value achieved. Finally, we present numerical experiments which demonstrate that the consistent PINNs loss results in improved solution error.
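As a toy illustration of the collocation mechanics (not the consistent-PINNs loss from the talk), the sketch below solves a 1D Poisson problem with a linear-in-parameters random-feature surrogate, so that minimizing the collocation loss reduces to a least-squares solve; the PDE, feature count, and collocation grid are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D Poisson test problem: -u'' = f on (0, 1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)

# Linear-in-parameters surrogate u(x) = sum_j c_j * tanh(w_j * x + b_j).
m = 200
w = rng.normal(0.0, 10.0, m)
b = rng.normal(0.0, 3.0, m)

def phi(x):                      # feature matrix, shape (len(x), m)
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):                   # second derivatives of the features
    t = np.tanh(np.outer(x, w) + b)
    return (w**2) * (-2.0 * t * (1.0 - t**2))

# Collocation points in the interior plus the two boundary points.
xc = np.linspace(0.0, 1.0, 64)[1:-1]
A = np.vstack([-phi_xx(xc),                     # PDE residual rows
               phi(np.array([0.0, 1.0]))])      # boundary condition rows
rhs = np.concatenate([f(xc), [0.0, 0.0]])

# Minimizing the collocation ("PINN") loss is here a linear least-squares solve.
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

xt = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(phi(xt) @ c - np.sin(np.pi * xt)))
print(f"max error with {len(xc)} interior collocation points: {err:.2e}")
```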
Sep 20 2024 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. Bhargav Karamched (Department of Mathematics, Florida State University)
Title: Oscillations in Delayed Positive Feedback Systems
Abstract
Positive feedback loops exist in many biological circuits important for organismal function. In this work, we investigate how temporal delay affects the dynamics of two canonical positive feedback models. In this talk, we will discuss models of a genetic toggle switch and a one-way switch with delay added to the feedback terms. We will show that long-lasting transient oscillations exist in both models under general conditions and that the duration depends strongly on the magnitude of the delay and initial conditions. We will discuss some general properties of oscillations emerging from positive feedback systems. We then show the existence of long-lasting oscillations in specific biological examples: the Cdc2-Cyclin B/Wee1 system and a genetic regulatory network. Our results challenge fundamental assumptions underlying oscillatory behavior in biological systems. While generally delayed negative feedback systems are canonical in generating oscillations, we show that delayed positive feedback systems are a mechanism for generating oscillations as well.
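A minimal sketch of the kind of model discussed here: a delayed genetic toggle switch integrated with an Euler scheme and a history buffer. The parameters, delay, and near-symmetric initial data are hypothetical choices meant only to exhibit a long oscillatory transient before the switch settles; they are not taken from the talk.

```python
import numpy as np

# Euler scheme for a delayed toggle switch (mutual repression, a net
# positive feedback loop):
#   x'(t) = -x(t) + beta / (1 + y(t - tau)^n)
#   y'(t) = -y(t) + beta / (1 + x(t - tau)^n)
# Started almost symmetrically, the trajectory can oscillate for a long time
# before committing to one of the two stable states.
beta, n, tau, dt, T = 10.0, 2.0, 5.0, 0.005, 400.0
steps, lag = int(T / dt), int(tau / dt)

x = np.empty(steps + 1); y = np.empty(steps + 1)
x[:lag + 1] = 2.001        # tiny asymmetry decides the eventual winner
y[:lag + 1] = 2.000        # constant history on [-tau, 0]

for i in range(lag, steps):
    x[i + 1] = x[i] + dt * (-x[i] + beta / (1 + y[i - lag] ** n))
    y[i + 1] = y[i] + dt * (-y[i] + beta / (1 + x[i - lag] ** n))

# Turning points of x(t) count the transient oscillations.
osc = np.sum(np.diff(np.sign(np.diff(x[lag:]))) != 0)
print(f"turning points of x(t): {osc}; final state x={x[-1]:.2f}, y={y[-1]:.2f}")
```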
Oct 04 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Fei Lu (Department of Mathematics, Johns Hopkins University)
Title: Data-adaptive RKHS regularization for learning kernels in operators
Abstract
Kernels are efficient in representing nonlocal dependence and are widely used to design operators between function spaces or high-dimensional data. Examples include convolution kernels and interaction potentials in interacting particle systems. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem is often severely ill-posed with a data-dependent compact normal operator that can be singular. Therefore, regularization is necessary. However, little prior information is available to select a proper regularization norm in many applications. We overcome the challenge by introducing a data-adaptive RKHS for regularization. It leads to a convergent estimator that is robust to noise, outperforming the widely used L2 or l2 regularizers. We will discuss both direct and iterative methods for implementation.
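A generic sketch of the regularization mechanics (the talk's data-adaptive RKHS construction is more subtle than this): invert a smoothing-kernel problem with Tikhonov regularization, comparing a plain l2 penalty with a quadratic form standing in for an RKHS-type norm. The kernel, noise level, and regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-posed linear problem y = A c + noise, with A a smoothing kernel matrix
# (a discretized compact operator), so the inversion needs regularization.
n = 100
x = np.linspace(0.0, 1.0, n)
A = np.exp(-(x[:, None] - x[None, :])**2 / 0.01) / n
c_true = np.sin(2 * np.pi * x)
y = A @ c_true + 1e-4 * rng.normal(size=n)

def tikhonov(A, y, G, lam):
    # Minimize ||A c - y||^2 + lam * c^T G c; G = I gives the plain l2 penalty.
    return np.linalg.solve(A.T @ A + lam * G, A.T @ y)

D = np.diff(np.eye(n), 2, axis=0)        # discrete second-difference operator
for name, G in [("l2 penalty        ", np.eye(n)),
                ("smoothness penalty", D.T @ D)]:
    c_hat = tikhonov(A, y, G, lam=1e-6)
    rel = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
    print(f"{name}: relative error {rel:.3f}")
```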
Oct 11 2024 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. David Lipshutz (Department of Neuroscience, Baylor College of Medicine)
Title: Statistical learning algorithms for biological neural networks
Abstract
To survive, biological organisms need to extract, transmit and store meaningful information from the world via their sensory organs. A major challenge at the intersection of neuroscience and statistics is to understand the algorithms that early sensory systems use to process sensory information. These algorithms differ in many respects from those traditionally used in statistics and signal processing due to differences in both the objectives of biological organisms and the "hardware" that the systems use. In this talk, I will discuss a framework for deriving algorithms that solve certain matrix factorization problems, which include many classical statistical learning methods that may be relevant for biological organisms. I will show how these algorithms can be implemented in neural circuit models that closely match certain aspects of biological circuits and make experimental predictions.
Oct 18 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Nancy Glenn Griesinger (Department of Mathematics, Texas Southern University)
Title: Wigner-Weighted Empirical Likelihood for Estimating Mean Spacing in Nuclear Resonance Data
Abstract
This research explores the determination of the mean spacing between adjacent neutron resonances, emphasizing maximum likelihood estimation (MLE) as the primary method. Traditional parametric MLE techniques fit statistical models to resonance data to estimate mean spacings by deriving parameters that describe the underlying distribution of spacings. However, these techniques heavily depend on the accuracy of the assumed distribution. To enhance model flexibility, we introduce a novel methodology: Wigner-weighted empirical likelihood. This innovative approach integrates a nonparametric weighted empirical likelihood with a parametric Wigner surmise distribution, creating a semi-parametric framework that accommodates a broader range of data characteristics. Based on research performed at Brookhaven National Laboratory, this hybrid technique aims to provide a more accurate and reliable estimation of the mean spacing between neutron resonances, effectively relaxing the constraints imposed by conventional parametric models. The proposed methodology is expected to yield valuable insights into resonance characteristics, contributing to a deeper understanding of nuclear interactions.
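For the parametric baseline mentioned above, the MLE under the Wigner surmise has a closed form; below is a small sketch on synthetic spacings (the semi-parametric weighting from the talk is not shown).

```python
import numpy as np

rng = np.random.default_rng(2)

# Wigner surmise for nearest-neighbor spacings with mean spacing D:
#   p(s) = (pi * s / (2 D^2)) * exp(-pi * s^2 / (4 D^2)),  s >= 0.
# Setting d/dD of the log-likelihood to zero gives the closed-form MLE
#   D_hat = sqrt(pi * sum(s_i^2) / (4 n)).
D_true, n = 1.5, 5000
u = rng.uniform(size=n)
s = 2 * D_true * np.sqrt(-np.log1p(-u) / np.pi)   # inverse-CDF sampling

D_hat = np.sqrt(np.pi * np.sum(s**2) / (4 * n))
print(f"true mean spacing {D_true}, MLE {D_hat:.4f}")
print(f"sample mean of spacings: {np.mean(s):.4f}")   # should also be near D
```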
Nov 08 2024 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. Jay Hennig (Department of Neuroscience, Baylor College of Medicine)
Title: Emergence of belief representations through reinforcement learning
Abstract
To behave adaptively, animals must learn to predict future reward, or value. To do this, animals are thought to learn reward predictions using reinforcement learning. However, in contrast to classical models, animals must learn to estimate value using only incomplete state information. Previous work suggests that animals estimate value in partially observable tasks by first forming "beliefs"--optimal Bayesian estimates of the hidden states in the task. Although this is one way to solve the problem of partial observability, it is not the only way, nor is it the most computationally scalable solution in complex, real-world environments. Here we show that a recurrent neural network (RNN) can learn to estimate value directly from observations, generating reward prediction errors that resemble those observed experimentally, without any explicit objective of estimating beliefs. We integrate statistical, functional, and dynamical systems perspectives on beliefs to show that the RNN’s learned representation encodes belief information, but only when the RNN’s capacity is sufficiently large. These results illustrate how animals can estimate value in tasks without explicitly estimating beliefs, yielding a representation useful for systems with limited capacity.
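For context, here is a minimal sketch of the classical TD(0) reward-prediction-error computation on a tiny, fully observed chain; the partially observable RNN/belief setting of the talk is not reproduced, and all parameters are illustrative.

```python
import numpy as np

# TD(0) on a deterministic chain 0 -> 1 -> ... -> 4 -> terminal, with a unit
# reward on the final transition. The TD error delta is the reward prediction
# error (RPE) that dopaminergic activity is compared against.
n_states, gamma, alpha = 5, 0.95, 0.1
V = np.zeros(n_states)

for episode in range(500):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0
        v_next = V[s + 1] if s + 1 < n_states else 0.0
        delta = r + gamma * v_next - V[s]   # reward prediction error
        V[s] += alpha * delta

print("learned values:", np.round(V, 3))
print("ideal values:  ", np.round(gamma ** np.arange(n_states - 1, -1, -1), 3))
```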
Nov 15 2024 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. Ben Hayden (Baylor College of Medicine)
Title: TBD
Abstract
TBD
Nov 22 2024 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. Alexandru Hening (Department of Mathematics, Texas A&M)
Title: TBD
Abstract
TBD
Nov 29 2024 Thanksgiving Break
Dec 06 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Giovanni S. Alberti (Department of Mathematics, University of Genoa)
Title: TBD
Abstract
TBD

Next Semester

TBD 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Chunyang Liao (Department of Mathematics, University of California, Los Angeles)
Title: TBD
Abstract
TBD
Jan 17 2025 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. Heiko Enderling (Department of Radiation Oncology, MD Anderson)
Title: TBD
Abstract
TBD
Jan 24 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Zachary J. Grey (Mathematical Analysis and Modeling Group, NIST)
Title: TBD
Abstract
TBD
Jan 31 2025 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. Jonathan Touboul (Department of Mathematics, Brandeis University)
Title: TBD
Abstract
TBD
Feb 07 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Feb 14 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Feb 21 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Feb 28 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Mar 07 2025 — 11:45AM — 12:45PM PGH 646A (SIAM CSE)
Speaker: TBD
Title: TBD
Abstract
TBD
Mar 14 2025 — 11:45AM — 12:45PM PGH 646A (Spring Break)
Speaker: TBD
Title: TBD
Abstract
TBD
Mar 21 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Mar 28 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Apr 04 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Apr 11 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Apr 18 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD
Apr 25 2025 — 11:45AM — 12:45PM PGH 646A
Speaker: TBD
Title: TBD
Abstract
TBD

Past Semesters

Spring 2024
Jan 29 2024 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Sean Lawley (Mathematics, University of Utah)
Title: Can we get rid of menopause? Stochastic modeling of ovarian aging and procedures for menopause delay.
Abstract
Ovarian tissue cryopreservation is a proven tool to preserve ovarian follicles prior to gonadotoxic treatments. What if this procedure is applied to healthy women to delay or eliminate menopause? In this talk, we will present a mathematical model to predict the efficacy of this procedure and optimize its implementation. Time permitting, we will also discuss how stochastic models of ovarian aging offer answers to longstanding questions surrounding the timing of perimenopause and menopause and the "wasteful oversupply" of ovarian follicles. This talk is based on joint work with Joshua Johnson (University of Colorado School of Medicine), John Emerson (Yale University), Kutluk Oktay (Yale School of Medicine), Nanette Santoro (University of Colorado School of Medicine), and Mary Sammel (Colorado School of Public Health).
Feb 09 2024 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Hannah Choi (School of Mathematics, Georgia Tech)
Title: Visual coding shaped by anatomical and functional connectivity structures.
Abstract
Neurons in the visual cortex actively interact with each other to encode sensory information efficiently and robustly. Visual coding, therefore, is shaped by the structure of the underlying anatomical and functional networks of neurons. The first part of the talk will focus on the inter-areal, layer-specific connectivity between cortical areas and how this connectivity pattern modulates the representation of task-relevant information, such as input identity and expectation violation, by performing representational analyses on recurrent neural networks with systematically altered structural motifs. Secondly, we will zoom into finer-scale networks of single neurons in the visual cortex and discuss how visual stimuli of varying complexity drive functional connectivity of neurons. Our analyses of electrophysiological data across multiple areas of visual cortex reveal that the frequencies of different low-order connectivity motifs are preserved across a range of stimulus complexity, suggesting the role of specific motifs as local computational units of visual information.
Feb 12 2024 — 11:00AM — 12:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Jinsu Kim (Department of Mathematics, Pohang University)
Title: Chemical reaction networks as a language of computation: Training neural networks via biochemical reactions.
Abstract
Chemical reaction networks graphically model the interactions of chemical species using associated dynamical systems of differential equations or Markov chains. In this work, we show that chemical reactions can be used as building blocks for a computing algorithm. As an example, we present chemical reaction networks that implement neural network computations through their associated dynamical systems. We demonstrate that chemical reaction networks can, in particular, be employed to implement the calculation of derivatives for neural network activation functions and, ultimately, backpropagation.
Feb 16 2024 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Vikram Maheshri (Economics, University of Houston)
Title: A Unified Empirical Framework to Study Segregation
Abstract
We study the determinants of race and income segregation in the San Francisco Bay Area from 1990 to 2004. Our framework incorporates the endogenous feedback loop at the core of the seminal Schelling (1969) model of segregation into a dynamic model of neighborhood choice, thus allowing for data to be observed in transition toward a steady state. We assess the relative importance of a variety of mechanisms that generate segregation – endogenous sorting on the basis of the socioeconomic composition of neighbors, sorting on the basis of other neighborhood amenities, and differential responses to prices – and the frictions that mediate these mechanisms – moving costs and incomplete information. Identification of households' endogenous responses to the socioeconomic compositions of neighbors is facilitated by novel instrumental variables that exploit the logic of a dynamic choice model with frictions. Sorting based on unobserved neighborhood amenities is the most important factor generating segregation, followed distantly by endogenous sorting on the basis of the socioeconomic composition of neighbors. Frictions, primarily moving costs, play a central role in keeping segregation in check, as they disproportionately mitigate endogenous sorting.
Feb 23 2024 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. Ethan Levien (Department of Mathematics, Dartmouth)
Title: Linking individual and population scale dynamics: The role of extremes
Abstract
Many problems in quantitative biology concern how population-scale dynamics are shaped by heterogeneity within a population. In this talk I will discuss two examples from my own research: 1) In evolution, how does variation in offspring numbers influence the genealogies? 2) Can we predict the proliferation rate of a population in bulk culture based on measurements of single cells? In both cases, one needs to pay careful attention to the extremes of the individual distribution in order to understand the population-level behavior. The key principles at play can be understood via a connection to the Random Energy Model (REM), a toy model from the theory of spin glasses. In the REM, exponentially rare events have an exponentially large impact on the partition function, which becomes dominated by extreme value statistics below a critical temperature. After providing a pedagogical introduction to the REM, I will discuss its application to the problems above. If time permits, I will present some more general connections between disordered systems and population biology.
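A small numerical sketch of the REM phenomenon mentioned above: as the inverse temperature grows, the partition function becomes dominated by the extreme (lowest) energies. The variance convention and sizes below are one common, illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random Energy Model sketch: 2^N iid Gaussian energies; at low temperature
# the partition function Z = sum_i exp(-beta * E_i) is carried almost
# entirely by the ground state.
N = 20
E = rng.normal(0.0, np.sqrt(N / 2), size=2**N)

for beta in [0.2, 0.5, 1.0, 2.0, 4.0]:
    logw = -beta * E
    logw -= logw.max()               # stabilize the exponentials
    w = np.exp(logw)
    share_of_min = w.max() / w.sum() # weight carried by the lowest energy
    print(f"beta={beta:>4}: ground state carries {share_of_min:.3f} of Z")
```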
Mar 21 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Johannes Milz (H. Milton Stewart School of Industrial and Systems Engineering, Georgia Tech)
Title: Ensemble-based solutions for optimal control of uncertain dynamical systems.
Abstract
We consider finite-time horizon optimal control of uncertain dynamical systems, particularly those governed by nonlinear ordinary differential equations with uncertain inputs. Our approach to approximating the optimal control problem involves a Monte Carlo sample-based method, transforming it into optimal control problems with ensembles of deterministic dynamical systems. This method is commonly known in the literature as the sample average approximation or empirical risk minimization. Our primary goal is to determine Monte Carlo-type convergence rates for the ensemble-based solutions. Additionally, we establish the optimality of these convergence rates with respect to certain problem characteristics.
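A toy sketch of the sample average approximation and its Monte Carlo rate on the simplest possible stochastic program (the talk treats far more general ensemble-based optimal control problems); the objective and distribution are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# SAA for min_x E[(x - xi)^2] with xi ~ N(mu, 1): the true minimizer is
# x* = mu, and the SAA minimizer is the sample mean. The error decays at
# the Monte Carlo rate ~ N^{-1/2}.
mu, reps = 3.0, 2000
for N in [10, 100, 1000, 10000]:
    xi = rng.normal(mu, 1.0, size=(reps, N))
    x_saa = xi.mean(axis=1)                       # argmin of the SAA problem
    rmse = np.sqrt(np.mean((x_saa - mu) ** 2))
    print(f"N={N:>6}: RMSE of SAA solution = {rmse:.4f}  (~{1/np.sqrt(N):.4f})")
```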
Mar 29 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Weilin Li (City University of New York, City College)
Title: Super-resolution: theory and algorithms.
Abstract
The Rayleigh length in optics refers to the minimum distance between two point sources that can be resolved by an imaging system. Although this is a widely used principle, there is experimental evidence that, in certain situations, it is possible to extract information at scales smaller than the Rayleigh length. Motivated by such discoveries, we examine this phenomenon from a mathematical viewpoint. Super-resolution is the inverse problem of recovering point sources, whose distances may be smaller than the Rayleigh length, from noisy Fourier measurements. We study the fundamental limits of super-resolution from an information-theoretic viewpoint and examine the stability of efficient algorithms such as ESPRIT. Both problems are closely related to properties of non-harmonic Fourier matrices. Time permitting, we also discuss application-oriented extensions and challenges with higher-dimensional super-resolution.
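A textbook ESPRIT sketch for the measurement model described above (noisy uniform Fourier samples of point sources); the source locations, sample count, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Recover source locations x_j in [0, 1) from noisy uniform Fourier samples
#   y[k] = sum_j a_j * exp(2*pi*i * x_j * k),  k = 0, ..., M-1.
x_true = np.array([0.30, 0.31, 0.70])     # first two closer than 1/M apart
amps = np.array([1.0, 1.0, 0.8])
M, sigma = 64, 1e-4
k = np.arange(M)
y = np.exp(2j * np.pi * np.outer(k, x_true)) @ amps
y += sigma * (rng.normal(size=M) + 1j * rng.normal(size=M))

# Hankel matrix of the samples; its leading singular vectors span the
# signal subspace.
L = M // 2
H = np.array([y[i:i + L] for i in range(M - L + 1)]).T
U = np.linalg.svd(H, full_matrices=False)[0]
Us = U[:, :len(x_true)]

# Rotational invariance: the eigenvalues of Psi solving Us[:-1] Psi = Us[1:]
# are exp(2*pi*i*x_j).
Psi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
x_hat = np.sort(np.mod(np.angle(np.linalg.eigvals(Psi)) / (2 * np.pi), 1.0))
print("true locations:     ", x_true)
print("recovered locations:", np.round(x_hat, 4))
```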
Apr 05 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Stefan Henneking (Oden Institute, The University of Texas at Austin)
Title: Real-time Bayesian inversion of autonomous systems with application to tsunami forecasting.
Abstract
Hessian-based algorithms for the solution to Bayesian inverse problems typically require many actions of the Hessian matrix on a vector. For problems with high-dimensional parameter fields or expensive-to-evaluate forward operators, a direct approach is often computationally intractable, especially in the context of real-time inversion. One way to overcome the computational bottleneck of Hessian matrix-vector multiplications in these large-scale inverse problems is to exploit the structure of the underlying operators. A particular class of operators for which we can exploit the structure very effectively are those representing autonomous systems. The evolution of such systems with respect to any given input may depend on the system's current state but does not explicitly depend on the independent variable (e.g., time). We present a scalable and computationally efficient approach for Bayesian inversion of problems involving autonomous systems. Our approach splits the computation into a precomputation ("offline") phase and a real-time inversion ("online") phase. Contrary to other methods, this approach does not employ a lower-fidelity approximation but instead uses the full discretization obtained from the PDE-based model. The method is applied to a real-time tsunami Bayesian inverse problem involving a time-invariant dynamical system. Scalability and efficiency of the implementation are demonstrated for state-of-the-art GPU-accelerated compute architectures.
This is joint work with Sreeram Venkat, Milinda Fernando, and Omar Ghattas.
Apr 19 2024 — 11:45AM — 12:45PM PGH 646A
Speaker: Dr. Eric Chi (Department of Statistics, Rice University)
Title: Proximal MCMC for Bayesian Inference of Constrained and Regularized Estimation
Abstract
Proximal Markov Chain Monte Carlo (MCMC) is a flexible and general Bayesian inference framework for constrained or regularized parametric estimation. The basic idea of Proximal MCMC is to approximate nonsmooth regularization terms via the Moreau-Yosida envelope. Initial Proximal MCMC strategies, however, fixed nuisance and regularization parameters as constants, and relied on the Langevin algorithm for the posterior sampling. We extend Proximal MCMC to a fully Bayesian framework with modeling and data-adaptive estimation of all parameters including regularization parameters. More efficient sampling algorithms such as the Hamiltonian Monte Carlo are employed to scale Proximal MCMC to high-dimensional problems. Our proposed Proximal MCMC offers a versatile and modularized procedure for the inference of constrained and non-smooth problems that is mostly tuning parameter free. We illustrate its utility on various statistical estimation and machine learning tasks.
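A minimal sketch of the Moreau-Yosida device underlying Proximal MCMC, shown for the l1 penalty, where the proximal map is explicit soft-thresholding; the penalty and smoothing parameter are illustrative.

```python
import numpy as np

# Moreau-Yosida smoothing of the nonsmooth penalty g(x) = ||x||_1. The envelope
#   g_lam(x) = min_z  g(z) + ||z - x||^2 / (2 * lam)
# is differentiable with gradient (x - prox_{lam g}(x)) / lam, so Langevin or
# Hamiltonian samplers can use it in place of g.
def prox_l1(x, lam):
    # Soft-thresholding: the proximal map of lam * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def my_envelope(x, lam):
    z = prox_l1(x, lam)
    return np.abs(z).sum() + np.sum((z - x) ** 2) / (2 * lam)

def my_gradient(x, lam):
    return (x - prox_l1(x, lam)) / lam

x = np.linspace(-2, 2, 9)
lam = 0.5
print("x        :", x)
print("envelope :", np.round([my_envelope(np.array([v]), lam) for v in x], 3))
print("gradient :", np.round(my_gradient(x, lam), 3))  # smooth (Huber-like) at 0
```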
Apr 26 2024 — 11:45AM — 12:45PM PGH 646A (Networks Seminar)
Speaker: Dr. Matthew R. Bennett (Department of Biosciences and Bioengineering, Rice University)
Title: Multicellular synthetic biology: Understanding the design principles of intercellular communication
Abstract
Synthetic biologists have long sought to rationally engineer genetic regulatory networks for a variety of reasons. For instance, the basic science of understanding gene regulation has prospered due to our ability to intricately construct, perturb, and monitor gene networks in living cells. Additionally, synthetic biologists have designed a host of systems for practical biomedical and industrial applications. Yet, as synthetic biologists push the limits of genetic engineering, it is becoming increasingly clear that synthetic multicellular systems will be required to accomplish tasks that are difficult for single cells or homogeneous colonies. However, traditional mathematical modeling frameworks often fail to adequately describe or predict the behaviors of even simple multicellular regulatory phenomena. In this talk, I will describe our lab’s efforts to better understand and engineer intercellular communication in multicellular bacterial systems. In particular, I will discuss how the network topologies of intercellular gene regulatory networks influence the spatiotemporal coordination of gene expression in multicellular systems. Further, I will discuss how basic cellular processes, such as cellular growth and division, can influence signal propagation and pattern formation in spatially extended systems.
Fall 2023
Oct 06 2023 — 12:00PM — 1:00PM (virtual)
Speaker: Dr. Mark A Iwen (Department of Mathematics, Michigan State University)
Title: Low-distortion embeddings of submanifolds of Rn: Lower bounds, faster realizations, and applications.
Abstract
Let M be a smooth submanifold of R^n equipped with the Euclidean (chordal) metric. This talk will consider the smallest dimension, m, for which there exists a bi-Lipschitz function f : M → R^m with bi-Lipschitz constants close to one. We will begin by presenting a bound for the embedding dimension m from below in terms of the bi-Lipschitz constants of f and the reach, volume, diameter, and dimension of M. We will then discuss how this lower bound can be applied to show that prior upper bounds by Eftekhari and Wakin on the minimal low-distortion embedding dimension of such manifolds using random matrices achieve near-optimal dependence on dimension, reach, and volume (even when compared against nonlinear competitors). Next, we will discuss a new class of linear maps for embedding arbitrary (infinite) subsets of R^n with sufficiently small Gaussian width which can both (i) achieve near-optimal embedding dimensions of submanifolds, and (ii) be multiplied by vectors in faster-than-FFT time. When applied to d-dimensional submanifolds of R^n we will see that these new constructions improve on prior fast embedding matrices in terms of both runtime and embedding dimension when d is sufficiently small. Time permitting, we will then conclude with a discussion of non-linear so-called “terminal embeddings” of manifolds which allow for extensions of the famous Johnson-Lindenstrauss Lemma beyond what any linear map can achieve.
This talk will draw on joint work with various subsets of Mark Roach (MSU), Benjamin Schmidt (MSU), and Arman Tavakoli (MSU).
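An empirical sketch of the low-distortion embedding question for the simplest submanifold, a circle in R^n, using a plain Gaussian random projection (not the fast, faster-than-FFT constructions from the talk); all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Project a circle (a 1-dimensional submanifold of R^n) into R^m with a
# Gaussian random matrix and measure the worst distortion of pairwise
# (chordal) distances on a finite sample of the manifold.
n, n_pts = 1000, 400
t = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
plane = np.linalg.qr(rng.normal(size=(n, 2)))[0]      # random 2-plane in R^n
X = plane @ np.vstack([np.cos(t), np.sin(t)])         # circle, shape n x n_pts

def pdist2(X):
    # Squared pairwise distances between columns, via the Gram matrix.
    g = X.T @ X
    d = np.diag(g)
    return np.maximum(d[:, None] + d[None, :] - 2 * g, 0.0)

iu = np.triu_indices(n_pts, k=1)
DX = np.sqrt(pdist2(X)[iu])
for m in [5, 20, 80]:
    A = rng.normal(size=(m, n)) / np.sqrt(m)          # Gaussian embedding
    DY = np.sqrt(pdist2(A @ X)[iu])
    r = DY / DX
    print(f"m={m:>3}: distance ratios in [{r.min():.3f}, {r.max():.3f}]")
```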
Oct 20 2023 — 12:00PM — 1:00PM (virtual)
Speaker: Dr. Martin Benning (School of Mathematical Sciences, Queen Mary University of London)
Title: A lifted Bregman formulation for the inversion of deep neural networks.
Abstract
We propose a novel framework for the regularized inversion of deep neural networks. The framework is based on recent work on training feed-forward neural networks without the differentiation of activation functions. The framework lifts the parameter space into a higher dimensional space by introducing auxiliary variables, and penalizes these variables with tailored Bregman distances. We propose a family of variational regularizations based on these Bregman distances, present theoretical results and support their practical application with numerical examples. In particular, we present the first convergence result (to the best of our knowledge) for the regularized inversion of a single-layer perceptron that only assumes that the solution of the inverse problem is in the range of the regularization operator, and that shows that the regularized inverse provably converges to the true inverse if measurement errors converge to zero.
This is joint work with Xiaoyu Wang from the University of Cambridge.
Oct 27 2023 — 12:00PM — 1:00PM PGH 646A
Speaker: Dr. Wing Tat Leung (City University of Hong Kong)
Title: A discretization invariant extension to DeepONet and Chen and Chen's universal approximation.
Abstract
Operator learning trains a neural network to map functions to functions. An ideal operator learning framework should be mesh-free in the sense that the training does not require a particular choice of discretization for the input functions, allows for the input and output functions to be on different domains, and is able to have different grids between samples. We propose a mesh-free neural operator for solving parametric partial differential equations. The basis enhanced learning network (BelNet) projects the input function into a latent space and reconstructs the output functions. In particular, we construct part of the network to learn the "basis" functions in the training process. This generalizes the networks proposed in Chen and Chen's universal approximation theory for nonlinear operators to account for differences in input and output meshes. Through several challenging high-contrast and multiscale problems, we show that our approach outperforms other operator learning methods for these tasks and allows for more freedom in the sampling and/or discretization process.
Nov 03 2023 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Vicky Yao (Department of Computer Science, Rice University)
Title: Joint embedding of biological networks for cross-species functional alignment.
Abstract
Model organisms are widely used to better understand the molecular causes of human disease. While sequence similarity greatly aids this cross-species transfer, sequence similarity does not imply functional similarity, and thus, several current approaches incorporate protein-protein interactions to help map findings between species. Existing transfer methods either formulate the alignment problem as a matching problem which pits network features against known orthology, or more recently, as a joint embedding problem. We propose a novel state-of-the-art joint embedding solution: Embeddings to Network Alignment (ETNA). ETNA generates individual network embeddings based on network topological structure and then uses a Natural Language Processing-inspired cross-training approach to align the two embeddings using sequence-based orthologs. The final embedding preserves both within and between species gene functional relationships, and we demonstrate that it captures both pairwise and group functional relevance. In addition, ETNA's embeddings can be used to transfer genetic interactions across species and identify phenotypic alignments, laying the groundwork for potential opportunities for drug repurposing and translational studies.
Nov 10 2023 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Marek Kimmel (Department of Statistics, Rice University)
Title: Genetic archeology of cancer: Mathematical theories and applications.
Abstract
We explain how models of population genetics can be used to provide quantitative inference of clonal evolution of cancer. The talk has two parts. Part 1 is devoted to the definition and mathematical properties of the Site Frequency Spectrum (SFS), one of the commonly used characteristics of cell populations undergoing growth and mutation. We explore the basic consistency of the approaches based on Wright-Fisher or Moran coalescents versus those based on birth-death processes. This provides building blocks for Part 2, in which we consider a comprehensive set of novel DNA sequencing data on advanced urothelial (bladder) cancer (UC), provided by a unique technique developed by Czerniak’s Lab at MD Anderson. Using our theoretical framework and the software we developed, we apply the theory to these data to understand waves of mutations and genomic transformations contributing to UC carcinogenesis.

Contributions of Emmanuel Asante, Khanh Dinh, Roman Jaksik, Andrew Koval, Paweł Kuś, and Simon Tavaré, as well as of Bogdan Czerniak and his Lab, are acknowledged.
Nov 17 2023 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Alexander Wiedemann (Randolph-Macon College)
Title: Inferring Interaction Kernels for Stochastic Agent-Based Opinion Dynamics.
Abstract
How individuals influence each other plays an important role in the evolutionary opinion dynamics of the whole population. However, information about such interactions is usually not explicit. Instead, we can only observe the dynamics of each individual in the population over time. This calls for the need to develop methods to infer the underlying microscopic mechanisms (i.e. interactions) that give rise to the observed population dynamics. In this project, we consider an asynchronous, discrete-time, continuous-state stochastic model of opinion dynamics in which the change in each individual’s opinion over a fixed time period follows a normal distribution. The mean of this distribution is defined with an interaction kernel. We develop likelihood-based methods to infer this interaction kernel from opinion time series data.
This ongoing work began at the 2023 AMS Mathematical Research Community (MRC) conference on Complex Social Systems. Though this is primarily a presentation of the technical work above, I will also offer a (personal) recommendation that graduate students and postdocs participate in future AMS MRCs to network and explore new topics of interest. This work is conducted by Thomas Gebhart, Linh Huynh, Vicki Modisette, William Thompson, Moyi Tian, and Alexander Wiedemann under the guidance of Phil Chodrow and Heather Z. Brooks.
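A toy sketch of the likelihood-based inference described above, with a one-parameter bounded-confidence kernel so the Gaussian MLE reduces to a least-squares ratio; the model, kernel family, and parameters are illustrative stand-ins, not those of the project.

```python
import numpy as np

rng = np.random.default_rng(8)

# Opinions evolve by
#   dx_i = mean_j[ phi(|x_j - x_i|) * (x_j - x_i) ] + sigma * noise,
# with kernel phi(d) = theta * (d < R). Since the drift is linear in theta,
# the Gaussian MLE for theta is a ratio of accumulated inner products.
N, T, R, theta_true, sigma = 50, 400, 0.5, 0.3, 0.01
x = rng.uniform(0, 1, N)
num = den = 0.0
for _ in range(T):
    diff = x[None, :] - x[:, None]                       # diff[i, j] = x_j - x_i
    drift = np.mean((np.abs(diff) < R) * diff, axis=1)   # drift per unit theta
    dx = theta_true * drift + sigma * rng.normal(size=N)
    # Maximizing sum_i -(dx_i - theta * drift_i)^2 / (2 sigma^2) over theta
    # gives theta_hat = <drift, dx> / <drift, drift>.
    num += drift @ dx
    den += drift @ drift
    x = x + dx

print(f"true theta {theta_true}, MLE {num / den:.4f}")
```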
Dec 01 2023 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Harel Shouval (University of Texas Medical School at Houston)
Title: Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time
Abstract
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), meaning that they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. To show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements its principles. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
Spring 2023
Apr 28 2023 — 12:00PM — 1:00PM
Speaker: Stephen Thacker (Department of Mathematics, University of Houston)
Title: Multi-wavelet frame filter design in the era of A.I.
Abstract
In the context of multi-wavelet frame filter design, good spatial localization of filters is desirable for fast algorithms, since it avoids the ringing artifacts associated with poorly localized filters. While shearlets obtain nearly optimal reconstruction of cartoon-like images, they are not compactly supported in the time domain. In 2019, Karantzas, Atreas, Papadakis and Stavropoulos published a paper detailing a new technique for constructing multi-wavelet Parseval frames that contain hand-selected elements, which can include anisotropic atoms. A key consequence of their work is the ability to algorithmically construct compactly supported Parseval frames of dyadic wavelets that contain elements defined by high-pass filters with a variety of properties, such as acting as differential operators with prescribed orientations, e.g., Prewitt and Sobel operators. The key limitation of their work is that, unlike shearlets, these Parseval frame wavelets can only achieve limited degrees of orientation. In an effort to address these design problems we derived a number of results, two of which we will present in this talk. Our first main result gives, for the class of bandlimited functions, the reconstruction error from multi-wavelet systems constructed from handpicked filterbanks, which can include differential operators of prescribed anisotropy. Our second main result is a new construction addressing a gap in the 2019 paper, namely the error resulting from the removal of the additional frame wavelets. In fact, we show that the frame wavelets of those Parseval frames can be combined with additional atoms of different length, and then scaled by constants to create a multi-wavelet frame with good frame bounds that can be close to 1. This allows us to modify constructions from the 2019 paper to contain atoms from higher-resolution systems, obtaining greater degrees of anisotropy while maintaining good frame bounds for the class of bandlimited functions. These results generalize to arbitrary square-integrable functions under certain constraints. This is a report on our ongoing research.
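A toy finite-dimensional sketch of the frame-bound bookkeeping discussed above: the frame bounds of a finite system are the extreme eigenvalues of its frame operator, and shrinking a handpicked extra atom by a constant keeps the bounds close to 1. The vectors and scaling constant are illustrative, not from the talk.

```python
import numpy as np

# Frame bounds A, B of a finite system {f_i} in R^d are the extreme
# eigenvalues of the frame operator S = sum_i f_i f_i^T; a Parseval frame
# has A = B = 1. Start from an orthonormal basis (Parseval), append one
# handpicked "derivative-like" atom, and rescale it by a constant.
d = 8
extra = np.zeros(d); extra[0], extra[1] = 1.0, -1.0
extra /= np.linalg.norm(extra)

for c in [1.0, 0.3]:                     # scaling constant for the extra atom
    F = np.column_stack([np.eye(d), c * extra])
    evals = np.linalg.eigvalsh(F @ F.T)
    print(f"scale c={c}: frame bounds A={evals.min():.3f}, B={evals.max():.3f}")
```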
Apr 21 2023 — 12:00PM — 1:00PM (virtual)
Speaker: Dr. Dirk Lorenz (Technical University Braunschweig, Institute of Analysis and Algebra)
Title: Unrolling vs bilevel optimization for learning of variational models.
Abstract
In this talk we will consider the problem of learning a convex regularizer from a theoretical perspective. In general, learning of variational methods can be done by bilevel optimization, where the variational problem is the lower-level problem and the upper-level problem minimizes over some parameter of the lower-level problem. However, this is usually too difficult in practice, and one practically feasible method is the so-called unrolling (or unfolding) approach. There, one replaces the lower-level problem by an algorithm that converges to a solution of that problem and uses the N-th iterate instead of the true solution. While this approach is often successful in practice, few theoretical results are available. In this talk we will consider a situation in which one can make a thorough comparison of the bilevel approach and the unrolling approach in the particular case of a quite simple toy example. Even though the example is simple, the situation is already quite complex and reveals a few phenomena that have been observed in practice.
Apr 14 2023 — 12:00PM — 1:00PM
Speaker: Dr. Zhichao Peng (Department of Mathematics, Michigan State University)
Title: Reduced order model for kinetic and transport problems.
Abstract
Reduced order modeling (ROM) is a technique for reducing the degrees of freedom needed in numerical simulations. In this talk, I will share our recent work on ROMs for kinetic and transport equations.
In the first part of this talk, we will discuss a ROM for the radiative transfer equation (RTE) based on the micro-macro decomposition. RTE is a kinetic equation which models particle systems in nuclear engineering and astrophysics. One of the main challenges in numerically solving this equation is its high-dimensional nature, and standard grid-based methods may suffer from the curse of dimensionality. To mitigate the curse of dimensionality, we construct a ROM by projecting the original problem onto low-dimensional surrogate spaces, which are constructed in a way that respects the underlying low-rank structure.
In the second part, we will discuss a ROM for transport problems. Due to the slow decay of Kolmogorov n-width for some transport problems, classical linear ROMs may be inefficient or even inaccurate for these problems. The underlying low rank structure of these problems may be more efficiently captured through some intrinsic transformations, hence we propose to learn a subspace determined by such transformations from data and design a new ROM for transport problems.
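A minimal sketch of the standard linear-ROM construction (POD via SVD of snapshots), together with the slow singular-value decay that makes pure transport hard for linear ROMs and motivates the transformation-based approach above; both snapshot families are illustrative.

```python
import numpy as np

# POD: collect solution snapshots, take an SVD, and project onto the leading
# modes. A spreading (diffusion-like) pulse compresses well; a translating
# (transport) pulse does not, reflecting slow Kolmogorov n-width decay.
nx, nt = 200, 100
x = np.linspace(0, 1, nx)
snapshots = np.column_stack([
    np.exp(-((x - 0.5) ** 2) / (0.01 + 0.002 * t)) for t in range(nt)
])

U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(svals**2) / np.sum(svals**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
print(f"diffusion-like snapshots: {r} POD modes for 99.99% of the energy")

basis = U[:, :r]                         # reduced basis
recon = basis @ (basis.T @ snapshots)    # projection of the data onto it
err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(f"relative reconstruction error: {err:.2e}")

# A translating pulse needs many more modes for the same energy fraction.
transport = np.column_stack([np.exp(-((x - 0.1 - 0.008 * t) ** 2) / 0.005)
                             for t in range(nt)])
sv = np.linalg.svd(transport, compute_uv=False)
e2 = np.cumsum(sv**2) / np.sum(sv**2)
print(f"transport snapshots: {int(np.searchsorted(e2, 0.9999)) + 1} POD modes")
```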
Apr 07 2023 — 12:00PM — 1:00PM
Speaker: Dr. Sebastian Perez Salazar (Computational Applied Mathematics & Operations Research, Rice University)
Title: The IID prophet inequality with limited flexibility.
Abstract
In online sales, sellers usually offer each potential buyer a posted price in a take-it-or-leave-it fashion. Buyers can sometimes see posted prices faced by other buyers, and changing the price frequently could be considered unfair. The literature on posted price mechanisms and prophet inequality problems has studied the two extremes of pricing policies: the fixed price policy and fully dynamic pricing. The former is suboptimal in revenue but is perceived as fairer than the latter. This work examines the middle ground, where there are at most k distinct prices over the selling horizon. Using the framework of prophet inequalities with independent and identically distributed random variables, we propose a new prophet inequality for strategies that use at most k thresholds. We present asymptotic results in k and results for small values of k. For k = 2 prices, we show an improvement of at least 11% over the best fixed-price solution. Moreover, k = 5 prices suffice to guarantee almost 99% of the approximation factor obtained by a fully dynamic policy that uses an arbitrary number of prices. From a technical standpoint, we use an infinite-dimensional linear program in our analysis; this formulation could be of independent interest for other online selection problems.
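A Monte Carlo sketch of the k-threshold setup for iid uniform valuations; the two-phase thresholds below are heuristic, untuned choices meant only to show the mechanics, not the guarantees quantified in the talk.

```python
import numpy as np

rng = np.random.default_rng(10)

# Accept the first of n iid valuations exceeding the posted threshold, and
# compare against the prophet, who always takes the maximum.
n, reps = 50, 200_000
X = rng.uniform(size=(reps, n))

def run_policy(X, thresholds):
    # thresholds: length-n array, the price offered to each arrival.
    accept = X >= thresholds
    first = np.argmax(accept, axis=1)           # index of first acceptance
    got = accept.any(axis=1)
    return np.where(got, X[np.arange(len(X)), first], 0.0).mean()

prophet = X.max(axis=1).mean()
one = run_policy(X, np.full(n, 0.90))                    # k = 1 fixed price
two = run_policy(X, np.r_[np.full(n // 2, 0.95),         # k = 2: high early,
                          np.full(n - n // 2, 0.90)])    # lower late
print(f"prophet {prophet:.4f} | 1 price {one:.4f} | 2 prices {two:.4f}")
```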
Mar 03 2023 — 12:00PM — 1:00PM PGH 646A
Speaker: Dr. Stephan Wojtowytsch (Department of Mathematics, Texas A&M University)
Title: Achieving acceleration in stochastic optimization.
Abstract
Gradient descent methods, which choose an optimal direction based on purely local information, are robust but slow. Accelerated methods in convex optimization improve upon gradient descent by exploiting information gained along their trajectory and converge much more quickly, albeit in a smaller class of problems. While non-convex, many optimization problems in deep learning share properties with the convex situation. In this context, however, true gradients are prohibitively expensive to compute and only stochastic gradient estimates are available. In this talk, I present a momentum-based accelerated method which achieves acceleration even if the stochastic noise is many orders of magnitude larger than the gradient (i.e. assuming multiplicative noise scaling with a potentially very large constant). Numerical evidence suggests that this method outperforms the current momentum-SGD optimizer in PyTorch and TensorFlow without increasing the computational cost.
Feb 24 2023 — 12:00PM — 1:00PM PGH 646A
Speaker: Dr. Qiyu Sun (Department of Mathematics, University of Central Florida)
Title: Some mathematical problems in graph signal processing.
Abstract
Networks have been widely used in many real-world applications, and their complicated topological structures can be described by (un)directed graphs. Graph signal processing provides innovative frameworks to process and learn data residing on various networks and many irregular domains. Many important concepts in the classical Euclidean setting, such as the Fourier transform, shifts, wavelet transforms, filter banks, and neural networks, have been extended to the graph setting. However, some fundamental concepts and many important problems, such as Wiener/Kalman filters and the Fourier transform on directed graphs, are either in an early stage of development or not yet well defined in the graph setting. In this talk, I will discuss some mathematical problems in graph signal processing.
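A small sketch of one of the well-established constructions mentioned above, the graph Fourier transform on an undirected graph; a ring graph is used so the Laplacian eigenvectors are recognizable sinusoids, and the cutoff and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Graph Fourier transform: eigenvectors of the graph Laplacian L = D - W play
# the role of Fourier modes; smoothing a graph signal means attenuating its
# high-(graph-)frequency coefficients.
n = 64
W = np.zeros((n, n))
idx = np.arange(n)
W[idx, (idx + 1) % n] = W[(idx + 1) % n, idx] = 1.0      # ring adjacency
L = np.diag(W.sum(axis=1)) - W

freqs, modes = np.linalg.eigh(L)          # graph frequencies and Fourier modes
clean = np.sin(2 * np.pi * idx / n)
signal = clean + 0.3 * rng.normal(size=n)

coeffs = modes.T @ signal                 # forward graph Fourier transform
coeffs[freqs > 0.5] = 0.0                 # ideal low-pass filter on the graph
smoothed = modes @ coeffs                 # inverse transform
print("error std before filtering:", np.round(np.std(signal - clean), 3))
print("error std after filtering: ", np.round(np.std(smoothed - clean), 3))
```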
Feb 17 2023 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Gabriel Ocker (Center for Systems Neuroscience, Boston University)
Title: Dynamics of stochastic integrate-and-fire networks.
Abstract
The neural dynamics generating sensory, motor, and cognitive functions are commonly understood through field theories for neural population activity. Classic neural field theories are derived from highly simplified models of individual neurons, while biological neurons are highly complex cells. The hallmark neuronal nonlinearity is the action potential, a stereotyped pulse that leads to transmitter release and finishes with the neuron’s membrane potential at an approximately constant reset value. Here, we develop a statistical field theory for networks of stochastic spiking neurons. We use this to investigate the mean-field dynamics of the population activity and the impact of nonlinear spike emission and nonlinear spike resets on the population activity, and to compare the roles of inhibitory interactions and single-neuron spike resets in stabilizing neural network activity.
Feb 03 2023 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Robert Rosenbaum (Department of Applied and Computational Mathematics and Statistics, University of Notre Dame)
Title: Nonlinear stimulus representations in neural circuits with approximate excitatory-inhibitory balance.
Abstract
Several studies show that neurons in the cerebral cortex receive an approximate balance between excitatory (positive) and inhibitory (negative) synaptic input. What are the implications of this balance for neural representations? Earlier studies develop the theory of a “balanced state” that arises naturally in large-scale computational models of neural circuits. This balanced state encourages simple, linear relationships between stimuli and neural responses. However, we know that the cortex must implement nonlinear representations. We show that the classical balanced state is fragile and easily broken in a way that produces a new state, which we call the "semi-balanced state." In this semi-balanced state, input to some neurons is imbalanced by excessive inhibition—which transiently silences these neurons—but no neurons receive excess excitation, and balance is maintained in the sub-network of non-silenced neurons. We show that stimulus representations in the semi-balanced state are nonlinear, improve the network’s computational power, and have a direct relationship to artificial neural networks widely used in machine learning.
Jan 27 2023 — 12:00PM — 1:00PM PGH 646A (Hybrid)
Speaker: Dr. César A. Uribe (Department of Electrical and Computer Engineering, Rice University)
Title: Towards scalable algorithms for distributed optimization and learning.
Abstract
Increasing amounts of data generated by modern complex systems such as the energy grid, social media platforms, sensor networks, and cloud-based services call for attention to distributed data processing, in particular, for the design of scalable algorithms that take into account storage and communication constraints and help to make coordinated decisions. This talk presents recently proposed distributed algorithms with near-optimal convergence rates for optimization problems over networks where data is stored distributedly. We focus on scalable algorithms and show they can achieve the same rates as their centralized counterparts, with an additional cost related to the network’s structure. We provide application examples to distributed inference and learning and computational optimal transport.
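A minimal sketch of decentralized gradient descent over a ring network, the prototypical algorithm in this area (not necessarily one analyzed in the talk); note that with a constant step size the agents only reach a neighborhood of consensus. Topology, step size, and objectives are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)

# Each of n agents holds a private quadratic f_i(x) = (x - a_i)^2 / 2 and only
# talks to its ring neighbors. One iteration = gossip averaging with a doubly
# stochastic mixing matrix, then a local gradient step. The network-dependent
# slowdown enters through the mixing matrix's spectral gap.
n, T, lr = 10, 300, 0.1
a = rng.normal(size=n)                     # local optima; global optimum mean(a)
x = np.zeros(n)                            # each agent's current iterate

W = np.zeros((n, n))                       # ring gossip matrix
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

for t in range(T):
    x = W @ x - lr * (x - a)               # mix with neighbors, local step

print("consensus spread:", np.round(x.max() - x.min(), 5))
print("distance of average iterate to optimum:", np.round(abs(x.mean() - a.mean()), 5))
```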
Jan 20 2023 — 12:00PM — 1:00PM PGH 646A
Speaker: Dr. Kisung You (Department of Internal Medicine, Yale University School of Medicine)
Title: When geometry meets statistics: Learning with structured and complex objects.
Abstract
Concepts, methods, and ideas from geometry have become indispensable apparatus in modern data science. Today, geometric data analysis is an expanding topic of research that encompasses a range of programs, including statistics on manifolds, manifold learning, and information geometry, to name a few. Numerous fields have benefited from its ability to perform inference over constrained domains and extract meaningful intrinsic structure. In this talk, I will introduce two aspects of my geometric research program. The first part concerns the representation geometry of complex models. In business and medicine, tree-based ensemble models are often used to approximate a knowledge system due to their superb performance and interpretability. However, their nonlinear structure precludes site-to-site comparison and statistical inference on a collection of knowledge systems. Based on diffusion and Riemannian geometry, the proposed framework is shown to quantify inter-site disparity and, further, to endow the space of models with geometric structure. The approach is then used to identify associations between variables in tabular data. The second part presents the Wasserstein median, a generalization of the geometric median to the space of probability measures under the framework of optimal transport. The proposed concept is motivated by the common observation that the median is a robust measure of central tendency in the presence of noise and outliers. It is first shown that the estimate exists uniquely under some conditions; then a generic meta-algorithm is proposed that can make use of any existing algorithm for computing the Wasserstein barycenter. Numerical experiments with simulated and real data are presented that validate the robustness conjecture empirically.
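In one dimension the optimal transport geometry is explicit, so the meta-algorithm idea (re-weight, then call a barycenter routine, in the spirit of Weiszfeld's algorithm for the geometric median) can be sketched in a few lines; everything below is illustrative and not taken from the paper.

```python
import numpy as np

# 1-D sketch of a Wasserstein median via a Weiszfeld-type iteration. In 1-D
# the W2 geometry is flat: a measure is its quantile function, W2 is the L2
# distance between quantile functions, and weighted barycenters are weighted
# averages of quantile functions.
rng = np.random.default_rng(2)
grid = np.linspace(0.005, 0.995, 100)            # quantile levels

# Four clustered Gaussian input measures plus one far-away "outlier" measure.
Q = np.array([np.quantile(rng.normal(m, 1.0, 4000), grid)
              for m in [0.0, 0.2, -0.1, 0.1, 8.0]])

q = Q.mean(axis=0)                               # init at the barycenter
for _ in range(100):
    d = np.linalg.norm(Q - q, axis=1) / np.sqrt(len(grid))  # W2 distances
    w = 1.0 / np.maximum(d, 1e-12)               # Weiszfeld re-weighting
    q = (w[:, None] * Q).sum(axis=0) / w.sum()   # weighted "barycenter" call

mid = len(grid) // 2
print("median's central quantile    :", round(float(q[mid]), 2))            # stays near the cluster
print("barycenter's central quantile:", round(float(Q.mean(axis=0)[mid]), 2))  # dragged toward the outlier
```

The outlier measure drags the barycenter far from the cluster while the median stays inside it, which is the robustness phenomenon the talk formalizes.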
Fall 2022
Nov 18 2022 — 12:00PM — 1:00PM PGH 646A
Speaker: Dr. Vivak Patel (Department of Statistics, University of Wisconsin Madison)
Title: Counter examples for (stochastic) gradient descent.
Abstract
Gradient Descent (GD) and Stochastic Gradient Descent (SGD) are foundational algorithms for optimization problems arising in learning and statistics. Thus, studying GD's and SGD's behavior is essential to understanding learning and statistics, while also ensuring the reliability of these algorithms. Unfortunately, GD and SGD have been studied under conditions that are rather unrealistic for problems in learning and statistics, which we show through concrete examples. Even worse, GD's and SGD's behavior defies existing theoretical results when applied to problems with realistic conditions, as we show by example. Thus, GD and SGD have been studied under unrealistic conditions that have provided an incorrect understanding of their reliability. In this talk, we analyze GD and SGD under realistic conditions to supply a complete global convergence analysis. If time permits, we will give an example showing that continuous GD has spurious solutions relative to GD with diminishing step sizes, which implies that approximating discrete methods with their continuous counterparts needs to be done very cautiously.
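A toy illustration of the gap between assumptions and practice (far simpler than the talk's examples): standard fixed-step GD guarantees assume a globally Lipschitz gradient, which even the innocuous objective f(x) = x^4 violates.

```python
# f(x) = x^4 has its minimum at 0, but f'(x) = 4x^3 is not globally Lipschitz,
# so fixed-step GD can diverge from ordinary starting points.
def grad(x):
    return 4.0 * x ** 3

x, step = 2.0, 0.2          # a step size that looks harmless near the minimum
for k in range(6):
    x = x - step * grad(x)
    print(f"iter {k}: x = {x:.3e}")   # |x| explodes instead of converging
```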
Oct 28 2022 — 12:00PM — 1:00PM (Virtual)
Speaker: Dr. Philipp Petersen (Faculty of Mathematics, University of Vienna)
Title: Neural network approximation of smooth functions in floating-point arithmetic.
Abstract
We will discuss and recall multiple approximation results by deep neural networks. In essence, we will observe that these networks can very accurately approximate smooth functions and, as a consequence, also functions with complex discontinuities, as well as stable inverses to ill-posed inverse problems. We will complement these results by demonstrating analytically that all these approximation results, as well as most other constructions, are troubled by the fact that they cannot be found by gradient-descent-based methods in floating-point arithmetic. Thereafter, we will show some approximation results where this problem does not occur.
Oct 14 2022 — 12:00PM — 1:00PM PGH 646A (Networks Seminar)
Speaker: Dr. Hyunjoong Kim (Department of Mathematics, University of Pennsylvania)
Title: Optimal search processes in biology: zebrafish airinemes and bee foraging.
Abstract
The efficient search for targets is crucial for all levels of biological systems. These search processes often encounter uncertainty due to a lack of information about the target (random search) or limited communication among a finite number of searchers. Thus, it is important to understand how a biological system optimizes its search process under stochasticity. In this talk, we consider two representative biological systems: zebrafish pattern formation and bee foraging. We discuss how additional randomness (introduced by macrophages) in the ballistic search of a cellular protrusion (the airineme) can optimize the probability of finding a target. Surprisingly, this theoretical optimum is approximately the same as the value extracted from live cell images. Next, we investigate how an intermediate level of communication can maximize the fraction of successful bees in foraging. This problem can be considered a cooperative multi-agent multi-armed bandit problem under competition. Interestingly, this result can only be found with a stochastic model, not its deterministic limit.
Oct 10 2022 — 2:00PM — 3:00PM PGH 646A (Hybrid)
Speaker: Dr. Chris Bauch (Department of Mathematics, University of Waterloo)
Title: Harnessing the universality of bifurcations to improve early warning signals of tipping points through deep learning.
Abstract
Many early warning indicators are based on mathematical theory that discards higher-order terms of the equations that are either too hard to solve by hand, or too hard to detect through statistical measures. However, these higher-order terms leave signatures in time series that may provide information about an upcoming tipping point. Deep learning algorithms excel at detecting subtle features in temporal data but must be trained on very large amounts of data from the study system, which we often lack for many experimental or field-based study systems. However, the need for system-specific data could be circumvented by training the algorithms on a library of random dynamical systems passing through tipping points. This would exploit the ‘universality’ of tipping points that can make their features so similar across diverse systems. Hence, training a deep learning algorithm on a library of random ordinary differential equations, phase transition models, or bifurcation normal forms could--in principle--provide early warning signals of upcoming regime shifts as well as provide information about what kind of state lies beyond the tipping point, all without the need for simulated or empirical data specific to the system. This talk will illustrate applications of this approach to both simulated and empirical tipping points in systems including temporal transitions in paleo-climate shifts, thermoacoustics, lake sedimentation, and social shifts, and spatio-temporal phase transitions in ecological, physical, and climate systems.
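A sketch of the library-generation idea, assuming the saddle-node normal form dx/dt = mu + x^2 with slowly drifting mu and additive noise (one of several normal forms one would randomize over; all constants are illustrative):

```python
import numpy as np

# Simulate noisy time series that pass through a saddle-node tipping point
# (mu crossing 0); a stack of such runs is the kind of synthetic library a
# deep early-warning classifier could be trained on.
def tipping_series(rng, T=2000, dt=0.01, sigma=0.05):
    mu = np.linspace(-1.0, 0.2, T)          # slow drift through mu = 0
    x = -1.0                                # start on the stable branch -sqrt(-mu)
    out = np.empty(T)
    for t in range(T):
        drift = mu[t] + x ** 2
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        x = min(x, 5.0)                     # clip the post-tipping blow-up
        out[t] = x
    return out

rng = np.random.default_rng(3)
library = np.stack([tipping_series(rng) for _ in range(100)])
print(library.shape)    # (runs, time): training inputs, no system-specific data
```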
Sep 23 2022 — 12:00PM — 1:00PM (Virtual)
Speaker: Dr. Matthias J. Ehrhardt (Department of Mathematical Sciences, University of Bath)
Title: Equivariant neural networks for inverse problems.
Abstract
In recent years the use of convolutional layers to encode an inductive bias (translational equivariance) in neural networks has proven to be a very fruitful idea. The successes of this approach have motivated a line of research into incorporating other symmetries into deep learning methods, in the form of group-equivariant convolutional neural networks. In this work, we demonstrate that roto-translational equivariant convolutions can improve reconstruction quality compared to standard convolutions when used within a learned reconstruction method. This is almost a free lunch, since it requires only a little extra computational cost during training and no extra cost at test time.
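The flavor of the construction, shown for the simplest rotation group C4 (90-degree rotations) rather than the full roto-translation setting of the paper; a minimal numpy/scipy sketch with random data:

```python
import numpy as np
from scipy.signal import correlate2d

# C4-equivariant layer: correlate the image with all four rotated copies of
# one filter and keep the orientation channels. Rotating the input then only
# rotates and permutes the channels, instead of changing the features.
rng = np.random.default_rng(4)
image = rng.normal(size=(32, 32))
base = rng.normal(size=(5, 5))            # one learnable filter

stack = np.stack([correlate2d(image, np.rot90(base, k), mode="same")
                  for k in range(4)])     # 4 orientation channels

# Equivariance check: rotating the input rotates each map and cycles channels.
rot_stack = np.stack([correlate2d(np.rot90(image), np.rot90(base, k), mode="same")
                      for k in range(4)])
lhs = rot_stack[1]                        # channel k=1 of the rotated input
rhs = np.rot90(stack[0])                  # rotated channel k=0 of the input
print(np.allclose(lhs, rhs))              # True
```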
Sep 09 2022 — 12:00PM — 1:00PM, PGH 646A (Hybrid)
Speaker: Dr. Ryeongkyung Yoon (Department of Mathematics, University of Houston)
Title: A Non-autonomous equation discovery method for time signal classification.
Abstract
The connection between deep neural networks and ordinary differential equations (ODEs), also known as Neural ODEs, is an active field of research in machine learning. In this talk, we view the hidden states of a neural network as a continuous object governed by a dynamical system. The underlying vector field of the hidden variables is written using a dictionary representation, which is identified by fitting to the dataset. Within this framework, we develop models for time-series classification. We train the parameters of the models by minimizing a loss defined via the solution of the governing ODE. We solve the optimization problem using a gradient-based method, where the gradients are computed via the adjoint method from optimal control theory. Through various experiments on synthetic and real-world datasets, we demonstrate the performance of the proposed models. We also interpret the learned models by visualizing the phase plots of the underlying vector field and the solution trajectories. Finally, we introduce an extension of the model to unsupervised learning tasks such as dimension reduction.
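A minimal sketch of the forward model only, assuming a hand-picked dictionary and a scalar hidden state; in the actual method the coefficients are trained via the adjoint method rather than set randomly, and the models are richer.

```python
import numpy as np

# Hidden state h(t) evolves by dh/dt = Theta(h, u(t)) @ c, where Theta is a
# fixed dictionary of candidate terms, u(t) is the input signal, and c are
# the (learned) coefficients. Here we only run the RK4 forward solve.
def dictionary(h, u):
    # assumed candidate terms: constant, h, u, h*u, h**2
    return np.array([1.0, h, u, h * u, h ** 2])

def forward(signal, coeffs, dt=0.05):
    h = 0.0
    for u in signal:                       # u held constant within each step
        f = lambda h_: dictionary(h_, u) @ coeffs
        k1 = f(h); k2 = f(h + 0.5 * dt * k1)
        k3 = f(h + 0.5 * dt * k2); k4 = f(h + dt * k3)
        h += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return h                               # final state -> classification score

rng = np.random.default_rng(5)
coeffs = rng.normal(scale=0.1, size=5)     # stand-in for adjoint-trained values
print(forward(np.sin(np.linspace(0, 6, 120)), coeffs))
```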
Spring 2022
Apr 29 2022 — 12:00PM — 1:00PM (Virtual)
Speaker: Dr. Tatiana Bubba (Department of Mathematical Sciences, University of Bath)
Title: Limited Angle Tomography, Wavelets and Convolutional Neural Networks.
Abstract
Sparsity promotion is a popular regularization technique for inverse problems, reflecting the prior knowledge that the exact solution is expected to have few non-vanishing components, e.g. with respect to a suitable wavelet basis. In this talk, I will present a convolutional neural network designed for sparsity-promoting regularization for linear inverse problems. The architecture of the network is deduced by unrolling the well-known Iterative Soft Thresholding Algorithm (ISTA), together with a novel convolutional structure of each layer, motivated by the wavelet representation of the involved operator. As a result, the proposed network is able to replicate the application of ISTA and outperform it, by learning a suitable pseudodifferential operator.
By a combination of techniques and tools from regularization theory of inverse problems, harmonic wavelet analysis and microlocal analysis, we are able to theoretically analyze the network and to prove approximation error estimates.
Our case study is limited-angle computed tomography: we test two different implementations of our network on simulated data from limited-angle geometry, achieving promising results.
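For reference, the iteration being unrolled is plain ISTA; in the learned version each loop iteration becomes a network layer, and the fixed matrices below become trainable (in this work, with a wavelet-motivated convolutional structure). A generic sketch with illustrative sizes:

```python
import numpy as np

# Plain ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(6)
m, n, k = 80, 200, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)

L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
lam, x = 0.02, np.zeros(n)
for _ in range(300):                       # each iteration = one unrolled layer
    x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error:", round(float(rel_err), 3))   # small, with some l1 bias
```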
Apr 08 2022 — 12:00PM — 1:00PM (Virtual)
Speaker: Dr. Xaq Pitkow (Department of Neuroscience, Baylor College of Medicine)
Title: Inferring Inference.
Abstract
Repeating patterns of microcircuitry in the cerebral cortex suggest that the brain reuses elementary or "canonical" computations. Neural representations, however, are distributed, so the relevant operations may only be related indirectly to single-neuron transformations. It thus remains an open challenge how to define these canonical computations. We present a theory-driven mathematical framework for inferring implicit canonical computations from large-scale neural measurements. This work is motivated by one important class of cortical computation, probabilistic inference. We posit that the brain has a structured internal model of the world, and that it approximates probabilistic inference on this model using nonlinear message-passing implemented by recurrently connected neural population codes. Our general analysis method simultaneously finds (i) the neural representation of relevant variables, (ii) interactions between these latent variables that define the brain's internal model of the world, and (iii) canonical message-functions that specify the implicit computations. With enough data, these properties are statistically distinguishable due to the symmetries inherent in any canonical computation, up to a global transformation of all interactions. As a concrete demonstration of this framework, we analyze artificial neural recordings generated by a model brain that implicitly implements advanced mean-field inference. Given external inputs and noisy neural activity from the model brain, we successfully estimate the latent dynamics and canonical parameters that explain the simulated measurements. In this first example application, we use a simple polynomial basis to characterize the latent canonical transformations. While this construction matches the true model, it is unlikely to capture a real brain's nonlinearities efficiently. To address this, we develop a general, flexible variant of the framework based on Graph Neural Networks, to infer approximate inferences with a known neural embedding. Finally, analysis of these models reveals certain features of experiment design required to successfully extract canonical computations from neural data.
Apr 01 2022 — 12:00PM — 1:00PM (Virtual)
Speaker: Dr. Georgia Stuart (Office of Information Technology, The University of Texas at Dallas)
Title: Constructing Bayesian Likelihood Functions for Non-Deterministic Forward Problems: Applications to Oil Spill Source Location.
Abstract
Sampling methods for Bayesian inversion often rely on repeated solutions of the forward problem. However, in some applications this forward model is inherently stochastic. In this talk, we explore a sampling-based approach to constructing likelihood functions for inverse problems where the forward problem is non-deterministic. We apply the method to the oil spill source location application with a Lagrangian particle tracking (LPT) forward model. In addition, we discuss the verification and validation of inverse problem workflows and forward models on high performance computing (HPC) systems.
Mar 25 2022 — 12:00PM — 1:00PM (Virtual)
Speaker: Dr. Elizabeth Newman (Emory University)
Title: How to Train Better: Exploiting the Separability of DNNs.
Abstract
Deep neural networks (DNNs) are flexible models composed of simple layers parameterized by weights. They have shown their success in countless applications, particularly as high-dimensional function approximators, due to their universal approximation properties. However, training DNNs to achieve these theoretical approximation properties is difficult in practice. The training problem typically is posed as a stochastic optimization problem with respect to the DNN weights. With millions of weights, a non-convex and non-smooth objective function, and many hyperparameters to tune, solving the training problem requires significant time and computational resources.
In this talk, we will exploit the separability of commonly used DNN architectures to simplify the training process. We call a DNN separable if the weights of the final layer are applied linearly. We will leverage this linearity using two different approaches. First, we will approximate the stochastic optimization problem via a sample average approximation (SAA). In this setting, we can eliminate the linear weights through partial optimization, a method known as variable projection. Second, in the stochastic approximation (SA) setting, we will consider a powerful iterative sampling approach to update the linear weights, which notably incorporates automatic regularization parameter selection methods. Throughout the talk, we will demonstrate the efficacy of these two approaches to exploit separability using numerical examples.
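A miniature of the variable projection idea, on a classical separable model (a sum of exponentials rather than a DNN, so the linear/nonlinear split is easy to see; the model and data are synthetic and illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model y ~ w1*exp(-k1*t) + w2*exp(-k2*t): for any nonlinear
# parameters k, the optimal linear weights w solve a least-squares problem,
# so they can be eliminated and the outer solver only sees k.
rng = np.random.default_rng(7)
t = np.linspace(0, 4, 200)
y = 2.0 * np.exp(-1.5 * t) - 1.0 * np.exp(-0.3 * t) + 0.01 * rng.normal(size=t.size)

def projected_residual(k):
    Phi = np.exp(-np.outer(t, k))                 # basis at the current k
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear weights eliminated
    return Phi @ w - y

sol = least_squares(projected_residual, np.array([1.0, 0.1]))
print("recovered decay rates:", np.round(np.sort(sol.x), 3))   # near [0.3, 1.5]
```

The same pattern applies to a separable DNN: the inner least-squares solve eliminates the final linear layer, and the outer optimizer only sees the remaining nonlinear weights.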
Mar 11 2022 — 12:00PM — 1:00PM (Virtual)
Speaker: Dr. Taewoo Lee (Department of Industrial Engineering, University of Houston)
Title: Data-Driven Inverse Optimization through the Lens of Machine Learning.
Abstract
Inverse optimization has received increasing attention as a tool to infer an optimization model using past decision data. This talk will discuss our recent research on data-driven inverse optimization via its integration with recent advances in machine learning. Despite the growing interest, model inference based on inverse optimization can still be highly sensitive to noise, errors, and uncertainty in the data, limiting its applicability in data-driven settings. We introduce the notion of stability in inverse optimization and propose a novel method that integrates inverse optimization and robust regression to enable a stable model inference under data imperfection. We formulate the new inverse model as a large-scale mixed-integer program and develop efficient solution algorithms by exploiting its connection to classical bi-clique problems. The proposed method will be demonstrated through the diet recommendation application. In the latter part of the talk, we will discuss other recent work motivated by the connection between inverse optimization and machine learning, including how to select a parsimonious set of objectives for a multi-objective optimization problem. We apply this novel objective selection method to cancer therapy planning to infer planning objectives that are simple yet clinically effective using historical cancer treatment data.
Mar 04 2022 — 12:00PM — 1:00PM (Virtual)
Speaker: Dr. Ming Zhong (Institute of Data Science, Texas A&M University)
Title: Machine Learning of Self Organization from Observation.
Abstract
Self-organization (also known as collective behavior) can be found in the study of crystal formation, superconductivity, social behaviors, etc. It is a challenging task to understand such phenomena from a mathematical point of view. We offer a data-driven, knowledge-based learning approach to interpret such phenomena directly from observational data; moreover, our learning approach can aid in validating and improving the modeling of self-organization.
We develop a learning framework to derive physically meaningful dynamical systems to explain the observation data. We provide a convergence theory in terms of the number of different initial conditions for first-order systems of homogeneous agents, and investigate its performance for various first- and second-order systems of heterogeneous agents. Then, we study the steady state properties of our estimators from extended learning framework on more complex second-order systems. We complete the convergence analysis of second-order systems next, and we extend the learning approach to dynamics constrained on Riemannian manifolds.
Having successfully applied our learning method to simulated data sets, we study its effectiveness on NASA JPL's modern Ephemerides. We discover that our learned model outperforms Newton's model (based on Newton's universal law of gravitation) in terms of reproducing the position/velocity of major celestial bodies and preserving the geometric properties (period/aphelion/perihelion) of the trajectories, as well as the highly localized perihelion precession rates of Mars, Mercury, and the Earth's Moon.
In the end, we discuss our current research on learning interaction variables and kernels from observation, and learning from one single snapshot of observation data.
Feb 25 2022 — 12:00PM — 1:00PM (Hybrid)
Speaker: Dr. Evelyn Tang (Department of Physics and Astronomy, Rice University)
Title: Predicting Robust Emergent Function in Active Networks.
Abstract
Living and active systems exhibit various emergent dynamics necessary for system regulation, growth, and motility. However, how robust dynamics arises from stochastic components remains unclear. Towards understanding this, I develop topological theories that support robust edge states, effectively reducing the system dynamics to a lower-dimensional subspace. In particular, I will introduce stochastic networks in molecular configuration space that enable different phenomena from a global clock, stochastic growth and shrinkage, to synchronization. These out-of-equilibrium systems further possess uniquely non-Hermitian features such as exceptional points and vorticity. More broadly, my work provides a blueprint for the design and control of novel and robust function in correlated and active systems. If time permits, I will also discuss other work on analyzing neural data to reveal how fast learners have higher dimensional and more efficient representations.
Fall 2021
Nov 05 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Jérôme Lacaille (Safran Aircraft Engines)
Title: Automated prognostic and health monitoring for Turbofan aircraft engines.
Abstract
Since 2007, Safran Aircraft Engines (formerly Snecma) has, together with its partner GE on the CFM and LEAP engines, developed and industrialized turbofan monitoring algorithms. Building on these PHM (Prognostic and Health Monitoring) algorithms, I built a DataLab to enable data analysis applications on all types of observations and to spread data analytics skills across the company. I will present some patented algorithms designed with doctoral students and industrialized today within our commercial programs. These tools include innovations to improve design and production, as well as others that automatically anticipate damage, predict thrust degradation, and improve maintenance logistics. I will not go into mathematical details during this presentation and will focus mainly on aeronautic applications.
Oct 29 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Raymond Wong (Department of Statistics, Texas A&M University)
Title: Combating non-uniformity in matrix completion through weights.
Abstract
Matrix completion is a modern missing data problem where the object of interest is a high-dimensional and often low-rank matrix. In this talk, I will discuss the application of weighting methods to deal with non-uniform missingness in matrix completion problems. I will highlight some unique challenges associated with inverse probability weighting (IPW)—a standard weighting approach—under matrix completion settings. Then I will introduce our most recent work on matrix completion under general non-uniform missing structures. We draw insight from covariate-balancing methods developed for treatment effect estimations and break away from the IPW framework. By controlling an upper bound of a novel balancing error, we construct weights that can actively adjust for the non-uniformity in the empirical risk without explicitly modeling the observation probabilities. The recovered matrix based on the proposed weighted empirical risk enjoys appealing theoretical guarantees. In particular, the proposed method achieves a stronger guarantee than existing work in terms of the scaling with respect to the observation probabilities, under asymptotically heterogeneous missing settings.
Oct 22 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Danna Gurari (Computer Science, University of Colorado Boulder)
Title: Designing computer vision algorithms both to support real users and recognize multiple perspectives.
Abstract
Computer vision systems are transforming our daily lives. Already, such computers that can ‘see’ are guiding self-driving vehicles, assisting medical professionals in diagnosing diseases, allowing us to deposit checks from our mobile phones, and empowering people with visual impairments to independently overcome daily challenges (e.g., reading restaurant menus). However, the current paradigm for designing human-like intelligence in algorithms has limitations. Currently, algorithms are primarily designed to analyze visual information in constrained environments, with the status quo being to return a single response for each task. These limitations translate to algorithms regularly performing poorly in real-world situations and offering a narrow, one-size-fits-all perspective. In this talk, I will discuss my work that addresses these limitations towards building computing systems that enable and accelerate the analysis of visual information. This will include: (1) creating large-scale datasets that represent real use cases; (2) posing new algorithms for understanding and anticipating divergent responses from a crowd; and (3) introducing new methods that can render (and so `imagine') a diversity of photorealistic images.
Oct 08 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Carlos Fernandez-Granda (Courant Institute of Mathematical Science, New York University)
Title: Deep learning for denoising.
Abstract
Learning-based approaches to denoising achieve impressive results when trained on standard image-processing datasets in a supervised fashion. However, unleashing their potential in practice will require developing unsupervised or semi-supervised approaches capable of learning from real data, as well as understanding the strategies learned by the networks to perform denoising. In this talk, we will describe recent advances in this direction motivated by a real-world application to electron microscopy.
Oct 01 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Mikael Kuusela (Department of Statistics and Data Science, Carnegie Mellon University)
Title: Local spatio-temporal analysis of global oceanographic data from Argo profiling floats.
Abstract
Argo floats measure seawater temperature and salinity in the upper 2,000 m of the global ocean. Statistical analysis of the resulting spatio-temporal data set is challenging due to its complex structure and large size. Analyzing these data using local statistical models has proved to be a successful strategy for handling both the complexity and the computational challenges arising with these data. For example, oceanographic anomaly fields can be mapped using locally stationary Gaussian processes, which yield computationally tractable nonstationary fields without the need to explicitly model the nonstationary covariance structure. In this talk, I will first introduce the relevant local modeling techniques and then describe two recent applications of these methods in producing new data-driven estimates of key properties of the global climate system. In the first, we reconstruct the 4-dimensional ocean circulation using Argo data and use it to infer the global ocean heat transport. In the second, we use these techniques to characterize the subsurface ocean thermal response to tropical cyclones using co-located Argo profiles. Throughout, I will highlight the unique statistical challenges posed by these applications and describe how the baseline methods had to be adapted to tackle these challenges.
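The local modeling idea in miniature: fit an independent stationary Gaussian process in a window around each prediction location, which yields a nonstationary estimate without modeling the nonstationary covariance globally. A 1-D illustrative sketch (the real analyses are in space-time and far more careful):

```python
import numpy as np

# Moving-window local GP regression on a synthetic nonstationary "anomaly".
def rbf(d, ell):                       # squared-exponential kernel on distances
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(15)
xs = np.sort(rng.uniform(0, 10, 300))
ys = np.sin(xs) * (1 + 0.5 * xs) + 0.2 * rng.normal(size=xs.size)

def local_gp_predict(x0, half_width=1.0, ell=0.5, noise=0.04):
    sel = np.abs(xs - x0) < half_width                  # local data only
    X, Y = xs[sel], ys[sel]
    K = rbf(np.abs(X[:, None] - X[None, :]), ell) + noise * np.eye(X.size)
    k0 = rbf(np.abs(X - x0), ell)
    return k0 @ np.linalg.solve(K, Y)                   # local posterior mean

print(round(float(local_gp_predict(5.0)), 2),
      "vs truth", round(float(np.sin(5.0) * 3.5), 2))
```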
Sep 24 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Roberta De Vito (Department of Biostatistics, Brown University)
Title: Multi-study machine learning approaches for assessing reproducibility.
Abstract
Biostatistics and computational biology are increasingly facing the urgent challenge of efficiently dealing with large amounts of experimental data. In particular, high-throughput assays are transforming the study of biology, as they generate rich, complex, and diverse collections of high-dimensional data sets. Through compelling statistical analysis, these large data sets lead to discoveries, advances, and knowledge that were never accessible before. Building such systematic knowledge is a cumulative process which requires analyses that integrate multiple sources, studies, and technologies. The increased availability of ensembles of studies on related clinical populations, technologies, and genomic features poses four categories of important multi-study statistical questions: 1) To what extent is biological signal reproducibly shared across different studies? 2) How can this global signal be extracted? 3) How can we detect and quantify local signals that may be masked by strong global signals? 4) How do these global and local signals manifest differently in different data types?
We will answer these four questions by introducing novel classes of methodologies for the joint analysis of different studies. The goal is to separately identify and estimate 1) common factors reproduced across multiple studies, and 2) study-specific factors. We present different medical and biological applications. In all the cases, we clarify the benefits of a joint analysis compared to the standard methods.
Our methods could accelerate the pace at which we can combine unsupervised analysis across different studies, and understand the cross-study reproducibility of signal in multivariate data.
Sep 17 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Ying Lin (Department of Industrial Engineering, University of Houston)
Title: Adaptive monitoring of chronic disease in large-scale heterogeneous population.
Abstract
Chronic disease monitoring, which relies on routine, one-size-fits-all monitoring guidelines, has low effectiveness and high cost in clinical practice. To improve the cost-effectiveness of chronic disease monitoring, a transition from population-based routine monitoring to patient-specific adaptive monitoring is needed. This talk will present (a) a novel statistical learning framework, collaborative learning, to effectively model heterogeneous disease trajectories from sparse and irregular sensing data by exploiting the progression patterns and similarities between individuals; and (b) a decision support algorithm, selective sensing, to adaptively allocate limited monitoring resources to a large population and maximally detect high-risk individuals by integrating disease progression, individual prognostics, and monitoring strategy design into a unified framework. The proposed methods were further applied in the context of cognitive decline monitoring in Alzheimer’s Disease (AD) and depression trajectory monitoring to facilitate the effective use of monitoring technology in chronic disease management.
Sep 10 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Soledad Villar (Department of Applied Mathematics & Statistics, Johns Hopkins University)
Title: Scalars are universal: equivariant machine learning structured like classical physics.
Abstract
There has been enormous progress in the last few years in designing conceivable (though not always practical) neural networks that respect the gauge symmetries -- or coordinate freedom -- of physical law. Some of these frameworks make use of irreducible representations, some make use of higher order tensor objects, and some apply symmetry-enforcing constraints. Different physical laws obey different combinations of fundamental symmetries, but a large fraction (possibly all) of classical physics is equivariant to translation, rotation, reflection (parity), boost (relativity), and permutations. Here we show that it is simple to parameterize universally approximating polynomial functions that are equivariant under these symmetries, or under the Euclidean, Lorentz, and Poincaré groups, at any dimensionality d. The key observation is that nonlinear O(d)-equivariant (and related-group-equivariant) functions can be expressed in terms of a lightweight collection of scalars -- scalar products and scalar contractions of the scalar, vector, and tensor inputs. These results demonstrate theoretically that gauge-invariant deep learning models for classical physics with good scaling for large problems are feasible right now.
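The recipe is easy to state in code: compute all pairwise scalar products, feed them through arbitrary scalar functions, and use the results as coefficients on the input vectors. A minimal sketch with a random MLP standing in for the learned scalar functions (sizes illustrative):

```python
import numpy as np

# An O(d)-equivariant vector function of inputs v1..v3, built purely from
# scalars: f(v) = sum_i g_i(<v_a, v_b>) * v_i, with g given by a tiny MLP.
rng = np.random.default_rng(8)
W1, W2 = rng.normal(size=(16, 9)), rng.normal(size=(3, 16))  # MLP on scalars

def equivariant_map(vs):                  # vs: (3, d), three input vectors
    s = (vs @ vs.T).ravel()               # all pairwise scalar products
    coef = W2 @ np.tanh(W1 @ s)           # 3 invariant coefficients
    return coef @ vs                      # equivariant combination of inputs

d = 5
vs = rng.normal(size=(3, d))
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # a random orthogonal matrix
lhs = equivariant_map(vs @ Q.T)                # rotate the inputs ...
rhs = equivariant_map(vs) @ Q.T                # ... or rotate the output
print(np.allclose(lhs, rhs))                   # True: exact O(d) equivariance
```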
Sep 03 2021 — 1:00PM — 2:00PM (Virtual)
Speaker: Dr. Harbir Antil (Center for Mathematics and Artificial Intelligence, George Mason University)
Title: Fractional operators in physics and data science.
Abstract
Fractional calculus and its application to anomalous diffusion has recently received a tremendous amount of attention. In complex/heterogeneous material mediums, the long-range correlations or hereditary material properties are presumed to be the cause of such anomalous behavior. Owing to the revival of fractional calculus, these effects are now conveniently modeled by fractional-order differential operators and the governing equations are reformulated accordingly.
In the first part of the talk, we plan to introduce both linear and nonlinear fractional-order differential equations. As applications, we will develop new physical models for geophysical electromagnetism and imaging science, and a new notion of optimal control and inverse problems will be discussed. We also plan to introduce a novel variable-order fractional Laplacian operator with multiple applications.
In the second part of the talk, we will focus on novel Deep Neural Networks (DNNs) based on fractional operators. We plan to discuss their approximation properties and apply them to image denoising and tomographic reconstruction problems. We will establish that these DNNs are also excellent surrogates for PDEs and inverse problems, with multiple advantages over traditional methods. If time permits, we will conclude the talk by showing some of our recent results on chemically reacting flows using DNNs, which clearly show the effectiveness of the proposed approach.
The material covered in this talk is part of joint works with several collaborators, PhD students and postdocs.
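For intuition about the operators in play, here is a spectral sketch of the fractional Laplacian on a periodic grid, which acts as multiplication by |xi|^(2s) in Fourier space (the exponent s = 0.5 and the grid size are illustrative choices):

```python
import numpy as np

# (-Delta)^s on [0, 2*pi) with periodic boundary conditions, via the FFT.
N, s = 256, 0.5
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(3 * x)                          # eigenfunction: expect 3^(2s) * u
xi = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi    # integer wavenumbers
frac_lap_u = np.fft.ifft(np.abs(xi) ** (2 * s) * np.fft.fft(u)).real
print(np.allclose(frac_lap_u, 3.0 * u, atol=1e-8))     # 3^(2*0.5) = 3: True
```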
Spring 2021
April 23 2021 — 12:00PM — 01:00PM (Virtual)
Speaker: Dr. Lorenzo Andrea Rosasco (Università degli Studi di Genova)
Title: Interpolation and learning with Matérn kernels.
Abstract
We study the learning properties of nonparametric minimum-norm interpolating estimators. In particular, we consider estimators defined by so-called Matérn kernels and focus on the role of the kernel's scale and smoothness. While common ML wisdom suggests that estimators defined by large function classes might be prone to overfitting the data, here we suggest that they can often be more stable. Our analysis uses a mix of results from interpolation theory and probability theory. Extensive numerical results are provided to investigate the usefulness of the obtained learning bounds.
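A minimal sketch of the estimator class under study: minimum-norm kernel interpolation with a Matérn-3/2 kernel (the scale, noise level, and sample size below are illustrative; varying them probes exactly the effects discussed in the talk):

```python
import numpy as np

# Minimum-norm interpolant f(x) = sum_i c_i k(x, x_i) with K c = y.
def matern32(r, scale=0.3):
    a = np.sqrt(3.0) * np.abs(r) / scale
    return (1.0 + a) * np.exp(-a)

rng = np.random.default_rng(9)
x = np.sort(rng.uniform(-1, 1, 30))
y = np.sin(4 * x) + 0.1 * rng.normal(size=x.size)    # noisy samples

K = matern32(x[:, None] - x[None, :])
c = np.linalg.solve(K + 1e-10 * np.eye(x.size), y)   # tiny jitter for conditioning

print("max error at the nodes:", float(np.abs(K @ c - y).max()))  # tiny: it interpolates
xt = np.linspace(-1, 1, 400)
ft = matern32(xt[:, None] - x[None, :]) @ c          # evaluate off the nodes
```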
Apr 16 2021 — 12:00PM — 01:00PM (Virtual)
Speaker: Dr. Ao Kong (School of Finance, Nanjing University of Finance and Economics)
Title: Predicting intraday jumps in stock prices using high-frequency information.
Abstract
Predicting intraday stock jumps is a significant but challenging problem in finance. Due to the instantaneous and barely perceptible nature of intraday stock jumps, relevant studies on their predictability remain limited. Our study proposes a data-driven approach to predict intraday stock jumps using the information embedded in liquidity measures and technical indicators. Specifically, a trading day is divided into a series of 5-minute intervals, and at the end of each interval, candidate attributes defined by liquidity measures and technical indicators are input into machine learning algorithms to predict the arrival of a stock jump, as well as its direction, in the following 5-minute interval. An empirical study is conducted on the level-2 high-frequency data of 1271 stocks in the Shenzhen Stock Exchange of China to validate our approach. The results provide initial evidence of the predictability of jump arrivals and jump directions using level-2 stock data, as well as the effectiveness of using a combination of liquidity measures and technical indicators in this prediction. We also reveal the superiority of random forests compared to other machine learning algorithms in building prediction models. Importantly, our study provides a portable data-driven approach that exploits liquidity and technical information from level-2 stock data to predict intraday price jumps of individual stocks.
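The shape of the pipeline, on synthetic stand-in data (the substantive part, constructing liquidity and technical features from real level-2 data, is omitted; the feature and label definitions below are placeholders, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per 5-minute interval, columns = liquidity/technical features,
# label = whether a jump arrives in the next interval.
rng = np.random.default_rng(17)
X = rng.normal(size=(5000, 12))                  # placeholder features
logits = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000)
y = (logits > 1.0).astype(int)                   # synthetic binary "jump" label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:4000], y[:4000])                      # train on earlier intervals
print("held-out accuracy:", round(clf.score(X[4000:], y[4000:]), 3))
```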
Apr 02 2021 — 12:00PM to 01:00PM (Virtual)
Speaker: Dr. Arvind Krishna Saibaba (Department of Mathematics, North Carolina State University)
Title: Randomized algorithms for sensitivity analysis and model reduction.
Abstract
Randomized Numerical Linear Algebra (RandNLA) is an emerging research area that uses randomization as an algorithmic resource to develop algorithms that are numerically robust, with strong theoretical guarantees, easy to implement, and well-suited for high-performance computing. In this talk, I will give a brief overview of RandNLA techniques for dimensionality reduction and discuss novel randomized algorithms, their performance, and analysis, in the context of two important problems in scientific computing: hyperdifferential sensitivity analysis, and nonlinear model reduction.
Joint work with Joseph Hart and Bart van Bloemen Waanders (both at Sandia National Labs).
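A textbook RandNLA primitive of the kind the talk builds on, the randomized SVD in the Halko-Martinsson-Tropp style (sizes illustrative):

```python
import numpy as np

# Randomized SVD: sketch the range with a random test matrix, project, and
# solve a small deterministic SVD.
def randomized_svd(A, rank, oversample=10, rng=np.random.default_rng(0)):
    m, n = A.shape
    Omega = rng.normal(size=(n, rank + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                   # orthonormal range sketch
    B = Q.T @ A                                      # small (rank+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(10)
A = rng.normal(size=(500, 40)) @ rng.normal(size=(40, 300))  # exactly rank 40
U, s, Vt = randomized_svd(A, rank=40)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # ~1e-13: near exact
```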
Mar 26 2021 — 12:00PM to 01:00PM (Virtual)
Speaker: Dr. Charles-Albert Lehalle (Capital Fund Management, Paris and Imperial College, London)
Title: From stochastic control to learning in high frequency finance.
Abstract
Since the last financial crisis, liquidity has become a primary topic for market participants. The complexity of products decreased and the business went from "custom haute couture" to "mass market," where logistic costs are the drivers of efficiency. On capital markets, the raw material is risk, and frictional costs are made up of transaction costs, market impact, and crowding.
I will first explain these concepts in a formal way and list now-standard ways to address them. Focusing on transaction costs, the scheduling of large orders, and the control of an order in a double auction game, I will then survey how ML is starting to provide innovative answers. This will cover examples of Bayesian networks, (explainable) neural control of a PDE, and reinforcement learning.
Mar 12 2021 — 12:00PM to 01:00PM (Virtual)
Speaker: Dr. Guillaume Lajoie (Mathematics and Statistics Department, Université de Montréal)
Title: Inductive biases for deep learning over sequential data: from connectivity to memory addressing.
Abstract
In neural networks, a key hurdle for efficient learning involving sequential data is ensuring good signal propagation over long timescales, while simultaneously allowing systems to be expressive enough to implement complex computations. The brain has evolved to tackle this problem on different scales, and deriving architectural inductive biases based on these strategies can help design better AI systems.
In this talk, I will present two examples of such inductive biases for recurrent neural networks with and without self-attention. In the first, we propose a novel connectivity structure based on “hidden feed forward” features, using an efficient parametrization of connectivity matrices based on the Schur decomposition. In the second, we present a formal analysis of how self-attention affects gradient propagation in recurrent networks, and prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies.
Mar 05 2021 — 12:00PM to 01:00PM (Virtual)
Speaker: Dr. Tom Goldstein (Department of Computer Science, University of Maryland)
Title: A scientific approach to understanding optimization and generalization in neural nets.
Abstract
Neural networks have the ability to overfit training data, and yet they work incredibly well on test data. In this talk, I’ll discuss why the generalization behavior of neural networks is unexpected. I’ll discuss various theories for why generalization occurs, and use experimental methods to support or refute these theories. Finally, I’ll talk about how the curse of dimensionality may be a blessing for deep learning, and present experiments that support the theory that generalization can be explained (at least in part) by the volume disparity between flat and sharp minima in the loss landscape. Overall, the goal of this talk is to pick apart generalization through experiments rather than theory, and to gain some intuition for how deep nets work.
Feb 26 2021 — 12:00PM to 01:00PM (Virtual)
Speaker: Dr. Hwan Goh (Oden Institute, University of Texas at Austin)
Title: Solving Bayesian inverse problems via variational autoencoders.
Abstract
In recent years, the field of machine learning has made phenomenal progress in the pursuit of simulating real-world data generation processes. One notable example of such success is the variational autoencoder (VAE). In this work, with a small shift in perspective, we leverage and adapt VAEs for a different purpose: uncertainty quantification in scientific inverse problems. We introduce UQ-VAE: a flexible, adaptive, hybrid data/model-informed framework for training neural networks capable of rapid modelling of the posterior distribution representing the unknown parameter of interest. Specifically, from divergence-based variational inference, our framework is derived such that all the information usually present in scientific inverse problems is fully utilized in the training procedure. Additionally, this framework includes an adjustable hyperparameter that allows selection of the notion of distance between the posterior model and the target distribution. This introduces more flexibility in controlling how optimization directs the learning of the posterior model. Further, this framework possesses an inherent adaptive optimization property that emerges through the learning of the posterior uncertainty.
Feb 02 2021 — 12:00PM to 01:00PM (Virtual)
Speaker: Dr. Andreas Savas Tolias (Department of Neuroscience, Baylor College of Medicine)
Title: A less artificial intelligence.
Abstract
Our goal is to discover the algorithms and circuit-level mechanisms that the visual cortex uses to generate visual perception. We seek to ultimately build a computational model of the macaque and mouse visual cortex bridging structure (i.e. cell types and connectivity rules) and function (neural representations and visual perception). We are guided by the theory that brains learn generative models of the world where behaviorally relevant causal features such as 3D motion, texture, object shapes and positions are represented as hierarchies of representations reflected in the firing of populations of neurons of different functional types, across various areas. The populations of neurons encoding these variables also encode the probabilities associated with the uncertainty of these variables. Central to discovering the algorithms of visual inference is to decipher these neural representations. This effort has led to seminal discoveries in the field such as Barlow’s “fly detector” cells in the retina, Hubel and Wiesel’s orientation-selective cells in primary visual cortex and Charles Gross’s “face cells” in inferotemporal cortex. However, finding the optimal sensory inputs and deciphering the corresponding neural representations remains a difficult problem due to the high-dimensionality of the search space and because sensory information processing is nonlinear. To mitigate this problem, we developed inception loops: a closed-loop optimization technique that combines large-scale in vivo recordings with in silico deep learning modeling. Inception loops have three key components: 1) We build accurate predictive models of neural activity using large-scale experimental data. 2) We use this model as an in silico avatar to perform an essentially limitless number of experiments and analyses, including ones that are practically impossible to perform in the real brain. The avatar enables a systematic dissection of neural representations to gain functional insights and generate novel hypotheses. 3) We loop back in vivo to test these hypotheses and predictions. I will present recent results we have obtained by using inception loops to study visual processing. I will also discuss how we are using the in silico avatar of neural responses to regularize machine learning algorithms for artificial intelligence tasks.
Fall 2020
Nov 20 2020 — 12:00PM to 1:00PM (Virtual)
Speaker: Dr. Lars Ruthotto (Department of Mathematics, Emory University)
Title: Machine learning for high-dimensional optimal transport.
Abstract
This talk presents new avenues for solving high-dimensional optimal transport (OT) problems using machine learning (ML). In recent years, the two fields have become increasingly intertwined.
The first part of the talk shows how neural networks can be used to efficiently approximate the optimal transport map between two densities in high dimensions. To avoid the curse of dimensionality, we combine Lagrangian and Eulerian viewpoints and employ neural networks to solve the underlying Hamilton-Jacobi-Bellman equation. Our approach avoids any space discretization and can be implemented in existing machine learning frameworks. We present numerical results for OT in up to 100 dimensions and validate our solver in a two-dimensional setting.
The second part of the talk shows how optimal transport theory can improve the efficiency of training generative models and density estimators, which are critical in machine learning. We consider continuous normalizing flows (CNF) that have emerged as one of the most promising approaches for variational inference in the ML community. Our numerical implementation is a discretize-optimize method whose forward problem relies on manually derived gradients and Laplacian of the neural network and uses automatic differentiation in the optimization. In common benchmark challenges, our method outperforms state-of-the-art CNF approaches by 1-2 orders of magnitude during training and inference.
Oct 30 2020 — 12:00PM to 1:00PM (Virtual)
Speaker: Dr. Tan Bui-Thanh (Oden Institute, The University of Texas at Austin)
Title: Model-aware deep learning approaches for forward and PDE-constrained inverse problems.
Abstract
The fast growth in practical applications of machine learning in a range of contexts has fueled a renewed interest in machine learning methods over recent years. Subsequently, scientific machine learning is an emerging discipline which merges scientific computing and machine learning. Whilst scientific computing focuses on large-scale models that are derived from scientific laws describing physical phenomena, machine learning focuses on developing data-driven models which require minimal knowledge and prior assumptions. The contrast between these two approaches brings different advantages: scientific models are effective at extrapolation and can be fit with small data and few parameters, whereas machine learning models require "big data" and a large number of parameters but are not biased by the validity of prior assumptions. Scientific machine learning endeavours to combine the two disciplines in order to develop models that retain the advantages of both. Specifically, it works to develop explainable models that are data-driven but require less data than traditional machine learning methods, through the utilization of centuries of scientific literature. The resulting models therefore possess knowledge that prevents overfitting, reduces the number of parameters, and promotes extrapolatability, while still utilizing machine learning techniques to learn the terms that are unexplainable by prior assumptions. We call these hybrid data-driven models "model-aware machine learning" (MA-ML) methods.
In this talk, we present a few efforts in this MA-ML direction: 1) a ROM-ML approach, and 2) an Autoencoder-based Inversion (AI) approach. Theoretical results for linear PDE-constrained inverse problems and numerical results for various nonlinear PDE-constrained inverse problems will be presented to demonstrate the validity of the proposed approaches.
Oct 23 2020 — 12:00PM to 1:00PM (Virtual)
Speaker: Dr. Thomas Pock (Institute for Graphics and Vision, Technical University Graz)
Title: Variational modeling meets learning.
Abstract
In this talk, I will show how to use learning techniques to significantly improve variational models (also known as energy minimization models). I start by showing that even for the simplest models, such as total variation, one can greatly improve the accuracy of the numerical approximation by learning the "best" discretization within a class of consistent discretizations. I will then show how such models can be further extended and improved to provide state-of-the-art results for a number of image reconstruction problems, such as denoising, superresolution, MRI, and CT.
Oct 16 2020 — 12:00PM to 1:00PM (Virtual)
Speaker: Dr. Ankit Patel (Department of Neuroscience, Baylor College of Medicine)
Title: Understanding neural networks as splines.
Abstract
How does a neural network approximate a given function? What kinds of functions can it approximate well/poorly? How does the optimization algorithm bias learning? What is the structure of the loss surface and Hessian and how does that impact generalization? Deep Learning has revolutionized many fields, and yet answers to fundamental questions like these remain elusive. Here we present a new emerging viewpoint -- the function space or spline perspective -- that has the power to answer these questions. We find that understanding neural nets is most easily done in the function space, via a novel spline parametrization. This change of coordinates sheds light on many perplexing phenomena, providing simple explanations for the necessity of overparameterization, the structure of loss surface and Hessian, the consequent difficulty of training, and, perhaps most importantly, the phenomenon and mechanism underlying implicit regularization.
Understanding the representation, learning dynamics and inductive bias of neural networks (NNs) is hindered by the opacity of the relationship between NN parameters and the function represented. As such, we propose reparametrizing ReLU NNs as continuous piecewise linear splines. Using this spline lens, we study learning dynamics in shallow univariate ReLU NNs, finding unexpected insights and explanations for several perplexing phenomena. We develop a surprisingly simple and transparent view of the structure of the loss surface, including its critical and fixed points, Hessian, and Hessian spectrum. We also show that standard weight initializations yield very flat functions upon initialization, and that this flatness, together with overparametrization and the initial weight scale, is responsible for the strength and type of implicit regularization, consistent with recent work. Our spline-based approach reproduces key implicit regularization results from recent work but in a far more intuitive and transparent manner. In addition to understanding, the spline lens suggests new kinds of data-dependent initializations and learning algorithms that combine gradient descent with other more global optimization algorithms.
We briefly discuss future work applying the spline lens to: neuronally consistent networks (with saturating activation functions, E/I Balance, and cell types) and to developing new experimental protocols that can test for and characterize implicit regularization in the brain. Going forward, we believe the spline lens will play a foundational role in efforts to understand and design artificial and real neural networks.
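The reparametrization is easy to demonstrate for a shallow univariate ReLU net: its knots sit where individual units switch on, and its derivative is piecewise constant. A minimal sketch with random weights (purely illustrative):

```python
import numpy as np

# A shallow univariate ReLU net is a continuous piecewise-linear spline:
# f(x) = sum_j v_j * relu(w_j * x + b_j), with a knot at x = -b_j / w_j.
rng = np.random.default_rng(11)
H = 8
w, b, v = rng.normal(size=H), rng.normal(size=H), rng.normal(size=H)

def net(x):
    return np.maximum(0.0, np.outer(x, w) + b) @ v

knots = np.sort(-b / w)                      # where each unit activates
# Between consecutive knots the function is exactly affine; check the slopes
# by central finite differences at the segment midpoints:
mids = (knots[:-1] + knots[1:]) / 2
slopes = [(net(np.array([m + 1e-4])) - net(np.array([m - 1e-4]))).item() / 2e-4
          for m in mids]
print("knots :", np.round(knots, 2))
print("slopes:", np.round(slopes, 2))        # piecewise-constant derivative
```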
Sep 25 2020 — 12:00PM to 1:00PM (Virtual)
Speaker: Jeric Alcala (Department of Mathematics, University of Houston)
Title: Subgrid-scale parametrization of unresolved scales in the forced Burgers equation using generative adversarial networks (GANs).
Abstract
Stochastic subgrid-scale parametrizations aim to incorporate the effects of unresolved processes in an effective model by sampling from a distribution usually described in terms of the resolved modes. This is an active research area in climate, weather, and ocean science, where processes evolve over a wide range of spatial and temporal scales. In this study, we evaluate the performance of a conditional generative adversarial network (GAN) in parametrizing subgrid-scale effects in a finite-difference discretization of the stochastically forced Burgers equation. For this model, resolved modes are defined as local averages, and the deviations from these averages are the unresolved degrees of freedom. We train a GAN conditioned on the resolved variables to learn the distribution of subgrid flux tendencies for the resolved modes and thus represent the effect of unresolved scales. The resulting GAN is then used in an effective model to reproduce the statistical features of the resolved modes. We demonstrate that various stationary statistical quantities, such as the spectrum, moments, and autocorrelation, are well approximated by this effective model.
Spring 2020
Feb 18 2020 — 1:00PM to 2:00PM — PGH 646A
Speaker: Dr. Eric Price (Department of Computer Science, The University of Texas at Austin)
Title: Compressed sensing and generative models
Abstract
The goal of compressed sensing is to make use of image structure to estimate an image from a small number of linear measurements. The structure is typically represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all -- instead, we suppose that vectors lie near the range of a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. Our main theorem here is that, if $G$ is $L$-Lipschitz, then roughly $O(k \log L)$ random Gaussian measurements suffice; this is $O(k d \log n)$ for typical $d$-layer neural networks.
The above result describes how to use a model to recover a signal from noisy data. But if the data is noisy, how can we learn the generative model in the first place? The second part of my talk will describe how to incorporate the measurement process in generative adversarial network (GAN) training. Even if the noisy data does not uniquely identify the non-noisy signal, the distribution of noisy data may still uniquely identify the distribution of non-noisy signals.
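A miniature of the recovery problem from the first part, assuming a fixed random tanh network as the generator G (purely for illustration) and exact gradients in the latent space; the landscape is non-convex, so the sketch uses random restarts:

```python
import numpy as np

# Given y = A G(z*), recover the signal by minimizing ||A G(z) - y||^2 over z.
rng = np.random.default_rng(12)
k, n, m = 4, 100, 30                         # far fewer measurements than unknowns
W1 = rng.normal(size=(50, k))
W2 = rng.normal(size=(n, 50)) / np.sqrt(50)
G = lambda z: W2 @ np.tanh(W1 @ z)

A = rng.normal(size=(m, n)) / np.sqrt(m)
z_star = rng.normal(size=k)
y = A @ G(z_star)

def descend(z, iters=10000, lr=2e-4):
    for _ in range(iters):
        h = np.tanh(W1 @ z)
        r = A @ (W2 @ h) - y
        J = W2 @ ((1.0 - h ** 2)[:, None] * W1)   # Jacobian of G at z (n x k)
        z = z - lr * 2.0 * J.T @ (A.T @ r)        # exact gradient of the loss
    return z

# Non-convex in z: run a few random restarts and keep the best candidate.
cands = [descend(rng.normal(size=k)) for _ in range(5)]
z_hat = min(cands, key=lambda z: np.sum((A @ G(z) - y) ** 2))
rel = np.linalg.norm(G(z_hat) - G(z_star)) / np.linalg.norm(G(z_star))
print("relative signal error:", round(float(rel), 3))   # typically small
```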
Feb 11 2020 — 1:00PM to 2:00PM — PGH 646A
Speaker: Dr. Cameron Buckner (Department of Philosophy, University of Houston)
Title: Is model transparency good or bad for scientific applications of deep learning?
Abstract
As one might expect from a philosophy talk, the answer to the title question is a big “it depends”; it depends on what we mean by “transparency”, it depends on the uses to which the model's verdicts will be put (especially their epistemological, ethical, and legal contexts), and the things on which it depends are probably much more complicated than we very recently supposed. In this talk, I begin by briefly reviewing the leading theories about why deep learning neural networks are often more efficient and accurate than alternative methods in scientific data analysis. I will then raise a cluster of black box/interpretability challenges to these methods, and discuss the different ways that interpretability might be understood here. In particular, I will use the problem of adversarial examples—artificially created data points that seem to dramatically fool deep learning neural networks, but purportedly do not fool humans—as a way to bring multiple issues related to transparency into conflict. I will conclude by reviewing recent results which challenge the consensus verdict on adversarial examples (which is that they are caused by models overfitting noise in the data set) and discussing the implications of these new findings for philosophy of science and the drive for greater model transparency. In the course of this discussion, we will raise as many questions as we answer, but these questions are ripe for fruitful investigation by a new generation of computationally-informed scientists and philosophers.
Fall 2019
Nov 26 2019 — 3:00PM to 4:00PM — PGH 646A
Speaker: Dr. Emily King (Department of Mathematics, Colorado State University)
Title: ReLU-singular values and Gaussian mean width in neural networks.
Abstract
Feedforward neural networks are simply the composition of a sequence of functions alternating between affine linear maps and a non-linear function called an activation function. One of the most common activation functions is the rectified linear unit (ReLU), which maps negative components of a vector to zero and positive components to themselves. A layer of a network consists of a single affine linear map composed with the activation function. A new concept, ReLU-singular values, which generalizes singular values to the necessarily non-linear maps corresponding to layers with the ReLU activation function, will be introduced. One may leverage ReLU-singular values to prune a neural network. Gaussian mean width, which was originally introduced in high-dimensional geometry, may be used as another mathematical tool to analyze neural networks; it appears to yield information about how successfully a neural network performs the task it was trained to do.
Nov 5 2019 — 3:00PM to 4:00PM — PGH 646A
Speaker: Dr. Anastasios Kyrillidis (Department of Computer Science, Rice University)
Title: Finding low rank solutions, efficiently and provably.
Abstract
A rank-$r$ matrix $X\in\mathbb{R}^{m \times n}$ can be written as a product $UV^\mathsf{T}$, where $U\in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$. One could exploit this observation in optimization: e.g., consider the minimization of a convex function $f(X)$ over rank-$r$ matrices, where the set of rank-$r$ matrices is modeled via the factorization $UV^\mathsf{T}$. Though such parameterization reduces the number of variables, and is more computationally efficient (of particular interest is the case $r \ll \min\{m,n\}$), it comes at a cost: $f(UV^\mathsf{T})$ becomes a non-convex function w.r.t. $U$ and $V$.
We study such a parameterization for the optimization of generic convex objectives $f$, and focus on first-order, gradient descent algorithmic solutions. We propose the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient first-order method that operates on the $U,V$ factors. We show that when $f$ is (restricted) smooth, BFGD has local sublinear convergence, and linear convergence when $f$ is both (restricted) smooth and (restricted) strongly convex. For several key applications, we provide simple and efficient initialization schemes that provide approximate solutions good enough for the above convergence results to hold.
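As a rough illustration of operating on the factors (a minimal sketch of plain gradient descent on $U$ and $V$; BFGD's step-size rule and initialization are more refined):

import numpy as np

def factored_gradient_descent(grad_f, U, V, step=1e-3, iters=500):
    # Gradient descent on f(U V^T) with respect to the factors U and V;
    # grad_f returns the gradient of f at X = U V^T.
    for _ in range(iters):
        G = grad_f(U @ V.T)
        U, V = U - step * G @ V, V - step * G.T @ U  # RHS uses the old U and V
    return U, V

# Example: f(X) = 0.5 * ||X - M||_F^2, whose gradient is X - M.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 target
U0, V0 = rng.standard_normal((50, 3)), rng.standard_normal((40, 3))
U, V = factored_gradient_descent(lambda X: X - M, U0, V0)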
Oct 08 2019 — 3:00PM to 4:00PM — PGH 646A
Speaker: Dr. Vivek Boominathan (Department of Electrical and Computer Engineering, Rice University)
Title: Designing flat cameras by replacing lenses with computation.
Abstract
Miniaturization of cameras is key to enabling new applications in areas such as connected devices, wearables, implantable medical devices, in vivo microscopy, and micro-robotics. Recently, lenses have been identified as the main bottleneck in the miniaturization of cameras. Standard lens-based camera modules are about 10 mm thick or more, and shrinking the lenses leads to a smaller aperture, inferior light collection, noisier images, and worse resolution. In this talk, I will present our efforts in creating cameras whose thickness and weight are roughly 10x smaller. We achieve this by replacing the focusing lens optics with a thinner optical mask placed close to the imaging sensor and employing computational approaches to perform the “focusing” of the images. I will discuss the design principles of the optical masks in terms of camera performance and the computational complexity of the recovery algorithm. I will show results demonstrating the applications of our cameras in 3D microscopy, high-resolution imaging, and computer vision.
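As a toy illustration of focusing in software (our own sketch, with a random matrix as a stand-in for the calibrated mask model; real systems use structured transfer models and more sophisticated solvers):

import numpy as np

# The sensor records linear measurements y = A x + noise, where A models the
# optical mask; the scene x is recovered by Tikhonov-regularized least squares.
rng = np.random.default_rng(0)
n, m = 64, 96                       # scene size, number of sensor measurements
A = rng.standard_normal((m, n))     # stand-in for the mask's transfer matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 1e-2                          # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)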
Spring 2019
May 09 2019 — 10:00AM to 11:00AM — PGH 216 (Special Seminar)
Speaker: Dr. Sarah King (Marine Meteorology Division, U.S. Naval Research Laboratory)
Title: Kalman type filters for data assimilation.
Abstract
Data assimilation is a type of big-data problem found in numerical weather prediction (NWP), where all available information is combined to determine the initial conditions for a particular NWP model. In this talk I will discuss ensemble Kalman filtering in data assimilation and introduce a nonlinear filter designed to be computationally efficient in terms of model evaluations. The method is a form of Kalman quadrature filter which makes assumptions similar to those of the data assimilation systems used in operational meteorology. Additionally, the method is scalable and adjoint-free, making it comparable to ensemble Kalman filtering. We will discuss the prior estimation-error properties of the filter and present examples.
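For orientation, a textbook stochastic ensemble Kalman filter analysis step looks as follows (a generic sketch with perturbed observations, not the quadrature filter introduced in the talk):

import numpy as np

def enkf_analysis(X, y, H, R, rng):
    # X: (n, N) ensemble of model states, y: (p,) observation,
    # H: (p, n) observation operator, R: (p, p) observation-error covariance.
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (N - 1)                            # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                       # updated (analysis) ensemble

# Between analysis steps, each ensemble member is propagated by the forecast
# model; no adjoint of the model is required, which is what makes this class
# of methods attractive in operational settings.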
Apr 15 2019 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Manos Papadakis (Department of Mathematics, University of Houston)
Title: From microlocal analysis and multiscale transforms to Lolaark LLC.
Abstract
Water turbidity is a frequent impediment to achieving satisfactory imaging clarity in underwater video and inhibits the extraction of information concerning the condition of submerged structures. Ports, rivers, lakes, and inland waterways are notoriously difficult spots for camera inspections due to poor visibility. This is a good first paragraph for the story we promote. But reality is different.
We discovered that we could make an impactful contribution to this problem after working on illumination normalization for face recognition and formulating that problem in a new way. Using microlocal analysis, we develop a series of mathematical results which prove that the derivative images we generate with our non-linear multiscale transform contain crucial structural information, because they retain the local smoothness properties of the original inputs.
Armed with hope, we ventured to start Lolaark LLC, which is completing a software system for real-time visibility improvement of underwater video acquired in turbid environments. During the talk we will demonstrate the capabilities of our product ALSvision, and we will share the happy and unhappy experiences of two academics who decided to pursue a career in business in parallel with their academic track.
Feb 19 2019 — 2:30PM to 3:30PM — PGH 646 (Special Seminar)
Speaker: Dr. Ed Saff (Department of Mathematics, Vanderbilt University)
Title: Energy bounds for minimizing Riesz and Gauss configurations.
Abstract
Utilizing frameworks developed by Delsarte, Yudin and Levenshtein, we deduce linear programming lower bounds (as $N \to \infty$) for the Riesz energy of $N$-point configurations on the $d$-dimensional unit sphere in the so-called hypersingular case; i.e., for non-integrable Riesz kernels of the form $|x - y|^{-s}$ with $s > d$. As a consequence, we immediately get (thanks to the Poppy-seed bagel theorem) lower estimates for the large $N$ limits of minimal hypersingular Riesz energy on compact $d$-rectifiable sets. Furthermore, for the Gaussian potential $\exp(-\alpha |x - y|^2)$ on $\mathbb{R}^p$, we obtain lower bounds for the energy of infinite configurations having a prescribed density.
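For reference (our notation, not taken verbatim from the talk), the Riesz $s$-energy of a configuration $\omega_N = \{x_1, \dots, x_N\}$ is
\[
E_s(\omega_N) = \sum_{i \neq j} |x_i - x_j|^{-s},
\]
and it is the large-$N$ growth of the minimum of this quantity that the linear programming bounds control in the hypersingular regime $s > d$.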
Feb 11 2019 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Hien V. Nguyen (Department of Electrical & Computer Engineering, University of Houston)
Title: Challenges for Deep Learning in Medical Image Analysis.
Abstract
In the last few years, deep neural networks have emerged as state-of-the-art approaches for various medical image analysis tasks, including detection, segmentation, and classification of pathological regions. However, there are several fundamental challenges that prevent deep neural networks from achieving their full potential in medical applications.
First, deep neural networks often require a large number of labeled training examples to achieve a superior accuracy over traditional machine learning algorithms or to match human performance. For example, it takes more than 100,000 clinically labeled images obtained from multiple medical institutions for deep networks to match a human dermatologist's diagnostic accuracy. While crowdsourcing services provide an efficient way to create labels for natural images or texts, they are usually not appropriate for medical data, which are subject to high privacy standards. In addition, annotating medical data requires significant medical/biological knowledge, which most crowdsourcing workers do not possess. For these reasons, machine learning researchers often rely on domain experts to label the data. This process is expensive and inefficient, and therefore often unable to produce a sufficient number of labels for deep networks to flourish.
Second, to make matters worse, medical data are often noisy and imperfect. This could be due to missing information in medical records or heterogeneity in sensing technologies and imaging protocols. These effects pose a great challenge for conventional learning frameworks. They also sometimes create a discrepancy between training and test data, thus decreasing the overall system's accuracy, potentially to an extent at which it is no longer safe to use. Finally, human annotations are another major source of errors. For example, high intra- and inter-physician variation is well known in medical diagnostic tasks such as the classification of small lung nodules or histopathological images. This leads to erroneous labels that could derail our learning algorithms. How to effectively deal with imperfection in medical data/labels remains an open research question.
In this talk, I will discuss how to effectively address these challenges. Specifically, I will highlight promising research directions in one-shot learning, designing new network architectures, and multi-stage training of deep networks.
Fall 2018
Nov 12 2018 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Saurabh Prasad (Department of Electrical & Computer Engineering, University of Houston)
Title: Advances in Supervised and Semi-Supervised Machine Learning for Image Analysis of Multi-Modal Geospatial Imagery.
Abstract
Recent advances in optical sensing technology (miniaturization and low-cost architectures for spectral imaging) and sensing platforms from which such imagers can be deployed (e.g. handheld devices, unmanned aerial vehicles) have the potential to enable ubiquitous multispectral and hyperspectral imaging on demand to support geospatial analysis. Often, however, robust analysis with such data is challenging due to limited/noisy ground-truth, and variability due to illumination, scale and atmospheric conditions. In this talk, I will review recent advances in the areas of subspace learning, structured sparsity and Bayesian inference towards robust single and multi-sensor geospatial imaging while providing robustness to the aforementioned challenges. I will also present how some of these ideas bear synergy with emerging trends in deep learning and can support robust deep learning under the small-sample-size scenario. I will discuss the algorithmic developments and present results with a benchmark urban ground-cover classification task, as well as an ecosystem monitoring multi-sensor geospatial dataset.
Oct 29 2018 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. David Fuentes (Department of Imaging Physics, MD Anderson)
Title: Mathematical Model Development for Prediction of Liver Cancer Treatment Response.
Abstract
Curative therapies are not available to the majority of patients with liver cancer. Treatment decisions are difficult and must intricately balance treatment of the disease extent with quality of life and preservation of liver function while minimizing risk of recurrence and metastasis. As each therapeutic approach imposes significant physical, emotional, and financial impact on the patient, there is a well-recognized need for reliable methods that can predict the response to therapy. In this work, we develop an automated approach that uses clinical factors combined with quantitative image features on computed tomography to predict hepatocellular carcinoma (HCC) response to transcatheter arterial chemoembolization (TACE). Our approach for data curation, feature reduction, model calibration, and validation to build confidence in the model prediction accuracy will be discussed. Reliable image registration and segmentation methods are essential for repeatable extraction of quantitative image features as model input.
Sep 24 2018 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Teemu Saksala (Department of Computational and Applied Mathematics, Rice University)
Title: Seeing inside the earth with Riemannian and Finsler geometry.
Abstract
Earthquakes produce seismic waves, which provide a way to obtain information about the deep structures of our planet. The typical measurement is to record the travel time difference of the seismic waves produced by an earthquake. If the network of seismometers is dense enough and they measure a large number of earthquakes, we can hope to recover the wave speed of the seismic waves from the travel time differences. In this talk we will consider geometric inverse problems related to different data sets produced by seismic waves. We will state uniqueness results for these problems and consider the mathematical tools needed for the proofs. The talk is based on joint works with Maarten de Hoop, Joonas Ilmavirta, Matti Lassas and Hanming Zhou.
Sep 10 2018 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. James L. Herring (Department of Mathematics, University of Houston)
Title: LAP—A Fast Solver for Coupled Imaging Problems.
Abstract
Coupled nonlinear inverse problems arise in numerous imaging applications, and solving them is often difficult due to ill-posedness and high computational cost. In this work, we introduce LAP, a linearize-and-project method for coupled nonlinear inverse problems with two (or more) sets of coupled variables. LAP is implemented within a Gauss–Newton framework. At each iteration of the Gauss–Newton optimization, LAP linearizes the residual around the current iterate, eliminates one block of variables via a projection, and solves the resulting reduced-dimensional problem for the Gauss–Newton step. The method is best suited for problems where the subproblem associated with one set of variables is comparatively well-posed or easy to solve. LAP supports iterative, direct, and hybrid regularization, as well as element-wise bound constraints on all blocks of variables. This offers various options for incorporating prior knowledge of a desired solution. We demonstrate the advantages of these characteristics with several numerical experiments. We test LAP on two- and three-dimensional problems in super-resolution and MRI motion correction, two separable nonlinear least-squares problems that are linear in one block of variables and nonlinear in the other. We also use LAP for image registration subject to local rigidity constraints, a problem that is nonlinear in all sets of variables. These two classes of problems demonstrate the utility and flexibility of the LAP method.
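To illustrate the elimination idea on a separable problem $r(u, w) = A(w)u - b$ that is linear in $u$ and nonlinear in $w$, here is a minimal variable-projection-style sketch (illustrative only, with a finite-difference Jacobian for brevity; LAP itself projects within the Gauss–Newton step and handles regularization and bound constraints):

import numpy as np

def varpro_gauss_newton(A, w, b, iters=20, eps=1e-6):
    # A: callable returning the matrix A(w); w: nonlinear parameters; b: data.
    def reduced_residual(w):
        u = np.linalg.lstsq(A(w), b, rcond=None)[0]   # eliminate the linear block
        return A(w) @ u - b
    for _ in range(iters):
        r = reduced_residual(w)
        J = np.column_stack([                          # finite-difference Jacobian
            (reduced_residual(w + eps * e) - r) / eps
            for e in np.eye(len(w))
        ])
        w = w - np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton step in w only
    return w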
Aug 27 2018 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Emily King (Department of Mathematics, University of Bremen)
Title: Edge, Ridge, and Blob Detection with Symmetric Molecules.
Abstract
In this talk, a novel approach to the detection and characterization of edges, ridges, and blobs in two-dimensional images will be presented. The approach exploits the symmetry properties of directionally sensitive analyzing functions in multiscale systems constructed in the framework of $\alpha$-molecules, a generalization of shearlets. The proposed feature detectors are inspired by the notion of phase congruency, stable in the presence of noise, and by definition invariant to changes in contrast. It will also be shown that the behavior of coefficients corresponding to differently scaled and oriented analyzing functions can be used to obtain a comprehensive characterization of the geometry of features in terms of local tangent directions, widths, and heights. The accuracy and robustness of the proposed measures will be validated and compared to various state-of-the-art algorithms in extensive numerical experiments on sets of clean and distorted synthetic images with reliable ground truths. To further demonstrate the applicability, it will be shown that the proposed ridge measure can be used to detect and characterize blood vessels in digital retinal images and that the proposed blob measure can be applied to automatically count the number of cell colonies in a Petri dish.
Spring 2018
Apr 09 2018 — 2:30PM to 3:30PM — PGH 646
Speaker: Nikolaos Karantzas (Department of Mathematics, University of Houston)
Title: On the design of multi-dimensional compactly supported Parseval framelets with directional characteristics.
Abstract
We propose a new method for the construction of multi-dimensional, wavelet-like families of affine frames, commonly referred to as framelets, with specific directional characteristics, small and compact support in space, directional vanishing moments (DVM), and axial symmetries or anti-symmetries. The framelets we construct arise from readily available refinable functions. The filters defining these framelets have few non-zero coefficients and custom-selected orientations, and they can act as finite difference operators.
Apr 02 2018 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Manos Papadakis (Department of Mathematics, University of Houston)
Title: ΑΛΣvision: Multiscale sparse representations and microlocal analysis for visibility improvement of underwater video and our journey from math to commercialization.
Abstract
This talk has two parts. The first is more mathematical and presents how we can use a non-linear operator built on a multiscale 2-D wavelet transform to address illumination neutralization, the process of generating a surrogate, light-neutral image of a scene. We prove that with this operator we can achieve almost light neutrality, and we then show that we maintain the microlocal structure of the image of a scene, practically regardless of illumination.
The second part demonstrates how we use this method to improve the visibility of underwater video. Water turbidity is a frequent impediment to achieving satisfactory imaging clarity in underwater video and inhibits the extraction of information concerning the condition of submerged structures. Ports, rivers, lakes, and inland waterways are notoriously difficult spots for camera inspections due to poor visibility. This is essentially the same type of problem we see when we have poor or uneven illumination of a scene. We will close by sharing some of our experiences in the effort to commercialize this technology, as ΑΛΣvision, without UH support but with a lot of hope and hard work, clearly outside our professional experience. We hope that the audience will see a different professional pathway emerging from our experience. The talk is open to all graduate students and interested faculty and will be accessible to all.
Mar 26 2018 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Meng Li (Department of Statistics, Rice University)
Title: Partition mixture of 1D wavelets for multidimensional data.
Abstract
We introduce a probabilistic model-based technique called WARP, or wavelets with adaptive random partitioning, with which multidimensional signals can be represented by a mixture of one-dimensional (1D) wavelet decompositions. A probability model, in the form of randomized recursive partitioning, is constructed on the space of wavelet coefficient trees, allowing the decomposition to adapt to the geometric features of the signal. In particular, when combined with the Haar basis, we show that fully probabilistic function estimation can be carried out in closed form using exact recursive belief propagation. We demonstrate via numerical experiments that with WARP, even simple 1D Haar wavelets can achieve excellent performance in image denoising, outperforming state-of-the-art multidimensional wavelet-based methods, especially in low signal-to-noise-ratio settings.
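As background, the 1D building block is the classical Haar analysis step (a generic sketch, not the WARP construction itself):

import numpy as np

def haar_level(x):
    # One level of the orthonormal 1D Haar transform (len(x) assumed even):
    # pairwise averages give the approximation coefficients, pairwise
    # differences the detail coefficients.
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

# A full decomposition recurses on the approximation coefficients; roughly
# speaking, WARP's randomized recursive partitioning governs where such 1D
# transforms are applied within the multidimensional signal.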
Mar 19 2018 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Yingchun Zhang (Department of Biomedical Engineering, University of Houston)
Title: Dynamics of bioelectrical activity in human body characterized using a multimodal imaging approach.
Abstract
Bioelectrical activity is associated with excitable living tissue and is closely related to the mechanisms and functions of excitable membranes in living tissues/organs. The human brain comprises multiple functional regions that activate dynamically to support daily life; muscle contraction is initiated by the generation of electrical activity at the cell level in the form of action potentials, which propagate along muscle directions in a dynamic way. A better understanding of bioelectrical activity will lead to a better understanding of the functions of organ systems as well as the mechanisms underlying bioelectric phenomena. Two examples will be discussed in this presentation: (i) utilizing a 3D innervation zone imaging approach to image the distribution of innervation zones from high-density surface EMG recordings for individualized muscle spasticity treatment; (ii) assessing causal brain network alterations associated with emotional processing and regulation with a concurrent EEG/fMRI integrated analysis.
Mar 05 2018 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Laurent Younes (Department of Applied Mathematics and Statistics, Johns Hopkins University)
Title: Riemannian metrics associated with metamorphoses on spaces of curves.
Abstract
We consider a class of metrics on length-normalized curves in $d$ dimensions, represented by their tangent angle expressed as a function of arc length, i.e., a function from the unit interval to the $(d-1)$-dimensional unit sphere. These metrics are derived from the combined action of diffeomorphisms (changes of parameter) and rotations acting on the tangent angle. Minimizing a Riemannian metric balancing a right-invariant metric on diffeomorphisms and an $L^2$ norm on rotations leads to a special case of "metamorphosis", which provides a general framework adapted to similar situations in which a Lie group acts on a Riemannian manifold. When the norm on diffeomorphisms is Sobolev of order 1, the resulting problem is identical to that introduced by Mio and Srivastava for the study of plane curves. Our presentation will offer a new angle on the analysis of this family of metrics, for which we will present new results and experiments.
Feb 26 2018 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Luca Giancardo (UTHealth School of Biomedical Informatics)
Title: Detection of motor decline in early Parkinson's disease by interaction with digital devices and brain imaging.
Abstract
Parkinson's disease (PD) is the second most prevalent neurodegenerative disorder in the western world. It is estimated that PD neuronal loss precedes the clinical diagnosis by more than 10 years, leading to a subtle motor decline that cannot be detected with clinical measurements in the current standard of care. This is an urgent unmet medical need. In fact, multiple research groups have independently argued that experimental neuroprotective drugs could significantly slow down or stop the disease progression if administered at the early stages of neuronal damage.
Clinical tools able to measure ecologically valid motor phenotypes to detect and stage motor decline are still elusive. It is known that tests run in the clinic are somewhat artificial and cannot capture the whole complexity and variation of PD and other neurodegenerative diseases. In recent studies, my group has shown the feasibility of passively monitoring the daily interaction with digital devices to measure motor signs in the early stages of PD. In this talk, I will present an overview of these approaches as well as future steps to enhance them by integrating functional and structural Magnetic Resonance Imaging (MRI) connectomes.
Feb 12 2018 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Edward Castillo (Beaumont Health System Research Institute)
Title: CT-derived functional imaging and its applications.
Abstract
Medical imaging is essential for diagnosing and treating many diseases. While imaging modalities such as magnetic resonance imaging and computed tomography (CT) provide a visualization of internal anatomy, functional imaging modalities provide information on physiological activity. For instance, positron emission tomography measures metabolic activity and pulmonary ventilation scans quantify respiration. However, in comparison to CT imaging, functional imaging requires a longer acquisition time, has a lower spatial resolution, and is often susceptible to motion artifacts, particularly in the lungs. With the goal of addressing these shortcomings, we have developed CT-derived functional imaging (CT-FI), an image-processing-based modality that uses numerical optimization methods to quantify pulmonary function from dynamic computed tomography (often referred to as 4DCT). In this talk, I will present the mathematical derivation and numerical implementation of CT-FI, as well as its applications within cancer radiotherapy, diagnostic imaging, and emergency room medicine.
Feb 05 2018 — 2:30PM to 3:30PM — PGH 646
Speaker: Dr. Jim Hsu (University of Texas Medical Branch)
Title: Cell-based image analysis: primer and computational toolkits.
Abstract
Cell-based imaging offers multimodal, multichannel testing of scientific hypotheses in a manner scalable for automation as an unbiased source of scientific information. The category of techniques known as image profiling allows hundreds of features to be interrogated to identify relevant similarities among various treatments, and has been applied at scales ranging from individual cells to dense populations of cells and tissue. Applications of these techniques are illustrated on a real dataset: a population of mouse hippocampal neurons treated with small-molecule inhibitors to simulate the dysregulation of phenotype that occurs in psychiatric disease. In addition to approaches and strategies of image profiling, limitations and pitfalls of image analysis applicable to real-life data analysis scenarios are also presented. Emerging technologies such as machine learning and deep learning applicable to image profiling are briefly discussed at the end.
Spring 2015
Jan 28 2015 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Xuemei Chen (University of Missouri, Department of Mathematics)
Title: A 'frame'work for compressed sensing.
Abstract
Compressed sensing aims to reconstruct sparse signals (signals in which most coordinates are zero) from very few linear measurements. There has been an explosion of research activity in this area during the last decade due to its wide applications, including imaging (e.g., photography, MRI), radar, secure communication, and machine learning. In this talk, I will give a brief introduction to compressed sensing. Then I will focus on the compressed sensing problem in the setting where signals are sparse in a frame/dictionary, with the goal of building a framework for this setting. Some interesting and surprising results are presented along the way.
Feb 02 2015 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Pascal Fua (Ecole Polytechnique Fédérale de Lausanne, Computer Vision Laboratory)
Title: Modeling Brain Circuitry over a Wide Range of Scales.
Abstract
Electron microscopes (EM) can now provide the nanometer resolution that is needed to image synapses, and therefore connections, while light microscopes (LM) operate at the micrometer resolution required to model the 3D structure of the dendritic network. Since both the arborescence and the connections are integral parts of the brain's wiring diagram, combining these two modalities is critically important.
In this talk, I will therefore present our approach to building the dendritic arborescence, to segmenting intra-neuronal structures from EM images, and to registering the resulting models. I will also argue that the techniques that are in wide usage in the Computer Vision and Machine Learning community are just as applicable in this context.
Apr 06 2015 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Yifei Lou (University of Texas at Dallas, Applied Mathematics)
Title: A Non-convex Approach for Signal and Image Processing.
Abstract
A fundamental problem in compressed sensing (CS) is to reconstruct a sparse signal from a few linear measurements, far fewer than the physical dimension of the signal. Currently, CS favors incoherent systems, in which any two measurements are as little correlated as possible. In reality, however, many problems are coherent, in which case conventional methods, such as L1 minimization, do not work well. In this talk, I will present a novel non-convex approach, which is to minimize the difference of the L1 and L2 norms (L1-L2) in order to promote sparsity. Efficient minimization algorithms are constructed and analyzed based on the difference-of-convex-functions methodology. The resulting DC algorithms (DCA) can be viewed as convergent and stable iterations on top of L1 minimization, hence improving on L1 consistently.
Through experiments, we discover that both L1 and L1-L2 obtain better recovery results from more coherent matrices, a phenomenon not yet explained by the theoretical analysis of exact sparse recovery. In addition, numerical studies motivate us to consider a weighted difference model L1-aL2 (a>1) to deal with ill-conditioned matrices when L1-L2 fails to obtain a good solution. An extension of this model to image processing will also be discussed; it turns out to be a weighted difference of anisotropic and isotropic total variation (TV), based on the well-known TV model and natural image statistics. Numerical experiments on image denoising, image deblurring, and magnetic resonance imaging (MRI) reconstruction demonstrate that our method improves on the classical TV model consistently and is on par with representative state-of-the-art methods.
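A minimal sketch of the DCA idea for the L1-L2 model (our illustration, assuming an unconstrained least-squares formulation; not the authors' exact algorithm): the concave part -||x||_2 is linearized at the current iterate, and the resulting convex L1 subproblem is solved approximately by proximal gradient descent.

import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_minus_l2_dca(A, b, lam=0.1, outer=20, inner=200):
    # Approximately minimizes 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_2).
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    for _ in range(outer):
        nx = np.linalg.norm(x)
        q = x / nx if nx > 0 else np.zeros(n)  # subgradient of ||x||_2 at x_k
        for _ in range(inner):                 # ISTA on the convex subproblem
            g = A.T @ (A @ x - b) - lam * q    # gradient of the smooth terms
            x = soft(x - g / L, lam / L)       # proximal (soft-threshold) step
    return x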
Apr 14 2015 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Wotao Yin (UCLA, Department of Mathematics)
Title: Bias-free sparse optimization.
Abstract
We introduce a new sparse signal recovery method with a number of attractive theoretical and computational properties. In particular, it does not have the bias found in LASSO. As has been known since Jianqing Fan and Runze Li's publication in 2001, points on a LASSO path are biased. To avoid the bias, instead of the convex l1 energy used in LASSO, one must minimize a nonconvex energy and therefore lose the computational advantages of convex minimization.
Our new method recovers a sparse signal by evolving an ordinary differential inclusion, which involves the subdifferential of the l1 energy. We show that our method can find a solution that is the unbiased estimate of the true signal and whose entries have signs consistent with those of the true signal. All of this is achieved without any post-processing or debiasing step; in fact, the method works better than LASSO combined with debiasing. We also show how to compute our path efficiently, both exactly and, much faster, inexactly. The exact path can be computed in finitely many steps at a low cost per step. For problems with huge data, we generate an approximate regularization path by the so-called Linearized Bregman iteration, which is fast and easy to parallelize yet still has the sign-consistency property and is only slightly biased.
This is joint work with Stanley Osher, Feng Ruan, Jiechao Xiong, Ming Yan, and Yuan Yao.
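For reference, one common form of the Linearized Bregman iteration reads as follows (a sketch under standard parameter choices, not the authors' exact path algorithm):

import numpy as np

def linearized_bregman(A, b, mu=5.0, delta=None, iters=5000):
    # Solves min mu*||x||_1 + (1/(2*delta))*||x||_2^2 subject to Ax = b;
    # for large mu this approximates the basis-pursuit (sparsest) solution.
    m, n = A.shape
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
    v = np.zeros(n)
    for _ in range(iters):
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrinkage
        v = v + A.T @ (b - A @ x)                                 # Bregman/dual update
    return delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)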
Apr 27 2015 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Glenn Easley (MITRE Corporation)
Title: Super-resolution using a compressive sensing architecture.
Abstract
We present experimental results for a novel super-resolution imaging device that measures projections onto a random basis. The imaging system follows an architecture that comes from the theory of compressed sensing. We develop the system model from experimentally acquired calibration data and evaluate system performance as a function of the size of the basis set, or equivalently, the number of projections applied in the reconstruction. Simulations show the sensitivity of the approach to fundamental physical parameters certain to be encountered in real systems, including optical diffraction and noise.
May 04 2015 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Panagiotis Tsiamyrtzis (Athens University of Economics and Business, Department of Statistics)
Title: A Bayesian statistical process control approach in modeling count type data.
Abstract
We will start with a short introduction to Statistical Process Control (SPC). Then we will consider a process producing count-type data from a Poisson distribution. The Poisson parameter (mean and variance) can experience jumps at random times. These jumps can be in either direction, i.e., either upward (process degradation) or downward (process improvement), and of random size. Our interest is in monitoring the underlying parameter in an on-line fashion (detecting “out of control” behavior). The methodology is based on a Bayesian, sequentially updated scheme of mixtures of Gamma distributions arising from the adoption of a change-point model. Issues regarding inference, prediction, and robustness will be covered. The proposed method will be tested against the Shiryaev-Roberts change-point alternative and will be applied to a real data set. The developed methodology is very appealing for Phase I and/or short-run count data.
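As background, the conjugate Gamma-Poisson update underlying such schemes is simple (a generic sketch; the talk's model adds change points and mixture weights):

# With a Gamma(a, b) prior on the Poisson rate and observed counts x_1..x_n,
# the posterior is Gamma(a + sum(x), b + n).
def gamma_poisson_update(a, b, counts):
    return a + sum(counts), b + len(counts)

# Example: prior Gamma(2, 1) and observed counts 3, 5, 4 give posterior
# Gamma(14, 4), whose mean is 14/4 = 3.5.
a_post, b_post = gamma_poisson_update(2.0, 1.0, [3, 5, 4])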
Aug 07 2015 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Davide Barbieri (Universidad Autónoma de Madrid, Departamento de Matemáticas)
Title: Wavelet analysis in primary visual cortex.
Abstract
A large portion of primary visual cortex neurons activates with an approximately linear response to visual stimuli. Their behavior can be modeled to a certain extent as a projection of images onto a family of multiscale Gabor filters, with respect to a set of parameters that seems to be related to the task of edge and contour detection. This seminar aims to discuss some properties of that analyzing family, and to briefly comment on some details and limitations of this approach.
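For concreteness, a standard 2D Gabor filter of the kind referenced here can be generated as follows (a generic sketch; parameter names and values are ours):

import numpy as np

def gabor_kernel(size=31, sigma=4.0, theta=0.0, wavelength=8.0, phase=0.0):
    # Gaussian envelope multiplied by an oriented plane wave.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# Projecting an image patch onto the kernel approximates the linear response
# of such a neuron: response = (patch * gabor_kernel()).sum().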
Aug 07 2015 — 2:00PM to 3:00PM — PGH 646
Speaker: Dr. Bin Dong (Peking University, Beijing International Center for Mathematical Research)
Title: Wavelet Frame Transforms and Differential Operators: Bridging Discrete and Continuum for Image Restoration and Data Analysis.
Abstract
My talk is mainly based on a series of three of our recent papers, in which fundamental connections between the wavelet frame based approach and PDE based approaches (including variational and nonlinear PDE based methods) were established. In particular, connections to the total variation model, the Mumford-Shah model, and anisotropic diffusion were established. The series of three papers showed that wavelet frame transforms are discretizations of differential operators in both variational and PDE frameworks, and that such discretizations are superior to some of the traditional finite difference schemes for image restoration. This new understanding essentially merged two seemingly unrelated areas. It also gave birth to many innovative and more effective image restoration models and algorithms.
Although the main application considered is image restoration, I will also discuss possible extensions to high-dimensional unstructured data analysis. I will present a characterization and construction of tight wavelet frames on non-flat domains in both the continuum setting, i.e., on manifolds, and the discrete setting, i.e., on graphs; I will also discuss how fast tight wavelet frame transforms can be computed and how they can be effectively used to process and analyze graph data.
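As a tiny illustration of the discretization claim above (our example, not from the papers; signs and normalizations depend on convention): the undecimated Haar high-pass channel is, up to a constant, a first-order finite difference.

import numpy as np

t = np.linspace(0.0, 1.0, 129)
f = np.sin(2 * np.pi * t)                      # samples of a smooth function
h = t[1] - t[0]
haar_detail = (f[1:] - f[:-1]) / np.sqrt(2.0)  # undecimated Haar high-pass output
fd = (f[1:] - f[:-1]) / h                      # forward finite difference
# haar_detail equals (h / sqrt(2)) * fd exactly, so the wavelet coefficients
# are a scaled discrete derivative, approximating f'(t) up to normalization.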