# Scientific

## A Weyl-type inequality for irreducible elements in function fields, with applications

We establish a Weyl-type estimate for exponential sums over irreducible elements in function fields. As an application, we generalize an equidistribution theorem of Rhin. Our estimate applies to polynomials whose degree exceeds the characteristic of the field, a barrier for the traditional Weyl differencing method. In this talk, we briefly introduce Lê, Liu, and Wooley's original argument for ordinary Weyl sums (taken over all elements) and explain how we generalize it to estimate bilinear exponential sums with general coefficients. This is joint work with Jérémy Campagne (Waterloo), Thái Hoàng Lê (Mississippi), and Yu-Ru Liu (Waterloo).
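For orientation, a classical Weyl sum over the integers, the analogue of the "ordinary" sums mentioned above, can be computed numerically. The following is an illustrative sketch, not taken from the talk:

```python
import cmath
import math

# Illustrative only: a classical Weyl sum S(alpha) = sum_{n<=N} e(alpha n^k)
# over the integers, the analogue of the "ordinary" Weyl sums above.
def weyl_sum(alpha, k, N):
    return sum(cmath.exp(2j * math.pi * alpha * n**k) for n in range(1, N + 1))

N = 10_000
S = weyl_sum(math.sqrt(2), 3, N)  # irrational alpha => substantial cancellation
print(abs(S) / N)                 # far below the trivial bound |S| <= N
```

For rational or zero `alpha` the phases align and no cancellation occurs, which is the dichotomy Weyl-type estimates quantify.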

## Basic reductions of abelian varieties

Given an abelian variety A defined over a number field, a conjecture attributed to Serre states that the set of primes at which A admits ordinary reduction has positive density. This conjecture has been proved for elliptic curves (Serre, 1977), abelian surfaces (Katz, 1982; Sawin, 2016), and certain higher-dimensional abelian varieties (Pink, 1983; Fité, 2021; among others).

In this talk, we will discuss ideas behind these results and recent progress for abelian varieties with non-trivial endomorphisms, including the case where A has almost complex multiplication by an abelian CM field, based on joint work with Cantoral-Farfan, Mantovan, Pries, and Tang.

Apart from ordinary reduction, we will also discuss the set of primes at which an abelian variety admits basic reduction, generalizing a result of Elkies on the infinitude of supersingular primes for elliptic curves. This is joint work with Mantovan, Pries, and Tang.
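A toy numerical illustration of the ordinary/supersingular dichotomy (not from the talks): for the CM elliptic curve $y^2 = x^3 + x$, reduction at an odd prime $p$ is supersingular exactly when $p \equiv 3 \pmod 4$, so the ordinary primes have density 1/2 — positive, as Serre's conjecture predicts.

```python
# Toy illustration (not from the talk): for E : y^2 = x^3 + x, which has CM
# by Z[i], reduction mod an odd prime p is supersingular precisely when
# p = 3 (mod 4); the ordinary primes thus have density 1/2.
def trace_of_frobenius(p):
    # naive point count over F_p: #E(F_p) = affine + 1 = p + 1 - a_p
    sq = [0] * p
    for y in range(p):
        sq[y * y % p] += 1
    affine = sum(sq[(x * x * x + x) % p] for x in range(p))
    return p - affine  # a_p

def primes(limit):
    return [n for n in range(3, limit)
            if all(n % d for d in range(2, int(n**0.5) + 1))]

# supersingular <=> a_p = 0 mod p (and a_p = 0 for p >= 5, by Hasse's bound)
supersingular = [p for p in primes(200) if trace_of_frobenius(p) % p == 0]
print(supersingular)  # exactly the primes = 3 (mod 4) below 200
```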

## On the Art of Giving the Same Name to Different Things

Mathematics has developed an increasingly “higher dimensional” point of view of when different things deserve the same name, categorifying the traditional logical notion of equality to isomorphism (from Greek isos “equal” and morphe “form” or “shape”) and equivalence (from Latin aequus “equal” and valere “be well, be worth”). In practice, mathematicians tend to become more flexible in determining when different things deserve the same name as those things become more complicated, as measured by the dimensions of the categories to which they belong. Unfortunately, these pervasive notions of sameness no longer satisfy Leibniz’s identity of indiscernibles — the assertion that two objects are identical just when they share the same properties — essentially because the traditional set theoretical foundations of mathematics make it too easy to formulate “evil” statements. However, in a new proposed foundation system there are common rules that govern the meaning of identity for mathematical objects of any type that allow one to “transport” information along any identification. Moreover, as a consequence of Voevodsky’s univalence axiom, these identity types are faithful to the meanings of sameness that have emerged from centuries of mathematical practice.
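The "transport" of information along an identification can be made concrete in a proof assistant. A minimal sketch in Lean 4 (illustrative, not from the talk):

```lean
-- Transport: given an identification h : a = b, any property P of a
-- can be carried over to b.
example {α : Type} (P : α → Prop) {a b : α} (h : a = b) (pa : P a) : P b :=
  h ▸ pa
```

Under univalence, the same mechanism transports structures and theorems along equivalences of types, not just along strict equalities.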

Speaker biography: Emily Riehl is Professor of Mathematics at Johns Hopkins University, working on higher category theory, abstract homotopy theory, and homotopy type theory. She studied at Harvard and Cambridge Universities, earned her Ph.D. at the University of Chicago, and was a Benjamin Peirce and NSF postdoctoral fellow at Harvard University. She has published over thirty papers and written three books: Categorical Homotopy Theory (Cambridge 2014), Category Theory in Context (Dover 2016), and Elements of ∞-Category Theory (Cambridge 2022, joint with Dominic Verity). She was recently elected as a member at large of the Council of the American Mathematical Society. In addition to her research, Dr. Riehl is active in promoting access to the world of mathematics through popular writing and in interviews and podcasts. She was also a co-founder of Spectra: the Association for LGBT Mathematicians.

## Conditional estimates for logarithms and logarithmic derivatives in the Selberg class

The Selberg class consists of functions that share key properties with the Riemann zeta function, the prototypical example of a function in this class. Estimates for the logarithms and logarithmic derivatives of Selberg class functions are connected, for example, to primes in arithmetic progressions.
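For reference, the defining axioms of the Selberg class, in their standard formulation (my summary, not taken from the talk), are:

```latex
% Defining axioms for F in the Selberg class (standard formulation):
\begin{itemize}
  \item Dirichlet series: $F(s) = \sum_{n \ge 1} a_n n^{-s}$,
        absolutely convergent for $\operatorname{Re}(s) > 1$;
  \item analytic continuation: $(s-1)^m F(s)$ extends to an entire function
        of finite order for some integer $m \ge 0$;
  \item functional equation: $\Phi(s) = Q^s \prod_{j=1}^k
        \Gamma(\lambda_j s + \mu_j)\, F(s)$ satisfies
        $\Phi(s) = \omega\, \overline{\Phi(1-\bar{s})}$ with
        $Q > 0$, $\lambda_j > 0$, $\operatorname{Re}(\mu_j) \ge 0$, $|\omega| = 1$;
  \item Ramanujan hypothesis: $a_n \ll_\varepsilon n^{\varepsilon}$
        for every $\varepsilon > 0$;
  \item Euler product: $\log F(s) = \sum_n b_n n^{-s}$, where $b_n = 0$
        unless $n$ is a prime power, and $b_n \ll n^{\theta}$
        for some $\theta < 1/2$.
\end{itemize}
```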

In this talk, I will discuss effective and explicit estimates for the logarithms and logarithmic derivatives of Selberg class functions in the region Re(s) ≥ 1/2 + δ, where δ > 0 is fixed.

## Shifted divergences for sampling, privacy, and beyond

Shifted divergences provide a principled way of making information-theoretic divergences (e.g., KL) geometrically aware via optimal transport smoothing. In this talk, I will argue that shifted divergences provide a powerful approach to unifying optimization, sampling, privacy, and beyond. For concreteness, I will demonstrate these connections via three recent highlights: (1) characterizing the differential privacy of Noisy-SGD, the standard algorithm for private convex optimization; (2) characterizing the mixing time of the Langevin Algorithm to its stationary distribution for log-concave sampling; and (3) the fastest high-accuracy algorithm for sampling from log-concave distributions. A recurring theme is a certain notion of algorithmic stability, and the central technique for establishing it is shifted divergences. Based on joint work with Kunal Talwar and with Sinho Chewi.
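For reference, the shifted Rényi divergence in this line of work is typically defined via $\infty$-Wasserstein smoothing; the following is my paraphrase of the standard definition, not verbatim from the talk:

```latex
% Shifted Renyi divergence: smooth \mu over an optimal-transport ball of
% radius z before measuring its divergence to \nu.
D_\alpha^{(z)}(\mu \,\|\, \nu)
  \;=\; \inf_{\mu' \,:\, W_\infty(\mu, \mu') \le z} D_\alpha(\mu' \,\|\, \nu)
```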

## Perceptual Learning in Olfaction: Flexibility, Stability, and Cortical Control

The ability to learn and remember is an essential property of the brain that is not limited to high-level processing. In fact, the perception of olfactory stimuli in rodents is strongly shaped by learning processes in the olfactory bulb, the very first brain area to process olfactory information. We developed computational models for the two structural plasticity mechanisms at work. The models capture key aspects of a host of experimental observations and show how the separate plasticity time scales allow perceptual learning to be fast and flexible, but nevertheless produce long-lasting memories. The modeling gives strong evidence for the formation of odor-specific neuronal subnetworks and indicates how their formation is likely under top-down control.

## Quantitative estimates for the size of an intersection of sparse automatic sets

In 1979, Erdős conjectured that for $k \ge 9$, $2^k$ is not a sum of distinct powers of 3. That is, the set of powers of 2 (which is 2-automatic) has finite intersection with the 3-automatic set consisting of numbers whose ternary expansions omit the digit 2. In the theory of automata, a theorem of Cobham (1969) says that if $k$ and $\ell$ are multiplicatively independent natural numbers, then a subset of the natural numbers that is both $k$- and $\ell$-automatic is eventually periodic. A multidimensional extension was later given by Semenov (1977). Motivated by Erdős' conjecture and in light of Cobham's theorem, we give a quantitative version of the Cobham-Semenov theorem for sparse automatic sets, showing that the intersection of a sparse $k$-automatic subset of $\mathbb{N}^d$ and a sparse $\ell$-automatic subset of $\mathbb{N}^d$ is finite. Moreover, we give effectively computable upper bounds on the size of the intersection in terms of data from the automata that accept these sets.
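The finiteness in Erdős' conjecture is easy to probe numerically: $2^k$ is a sum of distinct powers of 3 exactly when the ternary expansion of $2^k$ uses only the digits 0 and 1. A quick sketch (illustrative, not from the paper):

```python
# 2^k is a sum of distinct powers of 3 iff the base-3 digits of 2^k
# are all 0 or 1.
def ternary_digits(n):
    digits = []
    while n:
        digits.append(n % 3)
        n //= 3
    return digits

hits = [k for k in range(1000) if max(ternary_digits(2**k)) <= 1]
print(hits)  # [0, 2, 8]: consistent with the conjecture that none occur for k >= 9
```

(Indeed $2^0 = 3^0$, $2^2 = 3 + 1$, and $2^8 = 243 + 9 + 3 + 1$.)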

## An explicit estimate on the mean value of the error in the prime number theorem in intervals

The prime number theorem (PNT) gives us the density of primes amongst the natural numbers. We can extend this idea and ask whether a given interval contains the asymptotic number of primes predicted by the PNT. Currently, this has only been proven for sufficiently large intervals. We can also ask whether the PNT holds for sufficiently large intervals ‘on average’, which requires estimating the mean value of the error in the PNT in intervals. I will present a new explicit estimate for this quantity, building on Selberg's work from 1943, along with two applications: one to primes in intervals, and one to Goldbach numbers in intervals.
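Concretely, with $\psi$ the Chebyshev function, the mean value in question is of the following shape (my sketch of one common normalization, not necessarily the exact object of the talk):

```latex
% Mean square of the PNT error over intervals of relative length \delta:
J(X, \delta) \;=\; \frac{1}{X} \int_{1}^{X}
  \bigl( \psi\bigl(x(1+\delta)\bigr) - \psi(x) - \delta x \bigr)^{2} \, dx
```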

## Understanding adversarial robustness via optimal transport

Deep learning-based approaches have achieved surprising success in many scientific fields. One of their first and most celebrated achievements is image classification, where deep learning-based algorithms now outperform humans. However, the machine learning community has observed that adding a well-designed small perturbation, called an ‘adversarial attack’, can seriously degrade the performance of a neural network: although humans still classify the perturbed image correctly, the machine fails completely. Since this issue is serious in practice, for example in security or self-driving cars, practitioners want machines that are more robust against such attacks, which motivates the ‘adversarial training problem’. Until very recently, however, there was little theoretical understanding of it. In this talk, I will present recent progress on understanding the adversarial training problem. The key idea connecting the two areas originates from the (Wasserstein) barycenter problem, one of the well-known applications of optimal transport theory. I will introduce the ‘generalized barycenter problem’, an extension of the classical barycenter problem, and its multimarginal optimal transport formulations. Through the lens of these tools, one can understand the geometric structure of adversarial training problems. One crucial advantage of this result is that it allows one to utilize many computational optimal transport tools. Lastly, time permitting, I will present a result on the existence of optimal robust classifiers that not only extends the binary setting to the multiclass one but also provides a clean interpretation via duality.
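The attack phenomenon can be seen already for a linear classifier. A minimal sketch of a fast-gradient-sign style perturbation, with hypothetical weights and input (illustrative, not the talk's construction):

```python
import numpy as np

# Minimal sketch (illustrative): a fast-gradient-sign style attack on a
# linear classifier f(x) = w @ x, with predicted label sign(f(x)).
w = np.array([1.0, -2.0, 3.0])  # hypothetical trained weights
x = np.array([0.1, 0.0, 0.1])   # input correctly classified as positive
eps = 0.2                       # small L_inf perturbation budget

score = w @ x                   # 0.4 > 0: class +1
x_adv = x - eps * np.sign(w)    # worst-case L_inf perturbation against class +1
score_adv = w @ x_adv           # 0.4 - eps * ||w||_1 = -0.8 < 0: misclassified
print(score, score_adv)
```

Each coordinate of the input moves by at most `eps`, yet the predicted class flips; adversarial training asks for classifiers robust to every such bounded perturbation.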

## What conifer trees can show us about how organs are positioned in developing organisms

One of the central questions in developmental biology is how organs form in the correct positions in order to create a functional mature organism. Plant leaves offer an easily observable example of organ positioning, with species-specific motifs for leaf arrangement (phyllotaxis). These patterns arise through a combination of chemical pattern formation, mechanical stresses and growth. Mathematical modelling in each of these areas (and their combinations) contributes to quantitative understanding of developmental mechanisms and morphogenesis in general. Conifer trees are some of the most characteristic plants of BC. They also display a type of ring patterning of their embryonic leaves (cotyledons), which I believe offers a unique route to understanding plant phyllotaxis in general. I will discuss how early work at UBC on similar patterning in algae led to application of reaction-diffusion models in conifer development. This framework has guided experiments at BCIT and recently led to a model that accounts for the natural variability in conifer cotyledon number. The model involves the kinetics of a highly conserved gene regulation module and therefore sheds light on the chemical pattern formation control of phyllotaxis across plants. Conifer patterning also demonstrates scaling of position to organism size, an active area of research in animal development: the model provides some mechanistic insight into how this can occur via chemical kinetics.
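The kind of chemical patterning model referred to here can be sketched in a few lines. Below is a minimal 1D reaction-diffusion simulation with Schnakenberg-type kinetics and illustrative parameters; it is not the conifer cotyledon model itself, only an example of the framework:

```python
import numpy as np

# Minimal 1D reaction-diffusion sketch (Schnakenberg-type kinetics) of the
# sort used for chemical pattern formation; parameters are illustrative,
# not those of the conifer model.
n, steps, dt, dx = 100, 5000, 0.01, 1.0
Du, Dv, a, b = 0.05, 1.0, 0.1, 0.9

rng = np.random.default_rng(0)
u = (a + b) + 0.01 * rng.standard_normal(n)  # activator, noisy steady state
v = np.full(n, b / (a + b) ** 2)             # substrate

def lap(z):
    # periodic 1D Laplacian
    return np.roll(z, 1) - 2 * z + np.roll(z, -1)

for _ in range(steps):
    uvv = u * u * v
    u = u + dt * (Du / dx**2 * lap(u) + a - u + uvv)
    v = v + dt * (Dv / dx**2 * lap(v) + b - uvv)

# A spatially periodic (Turing-type) pattern emerges from the noisy start.
print(u.min(), u.max())
```

The key ingredient is the large ratio of diffusivities (`Dv/Du = 20` here), which destabilizes the uniform state into a periodic pattern whose wavelength is set by the kinetics, the mechanism invoked for cotyledon ring patterning.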