Lethbridge Number Theory and Combinatorics Seminar
Abstract:
In 1973, assuming the Riemann hypothesis (RH), Montgomery studied the vertical distribution of the zeros of the Riemann zeta-function and conjectured that they behave like the eigenvalues of certain random matrices. We will discuss models for the zeta zeros, starting from the random matrix model but going beyond it, along with related questions, conjectures, and results on statistical information about the zeros. In particular, assuming RH and a conjecture of Chan on how often gaps between zeros can be close to a fixed non-zero value, we will discuss our proof of a conjecture of Berry (1988) for the number variance of zeta zeros, in a regime where random matrix models alone do not accurately predict the actual behavior (based on joint work with Meghann Moriah Lugar and Micah B. Milinovich).
Lethbridge Number Theory and Combinatorics Seminar
Abstract:
Let $G$ be a graph with adjacency matrix $A$. A continuous quantum walk on $G$ is determined by the complex unitary matrix $U(t)=\exp(itA)$, where $i^2=-1$ and $t$ is a real number. Here, $G$ represents a quantum spin network, and its vertices and edges represent the particles and their interactions in the network. The propagation of quantum states in the quantum system determined by $G$ is then governed by the matrix $U(t)$. In particular, $|U(t)_{u,v}|^2$ may be interpreted as the probability that the quantum state assigned at vertex $u$ is transmitted to vertex $v$ at time $t$. Quantum walks are of great interest in quantum computing because not only do they produce algorithms that outperform classical counterparts, but they are also promising tools in the construction of operational quantum computers. In this talk, we give an overview of continuous quantum walks, and discuss old and new results in this area with emphasis on the concepts and techniques that fall under the umbrella of discrete mathematics.
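To make the definitions above concrete, here is a minimal numerical sketch (illustrative only, not part of the talk) that computes $U(t)=\exp(itA)$ and the transfer probability $|U(t)_{u,v}|^2$ for the path on three vertices, whose end vertices are known to admit perfect state transfer at time $t=\pi/\sqrt{2}$; the graph and the time are chosen only for illustration.

```python
# Minimal sketch: U(t) = exp(itA) and the transfer probability |U(t)_{u,v}|^2
# for the path graph P_3 (vertices 0 - 1 - 2).  The end vertices of P_3 are
# known to admit perfect state transfer at time t = pi / sqrt(2).
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)    # adjacency matrix of P_3

def transfer_probability(A, u, v, t):
    """Return |U(t)_{u,v}|^2, where U(t) = exp(i t A)."""
    U = expm(1j * t * A)
    return abs(U[u, v]) ** 2

t = np.pi / np.sqrt(2)
print(transfer_probability(A, 0, 2, t))   # ~ 1.0: perfect state transfer
```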
In this talk, we will discuss a well-known formula of Ramanujan and its relationship with the partial sums of the Möbius function. Under some conjectures, we analyze the finer structure of the terms involved. This is joint work with Steven M. Gonek (University of Rochester).
A zero-free region of the Riemann zeta-function is a subset of the
complex plane where the zeta-function is known to not vanish. In this talk we
will discuss various computational and analytic techniques used to enlarge the
zero-free region for the Riemann zeta-function, when the imaginary part of a
complex zero is large. We will also explore the limitations of currently known
approaches. This talk will reference a number of works from the literature,
including a joint work with M. Mossinghoff and T. Trudgian.
While Einstein’s theory of gravity is formulated in a smooth setting, the celebrated singularity theorems of Hawking and Penrose describe many physical situations in which this smoothness must eventually break down. In positive-definite signature, there is a highly successful theory of metric and metric-measure geometry which includes Riemannian manifolds as a special case but permits the extraction of nonsmooth limits under dimension and curvature bounds analogous to the energy conditions in relativity: here sectional curvature is reformulated through triangle comparison, while Ricci curvature is reformulated using entropic convexity along geodesics of probability measures.
This lecture explores recent progress in the development of an analogous theory in Lorentzian signature, whose ultimate goal is to provide a nonsmooth theory of gravity. In work in progress, we aim to establish a low-regularity splitting theorem by sacrificing linearity of the d’Alembertian to recover ellipticity. We exploit a negative-homogeneity $p$-d’Alembert operator for this purpose. The same technique yields a simplified proof of Eschenburg’s (1988), Galloway’s (1989), and Newman’s (1990) confirmation of Yau’s (1982) conjecture, bringing all three Lorentzian splitting results into a framework closer to the Cheeger-Gromoll splitting theorem from Riemannian geometry.
Consider a population that undergoes asexual and homogeneous reproduction over time, originating from a single individual and eventually ceasing to exist after producing a total of n individuals. What is the order of magnitude of the maximum number of children of an individual in this population as n tends to infinity? This question is equivalent to studying the largest degree of a large Bienaymé-Galton-Watson random tree. We identify a regime where a condensation phenomenon occurs, in which the second-largest degree is negligible compared to the largest degree. The "one big jump" principle for certain random walks is a key tool for studying this phenomenon. Finally, we discuss applications of these results to other combinatorial models.
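As a purely illustrative sketch (not from the talk), one can simulate critical Bienaymé-Galton-Watson trees and record the largest and second-largest numbers of children among trees that turn out to be large; the Poisson(1) offspring law and the size threshold below are arbitrary choices, and the condensation regime discussed above requires heavier-tailed offspring distributions.

```python
# Illustrative simulation: critical Bienaymé-Galton-Watson trees explored one
# individual at a time, recording the total progeny together with the largest
# and second-largest offspring counts.  Poisson(1) offspring is an arbitrary
# finite-variance choice made only for illustration.
import numpy as np

rng = np.random.default_rng(0)

def bgw_tree_degrees(max_size=10**6):
    """Explore one tree; return (total progeny, offspring counts sorted desc.)."""
    alive, size, degrees = 1, 0, []
    while alive > 0 and size < max_size:
        children = rng.poisson(1.0)        # critical offspring law, mean 1
        degrees.append(children)
        alive += children - 1
        size += 1
    return size, sorted(degrees, reverse=True)

# Keep trees that happen to be large, as a crude stand-in for conditioning
# on the total progeny n.
for _ in range(400):
    n, deg = bgw_tree_degrees()
    if n >= 1000:
        print(f"n = {n:7d}   largest degree = {deg[0]}   second largest = {deg[1]}")
```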
This course will focus on two well-studied models of modern probability: the simple symmetric random walk and the branching random walk in $\mathbb{Z}^d$. We will study their traces in the regime where the trace is a small subset of the ambient space.
We will start by reviewing some useful classical (and not so classical) facts about simple random walks. We will introduce the notion of capacity and give several alternative formulations of it. We will then relate it to the problem of covering a domain by a simple random walk. We will review Lawler’s work on non-intersection probabilities and focus on the critical dimension $d=4$. With these tools at hand, we will study the tails of the intersection of two infinite random walk ranges in dimensions $d\geq 5$.
A branching random walk (or tree-indexed random walk) in $\mathbb{Z}^d$ is a non-Markovian process whose time index is a random tree. The random tree is either a critical Galton-Watson tree or a critical Galton-Watson tree conditioned to survive. Each edge of the tree is assigned an independent simple random walk increment in $\mathbb{Z}^d$, and the location of every vertex is given by summing all the increments along the geodesic from the root to that vertex. When $d\geq 5$, the branching random walk is transient, and we will mainly focus on this regime. We will introduce the notion of branching capacity and show how it appears naturally as a suitably rescaled limit of hitting probabilities of sets. We will then use it to study covering problems analogously to the random walk case.
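For illustration only, the sketch below implements the construction just described: a finite critical Galton-Watson tree whose edges carry i.i.d. nearest-neighbour increments in $\mathbb{Z}^d$, each vertex being placed at the sum of the increments along the path from the root. The geometric offspring law and $d=5$ are arbitrary choices, not part of the course.

```python
# Illustrative construction of a branching random walk: a finite critical
# Galton-Watson tree whose edges carry i.i.d. nearest-neighbour increments in
# Z^d; each vertex sits at the sum of the increments along the path from the
# root.  The geometric offspring law and d = 5 are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
d = 5

def srw_increment():
    """A uniform nearest-neighbour step of the simple random walk in Z^d."""
    step = np.zeros(d, dtype=int)
    axis = rng.integers(d)
    step[axis] = rng.choice([-1, 1])
    return step

def branching_random_walk(max_size=10**5):
    """Return the positions of the vertices of one branching random walk."""
    positions = [np.zeros(d, dtype=int)]     # the root sits at the origin
    stack = [0]                              # vertices whose children are pending
    while stack and len(positions) < max_size:
        parent = stack.pop()
        children = rng.geometric(0.5) - 1    # critical offspring law, mean 1
        for _ in range(children):
            positions.append(positions[parent] + srw_increment())
            stack.append(len(positions) - 1)
    return positions

pos = branching_random_walk()
print(len(pos), "vertices,", len({tuple(p) for p in pos}), "distinct sites visited")
```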
Optimization theory seeks to characterize the performance of algorithms that find the (or a) minimizer $x\in\mathbb{R}^d$ of an objective function. The dimension $d$ of the parameter space has long been known to be a source of difficulty in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, this has proven to be a manageable problem, but why? One explanation is that this high dimensionality is simultaneously mollified by three essential types of randomness: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness remains). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension $d$ becomes large. In this course, we will show how random matrices can be used to describe high-dimensional inference, nonconvex landscape properties, and high-dimensional limits of stochastic gradient methods.
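As a toy illustration (not course material), the sketch below runs stochastic gradient descent on a random least-squares objective whose Hessian $A^{\top}A/n$ is a Wishart random matrix, so its spectrum is governed by the Marchenko-Pastur law in the high-dimensional limit; all dimensions, batch sizes, and step sizes are arbitrary choices.

```python
# Toy illustration: SGD on the random least-squares objective
# f(x) = ||Ax - b||^2 / (2n) with i.i.d. Gaussian data A.  The Hessian
# A^T A / n is a Wishart random matrix whose spectrum follows the
# Marchenko-Pastur law as n, d -> infinity; all sizes here are toy choices.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 500                            # samples, parameters
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)             # planted minimizer
b = A @ x_star

x = rng.standard_normal(d)                  # random initialization
lr, batch = 0.05, 32
for step in range(2000):
    idx = rng.integers(0, n, size=batch)    # sample a mini-batch
    grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch
    x -= lr * grad

print("final loss:", np.mean((A @ x - b) ** 2) / 2)
print("distance to planted minimizer:", np.linalg.norm(x - x_star))
```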
The word complexity function $p(n)$ of a subshift $X$ measures the number of $n$-letter words appearing in sequences in $X$, and $X$ is said to have linear complexity if $p(n)/n$ is bounded. It has been known since work of Ferenczi that linear word complexity highly constrains the dynamical behaviour of a subshift.
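As a small illustration (not from the talk), one can estimate $p(n)$ from a long finite prefix of a generating sequence; the Fibonacci substitution word used below is a standard example with $p(n)=n+1$, hence linear complexity.

```python
# Small illustration: estimate the word complexity p(n) of a subshift from a
# long finite prefix of a generating sequence.  The Fibonacci substitution
# word (0 -> 01, 1 -> 0) is a standard example with p(n) = n + 1.
def fibonacci_word(iterations=25):
    """A long prefix of the fixed point of the substitution 0 -> 01, 1 -> 0."""
    w = "0"
    for _ in range(iterations):
        w = "".join("01" if c == "0" else "0" for c in w)
    return w

def word_complexity(w, n):
    """Number of distinct length-n factors of the finite word w."""
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

w = fibonacci_word()
for n in (1, 2, 5, 10, 50):
    print(n, word_complexity(w, n))          # expect n + 1: linear complexity
```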
In this talk, we will review some new techniques and limitations for achieving efficient approximation algorithms for entropy and pressure in the context of Gibbs measures defined over countable groups. Our starting point will be a deterministic formula for the Kolmogorov-Sinai entropy of measure-preserving actions of orderable amenable groups. Next, we will review techniques based on random orderings, mixing properties of Markov random fields, and percolation theory to generalize previous work. As a by-product of these results, we will obtain conditions for the uniqueness of the equilibrium state and the locality of pressure, among other implications that are not strictly algorithmic.