Mathematics

Shifts of finite type defined by one forbidden block and a Theorem of Lind

Speaker: 
Chengyu Wu
Date: 
Thu, Jul 25, 2024
Location: 
PIMS, University of British Columbia
Conference: 
Mini Conference on Symbolic Dynamics at UBC
Abstract: 

We focus on shifts of finite type (SFTs) obtained by forbidding one word from an ambient SFT. Given a word in a one-dimensional full shift, Guibas and Odlyzko, and Lind, showed that its auto-correlation, the zeta function, and the entropy of the corresponding SFT all determine the same information. We first prove that when two words both have trivial auto-correlation, the corresponding SFTs not only have the same zeta function but are in fact conjugate, and the conjugacy is given by a chain of swap conjugacies. We extend this result to the case where the ambient SFT is the golden mean shift, under an additional assumption on the extender sets of the words. Then we introduce SFTs obtained by forbidding one finite pattern from a higher-dimensional ambient SFT. We prove that when two patterns have the same auto-correlation, there is a bijection between the languages of their corresponding SFTs. The proof is a simple combination of a replacement idea and the inclusion-exclusion principle. This is joint work with Nishant Chandgotia, Brian Marcus and Jacob Richey.
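As a small illustration of the objects above (a sketch under standard conventions, not code from the talk): for a word w of length n over a k-letter alphabet, the correlation polynomial c(z) records the shifts at which w overlaps itself, and the Guibas-Odlyzko formula F(z) = c(z) / (z^n + (1 - kz) c(z)) is the generating function for the number of words avoiding w, that is, for the language of the SFT obtained by forbidding w from the full shift. The function names below are ours.

```python
def autocorrelation(w):
    """Shifts p at which w overlaps itself: w[p:] == w[:len(w)-p]."""
    n = len(w)
    return [p for p in range(n) if w[p:] == w[: n - p]]

def count_avoiding(w, k, m):
    """Numbers of words of lengths 0..m over a k-letter alphabet that avoid w,
    read off from F(z) = c(z) / (z^n + (1 - k z) c(z))."""
    n = len(w)
    c = [0] * (n + 1)
    for p in autocorrelation(w):
        c[p] = 1
    d = [0] * (n + 2)            # d(z) = z^n + (1 - k z) c(z)
    d[n] += 1
    for i, ci in enumerate(c):
        d[i] += ci
        d[i + 1] -= k * ci
    f = []                       # power-series division; note d[0] == 1
    for j in range(m + 1):
        cj = c[j] if j <= n else 0
        f.append(cj - sum(d[i] * f[j - i] for i in range(1, min(j, n + 1) + 1)))
    return f

# Forbidding "aa" from the full 2-shift gives the golden mean shift:
print(count_avoiding("aa", 2, 8))   # [1, 2, 3, 5, 8, 13, 21, 34, 55]
```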

Class: 
Subject: 

Shift equivalence implies flow equivalence for reducible shifts of finite type

Speaker: 
Mike Boyle
Date: 
Wed, Jul 24, 2024
Location: 
PIMS, University of British Columbia
Conference: 
Mini Conference on Symbolic Dynamics at UBC
Abstract: 

We explain the implication stated in the title.

Class: 
Subject: 

Variations on Krieger's Embedding Theorem and a Theorem of Boyle

Speaker: 
Brian Marcus
Date: 
Wed, Jul 24, 2024
Location: 
PIMS, University of British Columbia
Conference: 
Mini Conference on Symbolic Dynamics at UBC
Abstract: 

Krieger's celebrated embedding theorem gives necessary and sufficient conditions, in terms of periodic points, for proper embedding of a subshift into a mixing shift of finite type (SFT). The result does not generalize to mixing sofic shifts. Boyle introduced the notion of a receptive periodic point, which is the main obstruction. He also proved that a condition involving receptive periodic points is sufficient for proper embedding of a subshift into a mixing sofic shift. We introduce a stronger notion of embedding, called factorizable embedding, which requires the embedding to factor through a mixing SFT. We show that Boyle's sufficient condition for proper embedding into a mixing sofic shift is necessary and sufficient for a factorizable embedding into a mixing sofic shift. We also give a new characterization of receptive periodic points, and we give a characterization of mixing for irreducible sofic shifts in terms of receptive periodic points. This is joint work with Tom Meyerovitch, Klaus Thomsen and Chengyu Wu.
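A quick way to experiment with the periodic-point data entering embedding theorems of this kind (a sketch, not from the talk): for an SFT presented by a nonnegative integer adjacency matrix A, the number of points fixed by the n-th power of the shift is tr(A^n), and Krieger-type conditions compare such counts between the subshift and the target SFT.

```python
import numpy as np

def fixed_points(A, n_max):
    """tr(A^n) for n = 1..n_max: the number of points of the shift X_A
    fixed by the n-th power of the shift map."""
    A = np.asarray(A, dtype=np.int64)
    out, P = [], np.eye(len(A), dtype=np.int64)
    for _ in range(n_max):
        P = P @ A
        out.append(int(np.trace(P)))
    return out

golden_mean = [[1, 1], [1, 0]]    # vertex shift forbidding the word 11
full_2_shift = [[2]]              # full shift on two symbols

print(fixed_points(golden_mean, 8))   # [1, 3, 4, 7, 11, 18, 29, 47] (Lucas)
print(fixed_points(full_2_shift, 8))  # [2, 4, 8, 16, ...]
# Embedding conditions of Krieger type compare counts of points of each least
# period; these follow from tr(A^n) by Mobius inversion over divisors of n.
```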

Class: 
Subject: 

Condensation phenomena in random trees - Lecture 2

Speaker: 
Igor Kortchemski
Date: 
Thu, Jul 25, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Consider a population that undergoes asexual and homogeneous reproduction over time, originating from a single individual and eventually ceasing to exist after producing a total of n individuals. What is the order of magnitude of the maximum number of children of an individual in this population as n tends to infinity? This question is equivalent to studying the largest degree of a large Bienaymé-Galton-Watson random tree. We identify a regime where a condensation phenomenon occurs, in which the second-largest degree is negligible compared to the largest degree. The "one big jump" principle for certain random walks is a key tool for studying this phenomenon. Finally, we discuss applications of these results to other combinatorial models.
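A hedged simulation sketch of the condensation phenomenon (our illustration, with an offspring law chosen for convenience, not taken from the lectures): by the cycle lemma, the degree multiset of a Bienaymé-Galton-Watson tree conditioned to have n vertices has the law of n i.i.d. offspring numbers conditioned to sum to n - 1, so the largest and second-largest degrees can be sampled without building the tree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offspring law: p_j proportional to j**-3 for j >= 1, remaining mass at 0.
# It is subcritical (mean ~ 0.82) with a heavy tail -- a condensation regime.
J = 10_000
w = np.arange(1, J + 1, dtype=float) ** -3.0 / 2.0
p0 = 1.0 - w.sum()
cdf = np.concatenate(([p0], p0 + np.cumsum(w)))
mean = (np.arange(1, J + 1) * w).sum()

def degree_multiset(n):
    """n i.i.d. offspring numbers conditioned to sum to n - 1 (rejection)."""
    while True:
        xi = np.searchsorted(cdf, rng.random(n))
        if xi.sum() == n - 1:
            return xi

n = 100
second, largest = np.sort(degree_multiset(n))[-2:]
print(f"(1 - m) n = {(1 - mean) * n:.0f}, "
      f"largest degree = {largest}, second largest = {second}")
```

The condensate vertex has degree close to (1 - m)n, where m is the mean offspring number, while the second-largest degree stays on the much smaller scale of the maximum of n unconditioned samples.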

Class: 

Random matrix theory of high-dimensional optimization - Lecture 15

Speaker: 
Elliot Paquette
Date: 
Thu, Jul 25, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Optimization theory seeks to quantify the performance of algorithms used to find the (or a) minimizer x ∈ ℝ^d of an objective function. The dimension d of the parameter space has long been known to be a source of difficulty, both in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, this has proven to be a manageable problem, but why? One explanation is that the high dimensionality is simultaneously mollified by three essential types of randomness: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness persists). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension $d$ becomes large. In this course, we will show:

how random matrices can be used to describe high-dimensional inference
nonconvex landscape properties
high-dimensional limits of stochastic gradient methods.
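As a first taste of the toolkit (a toy sketch, not course material): the empirical eigenvalue distribution of a large symmetric random matrix, a naive stand-in for the Hessian of a random landscape, concentrates around Wigner's semicircle law.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
G = rng.normal(size=(d, d))
H = (G + G.T) / np.sqrt(2 * d)          # GOE-normalized symmetric matrix
eigs = np.linalg.eigvalsh(H)

# Semicircle density on [-2, 2]: rho(x) = sqrt(4 - x^2) / (2 pi)
hist, edges = np.histogram(eigs, bins=40, range=(-2.2, 2.2), density=True)
mid = (edges[:-1] + edges[1:]) / 2
rho = np.sqrt(np.clip(4 - mid**2, 0, None)) / (2 * np.pi)
print("max |empirical - semicircle| over bins:",
      float(np.abs(hist - rho).max()))
```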

Class: 

Random walks and branching random walks: old and new perspectives - Lecture 15

Speaker: 
Perla Sousi
Date: 
Thu, Jul 25, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

This course will focus on two well-studied models of modern probability: simple symmetric random walks and branching random walks in ℤ^d. The focus will be on the study of their traces in the regime where the trace is a small subset of the ambient space.
We will start by reviewing some useful classical (and less classical) facts about simple random walks. We will introduce the notion of capacity and give several alternative expressions for it. We will then relate it to the problem of covering a domain by a simple random walk. We will review Lawler's work on non-intersection probabilities, focusing on the critical dimension $d=4$. With these tools at hand, we will study the tails of the intersection of two infinite random walk ranges in dimensions $d\geq 5$.

A branching random walk (or tree-indexed random walk) in ℤ^d is a non-Markovian process whose time index is a random tree. The random tree is either a critical Galton-Watson tree or a critical Galton-Watson tree conditioned to survive. Each edge of the tree is assigned an independent simple random walk increment in ℤ^d, and the location of every vertex is obtained by summing the increments along the geodesic from the root to that vertex. When $d\geq 5$, the branching random walk is transient, and we will mainly focus on this regime. We will introduce the notion of branching capacity and show how it arises naturally as a suitably rescaled limit of hitting probabilities of sets. We will then use it to study covering problems in analogy with the random walk case.
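A hedged Monte Carlo sketch of the intersection question in the first paragraph (our illustration, not course material): in dimensions d ≥ 5, two independent simple random walk ranges avoid each other away from the origin with positive probability, which we can estimate for finite-length walks.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, trials = 5, 2000, 200

def srw_sites(n):
    """Sites visited at times 1..n by a simple random walk in Z^d from 0."""
    incr = np.zeros((n, d), dtype=np.int64)
    incr[np.arange(n), rng.integers(0, d, size=n)] = rng.choice([-1, 1], size=n)
    return set(map(tuple, np.cumsum(incr, axis=0)))

avoid = sum(not (srw_sites(steps) & srw_sites(steps)) for _ in range(trials))
print(f"estimated non-intersection probability in Z^{d}: {avoid / trials:.2f}")
```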

Class: 

Condensation phenomena in random trees - Lecture 1

Speaker: 
Igor Kortchemski
Date: 
Tue, Jul 23, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Consider a population that undergoes asexual and homogeneous reproduction over time, originating from a single individual and eventually ceasing to exist after producing a total of n individuals. What is the order of magnitude of the maximum number of children of an individual in this population as n tends to infinity? This question is equivalent to studying the largest degree of a large Bienaymé-Galton-Watson random tree. We identify a regime where a condensation phenomenon occurs, in which the second-largest degree is negligible compared to the largest degree. The "one big jump" principle for certain random walks is a key tool for studying this phenomenon. Finally, we discuss applications of these results to other combinatorial models.
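The "one big jump" principle itself is easy to see numerically (a toy sketch, ours, with a Pareto tail chosen for convenience): for heavy-tailed i.i.d. increments, P(S_n > t) ≈ n P(X_1 > t) for large t, and on that rare event the largest increment carries most of the sum.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, trials = 50, 300.0, 100_000

X = rng.pareto(2.0, size=(trials, n)) + 1.0    # P(X > x) = x**-2 for x >= 1
S = X.sum(axis=1)                              # E[S_n] = 2n = 100, so t = 300
big = S > t                                    # ... is a large deviation

p_mc = big.mean()
p_one_jump = 1.0 - (1.0 - t ** -2.0) ** n      # P(some increment exceeds t)
share = (X[big].max(axis=1) / S[big]).mean()

print(f"P(S_n > t) ~ {p_mc:.1e} vs n P(X > t) ~ {p_one_jump:.1e}")
print(f"on that event the biggest jump carries ~ {share:.0%} of the sum")
```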

Class: 

Random walks and branching random walks: old and new perspectives - Lecture 14

Speaker: 
Perla Sousi
Date: 
Tue, Jul 23, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

This course will focus on two well-studied models of modern probability: simple symmetric random walks and branching random walks in ℤ^d. The focus will be on the study of their traces in the regime where the trace is a small subset of the ambient space.
We will start by reviewing some useful classical (and less classical) facts about simple random walks. We will introduce the notion of capacity and give several alternative expressions for it. We will then relate it to the problem of covering a domain by a simple random walk. We will review Lawler's work on non-intersection probabilities, focusing on the critical dimension $d=4$. With these tools at hand, we will study the tails of the intersection of two infinite random walk ranges in dimensions $d\geq 5$.

A branching random walk (or tree-indexed random walk) in ℤ^d is a non-Markovian process whose time index is a random tree. The random tree is either a critical Galton-Watson tree or a critical Galton-Watson tree conditioned to survive. Each edge of the tree is assigned an independent simple random walk increment in ℤ^d, and the location of every vertex is obtained by summing the increments along the geodesic from the root to that vertex. When $d\geq 5$, the branching random walk is transient, and we will mainly focus on this regime. We will introduce the notion of branching capacity and show how it arises naturally as a suitably rescaled limit of hitting probabilities of sets. We will then use it to study covering problems in analogy with the random walk case.
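A minimal sketch of the tree-indexed definition above (ours, not course material; the Geometric(1/2) offspring law is a convenient critical choice): sample a critical Galton-Watson tree, attach an independent increment to each edge, and place each vertex at the sum of the increments along its root path.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5

def gw_parents(max_size=10_000):
    """Parent pointers of a critical Galton-Watson tree with Geometric(1/2)
    offspring (P(k children) = 2**-(k+1), mean 1); None if it grows too big."""
    parent, stack = [-1], [0]
    while stack:
        v = stack.pop()
        for _ in range(rng.geometric(0.5) - 1):
            parent.append(v)
            stack.append(len(parent) - 1)
        if len(parent) > max_size:
            return None
    return parent

parent = None
while parent is None or len(parent) < 100:     # keep a reasonably large tree
    parent = gw_parents()

pos = np.zeros((len(parent), d), dtype=np.int64)
for v in range(1, len(parent)):                # children are indexed after
    step = np.zeros(d, dtype=np.int64)         # their parents
    step[rng.integers(0, d)] = rng.choice([-1, 1])
    pos[v] = pos[parent[v]] + step             # sum increments along root path

print(f"tree size {len(parent)}, sites visited {len(set(map(tuple, pos)))}")
```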

Class: 

Random matrix theory of high-dimensional optimization - Lecture 14

Speaker: 
Elliot Paquette
Date: 
Tue, Jul 23, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Optimization theory seeks to quantify the performance of algorithms used to find the (or a) minimizer x ∈ ℝ^d of an objective function. The dimension d of the parameter space has long been known to be a source of difficulty, both in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, this has proven to be a manageable problem, but why? One explanation is that the high dimensionality is simultaneously mollified by three essential types of randomness: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness persists). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension $d$ becomes large. In this course, we will show:

how random matrices can be used to describe high-dimensional inference
nonconvex landscape properties
high-dimensional limits of stochastic gradient methods.
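As a toy instance of the third item (ours, not course material; the planted least-squares model and step size are illustrative choices): one pass of SGD over random Gaussian data, whose loss trajectory is the kind of object that admits a deterministic high-dimensional limit.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, lr = 500, 5000, 0.5

x_star = rng.normal(size=d) / np.sqrt(d)       # planted minimizer
A = rng.normal(size=(n, d)) / np.sqrt(d)       # random data, rows ~ unit norm
b = A @ x_star                                 # consistent labels

x = np.zeros(d)
for i in range(n):                             # one pass of streaming SGD
    a = A[i]
    x -= lr * (a @ x - b[i]) * a               # gradient of (a.x - b_i)**2 / 2
    if i % 1000 == 0:
        print(f"step {i:5d}  loss {0.5 * np.mean((A @ x - b) ** 2):.5f}")
```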

Class: 

Random matrix theory of high-dimensional optimization - Lecture 13

Speaker: 
Elliot Paquette
Date: 
Mon, Jul 22, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Optimization theory seeks to quantify the performance of algorithms used to find the (or a) minimizer x ∈ ℝ^d of an objective function. The dimension d of the parameter space has long been known to be a source of difficulty, both in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, this has proven to be a manageable problem, but why? One explanation is that the high dimensionality is simultaneously mollified by three essential types of randomness: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness persists). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension $d$ becomes large. In this course, we will show:

how random matrices can be used to describe high-dimensional inference
nonconvex landscape properties
high-dimensional limits of stochastic gradient methods.
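The "data are random" ingredient can be previewed in one line of linear algebra (a sketch, ours, not course material): the eigenvalues of the sample covariance of i.i.d. Gaussian data fall in the Marchenko-Pastur bulk [(1-√γ)², (1+√γ)²] with aspect ratio γ = d/n.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 500, 2000                               # aspect ratio gamma = d/n
X = rng.normal(size=(n, d))
eigs = np.linalg.eigvalsh(X.T @ X / n)         # sample covariance spectrum

gamma = d / n
lo, hi = (1 - gamma**0.5) ** 2, (1 + gamma**0.5) ** 2
print(f"spectrum in [{eigs.min():.3f}, {eigs.max():.3f}]; "
      f"Marchenko-Pastur bulk [{lo:.3f}, {hi:.3f}]")
```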

Class: 
