Quantum Magic in Secret Communication

Speaker: 
Gilles Brassard
Date: 
Fri, Jan 1, 2010
Location: 
University of Calgary, Calgary, Canada
Abstract: 

In this talk, we shall tell the tale of the origin of Quantum Cryptography, from the birth of the first idea by Wiesner in 1970, to the invention of Quantum Key Distribution in 1983, to the first prototypes and ensuing commercial ventures, to exciting prospects for the future. No prior knowledge of quantum mechanics or cryptography will be expected.

Class: 

Introduction to Marsden & Symmetry

Speaker: 
Alan Weinstein
Date: 
Wed, Jul 20, 2011
Location: 
Vancouver Convention Center, BC, Canada
Conference: 
ICIAM 2011
Abstract: 

Alan Weinstein is a Professor of the Graduate School in the Department of Mathematics at the University of California, Berkeley. He was a colleague of Jerry Marsden throughout Jerry’s career at Berkeley, and their joint papers on “Reduction of symplectic manifolds with symmetry” and “The Hamiltonian structure of the Maxwell-Vlasov equations” were fundamental contributions to geometric mechanics.

Class: 

Expanders, Group Theory, Arithmetic Geometry, Cryptography and Much More

Speaker: 
Eyal Goren
Date: 
Tue, Apr 6, 2010
Location: 
University of Calgary, Calgary, Canada
Abstract: 

This is a lecture given on the occasion of the launch of the PIMS CRG in "L-functions and Number Theory".

The theory of expander graphs is undergoing intensive development. It finds more and more applications to diverse areas of mathematics. In this talk, aimed at a general audience, I will introduce the concept of expander graphs and discuss some interesting connections to arithmetic geometry, group theory and cryptography, including some very recent breakthroughs.
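To make the notion of expansion mentioned above concrete, here is a minimal sketch (not from the talk) that computes the edge-expansion (Cheeger) constant of two small graphs by brute force; the cycle and complete graph on 8 vertices are illustrative choices showing a poorly expanding graph versus a well expanding one.

```python
from itertools import combinations

def cheeger(vertices, edges):
    """Edge expansion h(G): the minimum, over nonempty subsets S with
    |S| <= n/2, of (# edges leaving S) / |S|, found by brute force."""
    n = len(vertices)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for subset in combinations(vertices, k):
            S = set(subset)
            boundary = sum(1 for (u, v) in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

n = 8
cycle = [(i, (i + 1) % n) for i in range(n)]
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]

print(cheeger(range(n), cycle))     # 0.5: cutting a half-arc costs only 2 edges
print(cheeger(range(n), complete))  # 4.0: every balanced cut crosses many edges
```

An expander family keeps this constant bounded away from zero while the number of vertices grows, which is exactly what makes the construction problem hard and interesting.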

Class: 

Perfect Crystals for Quantum Affine Algebras and Combinatorics of Young Walls

Speaker: 
Seok-Jin Kang
Date: 
Fri, Jul 10, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 

In this talk, we will give a detailed exposition of the theory of perfect crystals, which has yielded many significant applications. We will also discuss the strong connection between the theory of perfect crystals and the combinatorics of Young walls, and derive the LLT algorithm for computing global bases using affine paths. An interesting open problem is how to construct affine Hecke algebras out of affine paths.

Class: 
Subject: 

Regular Permutation Groups and Cayley Graphs

Speaker: 
Cheryl E. Praeger
Date: 
Fri, Jul 10, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 

Regular permutation groups are the `smallest' transitive groups of permutations, and have been studied for more than a century. They occur, in particular, as subgroups of automorphisms of Cayley graphs, and their applications range from obvious graph theoretic ones through to studying word growth in groups and modeling random selection for group computation. Recent work, using the finite simple group classification, has focused on the problem of classifying the finite primitive permutation groups that contain regular permutation groups as subgroups, and classifying various classes of vertex-primitive Cayley graphs. Both old and very recent work on regular permutation groups will be discussed.
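As a small illustration of the connection described above, here is a sketch (not from the talk) of the Cayley graph of $S_3$ with respect to two transpositions, together with a check that left translations are graph automorphisms acting regularly on the vertices.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))   # the six elements of S_3
S = [(1, 0, 2), (0, 2, 1)]         # two transpositions; S is closed under inverses

# Cayley graph Cay(G, S): undirected edges {g, g*s} for g in G, s in S.
edges = {frozenset((g, compose(g, s))) for g in G for s in S}

# Left translation g -> h*g sends the edge {g, g*s} to {h*g, (h*g)*s},
# so it permutes the edge set; the translations act regularly on vertices.
for h in G:
    image = {frozenset(compose(h, g) for g in e) for e in edges}
    assert image == edges

print(len(G), len(edges))   # 6 vertices, 6 edges
```

This is the standard sense in which a group occurs as a regular subgroup of automorphisms of its own Cayley graph; the classification questions in the abstract ask when a *primitive* group can contain such a regular subgroup.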

Class: 

Law of Large Numbers and Central Limit Theorem under Uncertainty, the related New Itô's Calculus and Applications to Risk Measures

Speaker: 
Shige Peng
Date: 
Thu, Jul 9, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 
Let $S_n = \sum_{i=1}^n X_i$ where $\{X_i\}_{i=1}^\infty$ is a sequence of independent and identically distributed (i.i.d.) random variables with $E[X_1]=m$. According to the classical law of large numbers (LLN), the sum $S_n/n$ converges strongly to $m$. Moreover, the well-known central limit theorem (CLT) tells us that, with $m = 0$ and $\sigma^2 = E[X_1^2]$, for each bounded and continuous function $\varphi$ we have $\lim_n E[\varphi(S_n/\sqrt{n})] = E[\varphi(X)]$ with $X \sim N(0, \sigma^2)$. These two fundamentally important results are widely used in probability, statistics and data analysis, as well as in many practical situations such as financial pricing and risk control. They provide a strong argument to explain why normal distributions are so widely used in practice. A serious problem, however, is that the i.i.d. condition is very hard to satisfy for most real-time processes, for which classical trials and sampling become impossible and the uncertainty of probabilities and/or distributions cannot be neglected. In this talk we present a systematic generalization of the above LLN and CLT. Instead of fixing a probability measure $P$, we only assume that there exists an uncertain subset of probability measures $\{P_q : q \in Q\}$. In this case a robust way to calculate the expectation of a financial loss $X$ is its upper expectation: $\hat{\mathbf{E}}[X] = \sup_{q \in Q} E_q[X]$, where $E_q$ is the expectation under the probability $P_q$. The corresponding distribution uncertainty of $X$ is given by $F_q(x) = P_q(X \leq x)$, $q \in Q$. Our main assumptions are:
  1. The distributions of $X_i$ are within an abstract subset of distributions $\{F_q(x) : q \in Q\}$, called the distribution uncertainty of $X_i$, with $\overline{m} = \hat{\mathbf{E}}[X_i] = \sup_q \int_{-\infty}^\infty x F_q(dx)$ and $\underline{m} = -\hat{\mathbf{E}}[-X_i] = \inf_q \int_{-\infty}^\infty x F_q(dx)$.
  2. Any realization of $X_1, \ldots, X_n$ does not change the distribution uncertainty of $X_{n+1}$ (a new type of `independence').
Our new LLN is: for each continuous function $\varphi$ of linear growth we have $$\lim_{n\to\infty} \hat{\mathbf{E}}[\varphi(S_n/n)] = \sup_{\underline{m} \leq v \leq \overline{m}} \varphi(v).$$ Namely, the distribution uncertainty of $S_n/n$ is, approximately, $\{\delta_v : \underline{m} \leq v \leq \overline{m}\}$. In particular, if $\underline{m} = \overline{m} = 0$, then $S_n/n$ converges strongly to $0$. In this case, if we assume furthermore that $\overline{\sigma}^2 = \hat{\mathbf{E}}[X_i^2]$ and $\underline{\sigma}^2 = -\hat{\mathbf{E}}[-X_i^2]$, $i = 1, 2, \ldots$, then we have the following generalization of the CLT: $$\lim_{n\to\infty} \hat{\mathbf{E}}[\varphi(S_n/\sqrt{n})] = \hat{\mathbf{E}}[\varphi(X)], \qquad \mathcal{L}(X) \in N(0, [\underline{\sigma}^2, \overline{\sigma}^2]).$$ Here $N(0, [\underline{\sigma}^2, \overline{\sigma}^2])$ stands for a distribution uncertainty subset and $\hat{\mathbf{E}}[\varphi(X)]$ is the corresponding upper expectation. The number $\hat{\mathbf{E}}[\varphi(X)]$ can be calculated by defining $u(t, x) := \hat{\mathbf{E}}[\varphi(x + \sqrt{t}\,X)]$, which solves the PDE $\partial_t u = G(u_{xx})$ with $G(a) := \frac{1}{2}(\overline{\sigma}^2 a^+ - \underline{\sigma}^2 a^-)$. An interesting situation is when $\varphi$ is a convex function: then $\hat{\mathbf{E}}[\varphi(X)] = E[\varphi(X_0)]$ with $X_0 \sim N(0, \overline{\sigma}^2)$; but if $\varphi$ is a concave function, then $\overline{\sigma}^2$ has to be replaced by $\underline{\sigma}^2$. This coincidence can be used to explain a well-known puzzle: many practitioners, particularly in finance, use normal distributions with `dirty' data, often with success. In fact, this is also a highly risky operation if the reasoning is not fully understood. If $\underline{\sigma} = \overline{\sigma} = \sigma$, then $N(0, [\underline{\sigma}^2, \overline{\sigma}^2]) = N(0, \sigma^2)$, which is a classical normal distribution. The method of proof is very different from the classical one, and a deep regularity estimate for fully nonlinear PDEs plays a crucial role. A combination of the LLN and CLT, which converges in law to the more general $N([\underline{m}, \overline{m}], [\underline{\sigma}^2, \overline{\sigma}^2])$-distributions, has also been obtained.
We also present our systematic study of the continuous-time counterpart of the above `G-normal distribution', called G-Brownian motion, and the corresponding stochastic calculus of Itô type, as well as its applications.
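The convex/concave coincidence described above can be checked numerically. In the sketch below (not from the talk), the upper expectation over the family of centered normals with standard deviation in a fixed interval is evaluated for the convex test function phi(x) = x^2 (where E[phi(sZ)] = s^2 in closed form) and the concave phi(x) = -x^2: the supremum is attained at the largest and smallest standard deviation, respectively.

```python
# Numerical sketch of the convex/concave coincidence for the upper
# expectation over the normal family {N(0, s^2) : s_low <= s <= s_high}.
s_low, s_high = 0.5, 2.0   # illustrative endpoints of the uncertainty interval
sigmas = [s_low + k * (s_high - s_low) / 100 for k in range(101)]

# E[phi(sZ)] in closed form: s^2 for phi(x) = x^2, and -s^2 for phi(x) = -x^2.
upper_convex = max(s * s for s in sigmas)      # sup attained at s_high
upper_concave = max(-s * s for s in sigmas)    # sup attained at s_low

print(upper_convex, upper_concave)   # 4.0 -0.25
```

So the upper expectation of a convex payoff only "sees" the largest variance in the uncertainty set, which is the mechanism behind the practitioners' puzzle mentioned in the abstract.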
Class: 
Subject: 

On Fourth Order PDEs Modelling Electrostatic Micro-ElectroMechanical Systems

Speaker: 
Nassif Ghoussoub
Date: 
Thu, Jul 9, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 
Micro-ElectroMechanical Systems (MEMS) and Nano-ElectroMechanical Systems (NEMS) are now a well established sector of contemporary technology. A key component of such systems is the simple idealized electrostatic device consisting of a thin and deformable plate that is held fixed along its boundary $\partial \Omega$, where $\Omega$ is a bounded domain in $\mathbf{R}^2$. The plate, which lies below another parallel rigid grounded plate (say at level $z=1$), has its upper surface coated with a negligibly thin metallic conducting film, in such a way that if a voltage $\lambda$ is applied to the conducting film, the plate deflects towards the top plate, and if the applied voltage is increased beyond a certain critical value $\lambda^*$, it proceeds to touch the grounded plate. The steady state is then lost, and we have a snap-through at a finite time, creating the so-called pull-in instability. A proposed model for the deflection is given by the evolution equation $$\frac{\partial u}{\partial t} - \Delta u + d\Delta^2 u = \frac{\lambda f(x)}{(1-u)^2}\qquad\mbox{for}\qquad x\in\Omega,\ t>0,$$ $$u(x,t) = d\frac{\partial u}{\partial \nu}(x,t) = 0 \qquad\mbox{for}\qquad x\in\partial\Omega,\ t>0,$$ $$u(x,0) = 0\qquad\mbox{for}\qquad x\in\Omega.$$
Unlike the case involving only the second order Laplacian (i.e., $d = 0$), very little is known about this equation. We shall explain how, besides the above practical considerations, the model is an extremely rich source of interesting mathematical phenomena.
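The pull-in instability can be seen already in a toy version of the model: the sketch below (an assumption-laden illustration, not the talk's fourth-order problem) takes the second-order case $d = 0$ with $f \equiv 1$ in one space dimension on $(0,1)$ and integrates $u_t = u_{xx} + \lambda/(1-u)^2$ with an explicit finite-difference scheme. For small $\lambda$ the deflection settles to a steady state; for large $\lambda$ it reaches the top plate.

```python
# Toy model: u_t = u_xx + lam / (1 - u)^2 on (0, 1), u = 0 at both ends,
# explicit finite differences.  Small lam: stable steady state.  Large lam:
# the deflection runs away towards u = 1 (pull-in / touchdown).
def max_deflection(lam, T=2.0, N=21):
    dx = 1.0 / (N - 1)
    dt = 0.2 * dx * dx          # well inside the explicit stability limit
    u = [0.0] * N
    t = 0.0
    while t < T:
        new = u[:]
        for i in range(1, N - 1):
            lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
            new[i] = u[i] + dt * (lap + lam / (1 - u[i]) ** 2)
        u = new
        t += dt
        if max(u) >= 0.95:      # touchdown: stop before the source blows up
            return max(u)
    return max(u)

print(max_deflection(0.2))   # stays small: a steady state exists
print(max_deflection(10.0))  # exceeds 0.95: pull-in instability
```

The dichotomy in the output mirrors the critical voltage $\lambda^*$ of the abstract: below it the parabolic flow converges, above it the solution touches the grounded plate in finite time.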
Class: 

Geometry and analysis of low dimensional manifolds

Speaker: 
Gang Tian
Date: 
Sat, Aug 8, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 

In this talk, I will start with a brief tour of the geometrization of 3-manifolds. Then I will discuss recent progress on the geometry and analysis of 4-manifolds.

Class: 
Subject: 

Warming Caused by Cumulative Carbon Emissions: the Trillionth Tonne

Speaker: 
Myles Allen
Date: 
Wed, Aug 8, 2007 to Thu, Aug 9, 2007
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 

The eventual equilibrium global mean temperature associated with a given stabilization level of atmospheric greenhouse gas concentrations remains uncertain, complicating the setting of stabilization targets to avoid potentially dangerous levels of global warming. Similar problems apply to the carbon cycle: observations currently provide only a weak constraint on the response to future emissions. These present fundamental challenges for the statistical community, since the non-linear relationship between quantities we can observe and the response to a stabilization scenario makes estimates of the risks associated with any stabilization target acutely sensitive to the details of the analysis, prior selection etc. Here we use ensemble simulations of simple climate-carbon-cycle models constrained by observations and projections from more comprehensive models to simulate the temperature response to a broad range of carbon dioxide emission pathways. We find that the peak warming caused by a given cumulative carbon dioxide emission is better constrained than the warming response to a stabilization scenario and hence less sensitive to underdetermined aspects of the analysis. Furthermore, the relationship between cumulative emissions and peak warming is remarkably insensitive to the emission pathway (timing of emissions or peak emission rate). Hence policy targets based on limiting cumulative emissions of carbon dioxide are likely to be more robust to scientific uncertainty than emission-rate or concentration targets. Total anthropogenic emissions of one trillion tonnes of carbon (3.67 trillion tonnes of CO2), about half of which has already been emitted since industrialization began, results in a most likely peak carbon-dioxide-induced warming of 2 °C above pre-industrial temperatures, with a 5–95% confidence interval of 1.3–3.9 °C.
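The carbon-to-CO2 conversion in the final sentence can be checked with a line of arithmetic; the sketch below assumes the abstract's 3.67 factor comes from the approximate molar-mass ratio of CO2 to carbon (44/12).

```python
# Back-of-envelope check of the abstract's unit conversion (assumed to use
# the approximate molar-mass ratio of CO2 to carbon, 44/12).
co2_per_tonne_carbon = 44.0 / 12.0            # ~3.67 tonnes CO2 per tonne C
trillion_tonnes_c = 1.0e12                    # the "trillionth tonne" budget
co2_budget = co2_per_tonne_carbon * trillion_tonnes_c   # ~3.67e12 tonnes CO2
remaining_c = 0.5 * trillion_tonnes_c         # about half already emitted

print(round(co2_per_tonne_carbon, 2))   # 3.67
```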

Class: 

Discrete Stochastic Simulation of Spatially Inhomogeneous Biochemical Systems

Speaker: 
Linda Petzold
Date: 
Tue, Jul 7, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 

In microscopic systems formed by living cells, the small numbers of some reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA), which applies to well-stirred chemically reacting systems. However, cells are hardly homogeneous! Spatio-temporal gradients and patterns play an important role in many biochemical processes. In this lecture we report on recent progress in the development of methods for spatial stochastic and multiscale simulation, and outline some of the many interesting complications that arise in the modeling and simulation of spatially inhomogeneous biochemical systems.
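The well-stirred starting point mentioned above, the stochastic simulation algorithm, is short enough to sketch in full. The version below is Gillespie's direct method; the decay example and species name are illustrative choices, not from the talk.

```python
import random

def ssa(x, reactions, t_end, seed=0):
    """Gillespie's direct-method SSA.  x: dict of species counts.
    reactions: list of (propensity, change) pairs, where propensity maps
    the state to a firing rate and change is a dict of count increments."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:
            break                    # no reaction can fire any more
        t += rng.expovariate(a0)     # exponential waiting time to next firing
        if t >= t_end:
            break                    # next firing falls after the horizon
        r = rng.uniform(0.0, a0)     # choose which reaction fires
        for p, (_, change) in zip(props, reactions):
            r -= p
            if r <= 0.0:
                for species, delta in change.items():
                    x[species] += delta
                break
    return x

# Irreversible decay A -> 0 at unit rate, starting from 100 copies: by
# t = 50 the population is (essentially surely) extinct.
state = ssa({"A": 100}, [(lambda x: 1.0 * x["A"], {"A": -1})], t_end=50.0)
print(state["A"])
```

The spatially inhomogeneous methods discussed in the lecture generalize exactly this loop, replacing the single well-stirred volume with coupled subvolumes and diffusive transfer events.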

Class: 
