
Scientific

The Hypoelliptic Laplacian

Speaker: 
Jean-Michel Bismut
Date: 
Fri, Sep 23, 2011
Location: 
PIMS, University of British Columbia
Conference: 
PIMS/UBC Distinguished Colloquium Series
Abstract: 
If $X$ is a Riemannian manifold, the Laplacian is a second order elliptic operator on $X$. The hypoelliptic Laplacian $L_b$ is an operator acting on the total space of the tangent bundle of $X$, which is supposed to interpolate between the elliptic Laplacian (when $b \to 0$) and the geodesic flow (when $b \to \infty$). Up to lower order terms, $L_b$ is a weighted sum of the harmonic oscillator along the fibre $TX$ and of the generator of the geodesic flow. In the talk, we will explain the underlying algebraic, analytic and probabilistic aspects of its construction, and outline some of the applications obtained so far.
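Schematically, and only as a rough sketch of the structure just described (signs, normalizations and the lower order terms are suppressed; this is not a formula quoted from the talk), one can think of an operator on the total space of $TX$ of the form

$$L_b \;\simeq\; \frac{1}{b^2}\cdot\frac{1}{2}\left(-\Delta^{V} + |Y|^2 - n\right) \;+\; \frac{1}{b}\,\nabla_Y,$$

where $Y$ is the tautological vertical variable, the first term is the harmonic oscillator along the fibre $TX$, and $\nabla_Y$ generates the geodesic flow; the $b$-dependent weights are what produce the interpolation between the two limiting regimes.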

Approximating Functions in High Dimensions

Speaker: 
Albert Cohen
Date: 
Mon, Mar 14, 2011
Location: 
University of British Columbia, Vancouver, Canada
Conference: 
IAM-PIMS-MITACS Distinguished Colloquium Series
Abstract: 
This talk will discuss mathematical problems which are challenged by the fact that they involve functions of a very large number of variables. Such problems arise naturally in learning theory, partial differential equations, or numerical models depending on parametric or stochastic variables. They typically result in numerical difficulties due to the so-called "curse of dimensionality". We shall explain how these difficulties may be handled in various contexts, based on two important concepts: (i) variable reduction and (ii) sparse approximation.
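As a back-of-the-envelope illustration of this difficulty (the numbers are my own, not from the talk): approximating a function on $[0,1]^d$ with a tensor-product grid of $m$ points per coordinate requires

$$N = m^d \qquad\mbox{evaluations, e.g.}\qquad m=10,\ d=20 \;\Longrightarrow\; N=10^{20},$$

which is hopeless for even moderate $d$; variable reduction and sparse approximation are precisely the tools for circumventing this exponential growth.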

Virtual Lung Project at UNC: What's Math Got To Do With It?

Speaker: 
Gregory Forest
Date: 
Fri, Mar 18, 2011
Location: 
PIMS, University of British Columbia
Abstract: 
A group of scientists at the University of North Carolina, from theorists to clinicians, have coalesced over the past decade on an effort called the Virtual Lung Project. There is a parallel VLP at the Pacific Northwest Laboratory, focused on environmental health, but I will focus on our effort. We come from mathematics, chemistry, computer science, physics, lung biology, biophysics and medicine. The goal is to engineer lung health through combined experimental-theoretical-computational tools to measure, assess, and predict lung function and dysfunction. Now one might ask, with all due respect to Tina Turner: what's math got to do with it? My lecture is devoted to many responses, including some progress and yet more open problems.

Frozen Boundaries and Log Fronts

Speaker: 
Andrei Okounkov
Date: 
Mon, Oct 16, 2006
Location: 
University of British Columbia, Vancouver, Canada
Conference: 
PIMS 10th Anniversary Lectures
Abstract: 
In this talk, based on joint work with Richard Kenyon and Grisha Mikhalkin, Andrei Okounkov discusses a binary operation on plane curves which
  1. generalizes classical duality for plane curves and
  2. arises naturally in a probabilistic context,
namely as a facet boundary in certain random surface models.

Discrete Stochastic Simulation of Spatially Inhomogeneous Biochemical Systems

Speaker: 
Linda Petzold
Date: 
Tue, Jul 7, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 
In microscopic systems formed by living cells, the small numbers of some reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA), which applies to well-stirred chemically reacting systems. However, cells are hardly homogeneous! Spatio-temporal gradients and patterns play an important role in many biochemical processes. In this lecture we report on recent progress in the development of methods for spatial stochastic and multiscale simulation, and outline some of the many interesting complications that arise in the modeling and simulation of spatially inhomogeneous biochemical systems.
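For orientation, here is a minimal sketch of the well-stirred SSA (Gillespie's direct method) mentioned above; the birth-death reaction network, rate constants and random seed are my own illustrative choices, not taken from the lecture.

    import numpy as np

    # Toy reaction network: 0 --k1--> S (production), S --k2--> 0 (degradation).
    # The SSA samples the exponential waiting time to the next reaction and
    # then picks which reaction fires, in proportion to its propensity.
    def ssa(x0, k1, k2, t_end, rng):
        t, x = 0.0, x0
        times, counts = [t], [x]
        while t < t_end:
            a1, a2 = k1, k2 * x             # reaction propensities
            a0 = a1 + a2
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)  # waiting time to next reaction
            if rng.random() * a0 < a1:      # choose the firing reaction
                x += 1                      # production event
            else:
                x -= 1                      # degradation event
            times.append(t)
            counts.append(x)
        return np.array(times), np.array(counts)

    rng = np.random.default_rng(1)
    times, counts = ssa(x0=0, k1=10.0, k2=0.1, t_end=100.0, rng=rng)
    print(counts[-1])  # fluctuates around the deterministic steady state k1/k2 = 100

The spatial stochastic and multiscale methods discussed in the lecture go beyond this single well-stirred compartment.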

Warming Caused by Cumulative Carbon Emissions: the Trillionth Tonne

Speaker: 
Myles Allen
Date: 
Wed, Aug 8, 2007
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 
The eventual equilibrium global mean temperature associated with a given stabilization level of atmospheric greenhouse gas concentrations remains uncertain, complicating the setting of stabilization targets to avoid potentially dangerous levels of global warming. Similar problems apply to the carbon cycle: observations currently provide only a weak constraint on the response to future emissions. These present fundamental challenges for the statistical community, since the non-linear relationship between quantities we can observe and the response to a stabilization scenario makes estimates of the risks associated with any stabilization target acutely sensitive to the details of the analysis, prior selection etc. Here we use ensemble simulations of simple climate-carbon-cycle models constrained by observations and projections from more comprehensive models to simulate the temperature response to a broad range of carbon dioxide emission pathways. We find that the peak warming caused by a given cumulative carbon dioxide emission is better constrained than the warming response to a stabilization scenario and hence less sensitive to underdetermined aspects of the analysis. Furthermore, the relationship between cumulative emissions and peak warming is remarkably insensitive to the emission pathway (timing of emissions or peak emission rate). Hence policy targets based on limiting cumulative emissions of carbon dioxide are likely to be more robust to scientific uncertainty than emission-rate or concentration targets. Total anthropogenic emissions of one trillion tonnes of carbon (3.67 trillion tonnes of CO2), about half of which has already been emitted since industrialization began, result in a most likely peak carbon-dioxide induced warming of 2°C above pre-industrial temperatures, with a 5-95% confidence interval of 1.3-3.9°C.
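For reference, the carbon-to-CO2 conversion quoted above is just the ratio of molecular to atomic mass (a back-of-the-envelope check, not part of the abstract):

$$\frac{m_{\mathrm{CO_2}}}{m_{\mathrm{C}}} \approx \frac{44}{12} \approx 3.67,$$

so one trillion tonnes of carbon corresponds to roughly 3.67 trillion tonnes of CO2.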

New geometric and functional analytic ideas arising from problems in symplectic geometry

Speaker: 
Helmut Hofer
Date: 
Mon, Oct 23, 2006
Location: 
PIMS, University of British Columbia
Conference: 
PIMS 10th Anniversary Lectures
Abstract: 
The study of moduli spaces of holomorphic curves in symplectic geometry is the key ingredient for the construction of symplectic invariants. These moduli spaces are suitable compactifications of solution spaces of a first order nonlinear Cauchy-Riemann type operator. The solution spaces are usually not compact due to bubbling-off phenomena and other analytical difficulties.
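For orientation (standard notation, not quoted from the lecture): the first order operator in question is the nonlinear Cauchy-Riemann operator acting on maps $u$ from a Riemann surface $(\Sigma, j)$ into a symplectic manifold $(M,\omega)$ equipped with a compatible almost complex structure $J$,

$$\bar{\partial}_J u \;:=\; \tfrac{1}{2}\big(du + J(u)\circ du\circ j\big) \;=\; 0,$$

and the moduli spaces above are compactifications of spaces of such $J$-holomorphic curves.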

Geometry and analysis of low dimensional manifolds

Speaker: 
Gang Tian
Date: 
Fri, Aug 7, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 
In this talk, I will start with a brief tour of the geometrization of 3-manifolds. Then I will discuss recent progress on the geometry and analysis of 4-manifolds.

On Fourth Order PDEs Modelling Electrostatic Micro-ElectroMechanical Systems

Speaker: 
Nassif Ghoussoub
Date: 
Wed, Jul 8, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 

Micro-ElectroMechanical Systems (MEMS) and Nano-ElectroMechanical Systems (NEMS) are now a well established sector of contemporary technology. A key component of such systems is the simple idealized electrostatic device consisting of a thin and deformable plate that is held fixed along its boundary $ \partial \Omega $, where $ \Omega $ is a bounded domain in $ \mathbf{R}^2 $. The plate, which lies below another parallel rigid grounded plate (say at level $ z=1 $), has its upper surface coated with a negligibly thin metallic conducting film, in such a way that if a voltage $ \lambda $ is applied to the conducting film, the plate deflects towards the top plate, and if the applied voltage is increased beyond a certain critical value $ \lambda^* $, it then proceeds to touch the grounded plate. The steady state is then lost, and we have a snap-through at a finite time, creating the so-called pull-in instability. A proposed model for the deflection is given by the evolution equation

$$\frac{\partial u}{\partial t} - \Delta u + d\Delta^2 u = \frac{\lambda f(x)}{(1-u)^2}\qquad\mbox{for}\qquad x\in\Omega,\ t > 0 $$
$$u(x,t) = d\frac{\partial u}{\partial \nu}(x,t) = 0 \qquad\mbox{for}\qquad x\in\partial\Omega,\ t > 0$$
$$u(x,0) = 0\qquad\mbox{for}\qquad x\in\Omega$$

Now unlike the model involving only the second order Laplacian (i.e., $ d = 0 $), very little is known about this equation. We shall explain how, besides the above practical considerations, the model is an extremely rich source of interesting mathematical phenomena.
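The pull-in instability is easy to observe numerically even in a toy setting. The sketch below (entirely my own illustration, not material from the talk) applies a finite-difference Newton solve to the stationary, one-dimensional, second-order case ($d=0$, $f\equiv 1$): steady states exist for small $ \lambda $ and cease to exist once $ \lambda $ passes the critical pull-in value $ \lambda^* $.

    import numpy as np

    # Stationary 1-D toy version of the model with d = 0 and f = 1:
    #   -u''(x) = lam / (1 - u(x))^2 on (0, 1),  u(0) = u(1) = 0,
    # solved by Newton's method on a finite-difference grid.
    def solve_deflection(lam, n=200, tol=1e-10, max_iter=50):
        h = 1.0 / (n + 1)
        u = np.zeros(n)                                   # interior values of u
        A = (np.diag(2.0 * np.ones(n))                    # matrix for -u''
             - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        for _ in range(max_iter):
            F = A @ u - lam / (1.0 - u) ** 2              # nonlinear residual
            J = A - np.diag(2.0 * lam / (1.0 - u) ** 3)   # Jacobian of F
            du = np.linalg.solve(J, -F)
            u += du
            if np.any(u >= 1.0) or not np.all(np.isfinite(u)):
                return None                               # plate touched z = 1
            if np.max(np.abs(du)) < tol:
                return u
        return None                                       # no convergence

    for lam in [0.5, 1.0, 1.5, 2.0]:
        u = solve_deflection(lam)
        if u is None:
            print("lambda = %.1f: no steady state found (pull-in)" % lam)
        else:
            print("lambda = %.1f: max deflection %.3f" % (lam, u.max()))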

Law of Large Numbers and Central Limit Theorem under Uncertainty, the related New Itô Calculus and Applications to Risk Measures

Speaker: 
Shige Peng
Date: 
Thu, Jul 9, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 

Let $ S_n= \sum_{i=1}^n X_i $ where $ \{X_i\}_{i=1}^\infty $ is a sequence of independent and identically distributed (i.i.d.) random variables with $ E[X_1]=m $. According to the classical law of large numbers (LLN), the sum $ S_n/n $ converges strongly to $ m $. Moreover, the well-known central limit theorem (CLT) tells us that, with $ m = 0 $ and $ s^2=E[X_1^2] $, for each bounded and continuous function $ \varphi $ we have $ \lim_n E[\varphi(S_n/\sqrt{n})]=E[\varphi(X)] $ with $ X \sim N(0, s^2) $.

These two fundamentally important results are widely used in probability, statistics and data analysis, as well as in many practical situations such as financial pricing and risk control. They provide a strong argument to explain why in practice normal distributions are so widely used. But a serious problem is that the i.i.d. condition is very difficult to satisfy in practice for most real-time processes, for which the classical trials and samplings become impossible and the uncertainty of probabilities and/or distributions cannot be neglected.

In this talk we present a systematic generalization of the above LLN and CLT. Instead of fixing a probability measure $ P $, we only assume that there exists an uncertain subset of probability measures $ \{P_q : q \in Q\} $. In this case a robust way to calculate the expectation of a financial loss $ X $ is its upper expectation: $ \hat{\mathbf{E}}[X]=\sup_{q \in Q} E_q[X] $, where $ E_q $ is the expectation under the probability $ P_q $. The corresponding distribution uncertainty of $ X $ is given by $ F_q(x)=P_q(X \leq x) $, $ q \in Q $. Our main assumptions are:

  1. The distributions of $ X_i $ are within an abstract subset of distributions $ \{F_q(x):q \in Q\} $, called the distribution uncertainty of $ X_i $, with $ \overline{m}=\hat{\mathbf{E}}[X_i]=\sup_q\int_{-\infty}^\infty x\,F_q(dx) $ and $ m=-\hat{\mathbf{E}}[-X_i]=\inf_q \int_{-\infty}^\infty x\,F_q(dx) $.
  2. Any realization of $ X_1, \ldots, X_n $ does not change the distribution uncertainty of $ X_{n+1} $ (a new type of 'independence').

Our new LLN is: for each continuous function $ \varphi $ of linear growth we have

$$\lim_{n\to\infty} \hat{\mathbf{E}}[\varphi(S_n/n)] = \sup_{m\leq v\leq \overline{m}} \varphi(v).$$

Namely, the distribution uncertainty of $ S_n/n $ is, approximately, $ \{ \delta_v : m \leq v \leq \overline{m}\} $.

In particular, if $ m=\overline{m}=0 $, then $ S_n/n $ converges strongly to 0. In this case, assume furthermore that $ \overline{s}^2=\hat{\mathbf{E}}[X_i^2] $ and $ s^2=-\hat{\mathbf{E}}[-X_i^2] $, $ i=1, 2, \ldots $. Then we have the following generalization of the CLT:

$$\lim_{n\to\infty} \hat{\mathbf{E}}[\varphi(S_n/\sqrt{n})]= \hat{\mathbf{E}}[\varphi(X)], \qquad L(X)\in N(0,[s^2,\overline{s}^2]).$$

Here $ N(0, [s^2, \overline{s}^2]) $ stands for a distribution uncertainty subset and $ \hat{\mathbf{E}}[\varphi(X)] $ is the corresponding upper expectation. The number $ \hat{\mathbf{E}}[\varphi(X)] $ can be calculated by defining $ u(t, x):=\hat{\mathbf{E}}[\varphi(x+\sqrt{t}\,X)] $, which solves the following PDE: $ \partial_t u= G(u_{xx}) $, with $ G(a):=\frac{1}{2}(\overline{s}^2a^+-s^2a^-) $.

An interesting situation is when $ \varphi $ is a convex function: then $ \hat{\mathbf{E}}[\varphi(X)]=E[\varphi(X_0)] $ with $ X_0 \sim N(0, \overline{s}^2) $. But if $ \varphi $ is a concave function, then the above $ \overline{s}^2 $ has to be replaced by $ s^2 $. This coincidence can be used to explain a well-known puzzle: many practitioners, particularly in finance, use normal distributions with 'dirty' data, and often with success. In fact, this is also a highly risky operation if the reasoning is not fully understood. If $ s=\overline{s} $, then $ N(0, [s^2, \overline{s}^2])=N(0, s^2) $, which is a classical normal distribution. The method of the proof is very different from the classical one, and a very deep regularity estimate for fully nonlinear PDEs plays a crucial role.
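A quick Monte Carlo illustration of this convex/concave dichotomy (my own sketch; the volatility bounds, sample size and test functions are arbitrary choices, not from the talk): since the upper expectation of a convex $ \varphi $ under $ N(0,[s^2,\overline{s}^2]) $ is attained at the upper variance and that of a concave $ \varphi $ at the lower variance, it can be approximated here simply by maximizing $ E[\varphi(\sigma Z)] $ over constant volatilities $ \sigma\in[s,\overline{s}] $, $ Z \sim N(0,1) $.

    import numpy as np

    # Monte Carlo check of the convex/concave coincidence described above.
    # Volatility bounds [s, s_bar] and test functions are illustrative only.
    rng = np.random.default_rng(0)
    z = rng.standard_normal(1_000_000)       # samples of Z ~ N(0, 1)
    s, s_bar = 0.5, 1.5                      # lower / upper volatility bounds

    def upper_expectation(phi):
        # For convex or concave phi the sup over the G-normal family reduces
        # to a sup over constant volatilities in [s, s_bar].
        sigmas = np.linspace(s, s_bar, 21)
        return max(np.mean(phi(sig * z)) for sig in sigmas)

    convex = lambda x: x ** 2                # convex:  answer is s_bar**2 = 2.25
    concave = lambda x: -np.abs(x)           # concave: answer is -s*sqrt(2/pi) ~ -0.399

    print(upper_expectation(convex))
    print(upper_expectation(concave))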

A type of combination of the LLN and CLT, which converges in law to a more general $ N([m, \overline{m}], [s^2, \overline{s}^2]) $-distribution, has been obtained. We also present our systematic research on the continuous-time counterpart of the above 'G-normal distribution', called G-Brownian motion, and the corresponding stochastic calculus of Itô's type, as well as its applications.
