This triplet of introductory lectures summarizes a few of the most basic biochemical models and the simple rate equations that they satisfy. I describe production-decay models with Michaelis-Menten and sigmoidal terms, showing how the latter can lead to bistable behaviour and hysteresis. I describe two bistable genetic circuits: the toggle switch of Gardner et al (2000) Nature 403, and the phage-lambda gene circuit of Hasty et al (2000) PNAS 97. The idea of bifurcations is discussed. Finally, I introduce
phosphorylation cycles, and show that sharp responses can arise when the enzymes responsible (a kinase and a phosphatase) operate near saturation (the so-called Goldbeter-Koshland ultrasensitivity).
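The sharp switching near enzyme saturation can be illustrated numerically. The following is a minimal sketch (my own illustration, not taken from the lectures) of the Goldbeter-Koshland function, which gives the steady-state phosphorylated fraction of a substrate in a kinase/phosphatase cycle; the parameter values are illustrative assumptions.

```python
import math

def gk(v1, v2, J1, J2):
    """Goldbeter-Koshland function: steady-state phosphorylated fraction
    of a substrate in a covalent-modification cycle.
    v1, v2: maximal kinase and phosphatase rates;
    J1, J2: Michaelis constants scaled by total substrate."""
    B = v2 - v1 + v2 * J1 + v1 * J2
    return 2.0 * v1 * J2 / (B + math.sqrt(B * B - 4.0 * (v2 - v1) * v1 * J2))

# Enzymes near saturation (J1, J2 << 1): an almost all-or-none switch
# as the kinase rate v1 crosses the phosphatase rate v2.
sharp_low  = gk(0.9, 1.0, 0.01, 0.01)  # well below 0.5
sharp_high = gk(1.1, 1.0, 0.01, 0.01)  # well above 0.5

# Enzymes far from saturation (J1, J2 ~ 1): a graded, shallow response.
graded_low  = gk(0.9, 1.0, 1.0, 1.0)
graded_high = gk(1.1, 1.0, 1.0, 1.0)
```

With J1 = J2 = 0.01, a 10% change in the kinase-to-phosphatase ratio around 1 flips the phosphorylated fraction from under 10% to over 90%, whereas with J1 = J2 = 1 the same change moves it only a few percent.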
The mathematical context of the third story, Small Number and the Basketball Tournament, contains some basic principles of combinatorics. The plot of the story and the closing question are structured in a manner that allows the moderator to introduce the notions of permutations and combinations. Since the numbers used in the story are relatively small, the moderator can encourage the young audience to explore these ideas on their own. Mathematics is also present in the background: Small Number and his friends do mathematics after school in the Aboriginal Friendship Centre; he loves playing the game of Set, and when he comes home his sister is just finishing her math homework. Small Number and his friend would like to participate in a big half-court tournament, and so on.
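Since the story's closing question is not reproduced here, the following sketch uses hypothetical numbers in the spirit of the story to show the distinction between combinations and permutations that the moderator can introduce.

```python
from math import comb, perm

# Hypothetical example: picking a 3-player half-court team from 12 friends.
teams = comb(12, 3)    # order does not matter: 12!/(3! * 9!) = 220
lineups = perm(12, 3)  # order matters: 12 * 11 * 10 = 1320
```

The two counts differ exactly by the 3! = 6 orderings of each chosen trio, a relationship small enough for the audience to verify by hand.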
This opening lecture lists some of the questions and issues propelling current research in Cell Biology and modelling in this field. I introduce basic features of eukaryotic cells that can crawl, and explain briefly the role of the actin cytoskeleton in cell motility. I also introduce the biochemical signalling that regulates the cytoskeleton and the concept of cell polarization. By simplifying the
enormously complex signalling networks, and applying tools of mathematics (nonlinear dynamics, scaling, bifurcations), we can hope to gain some understanding of a few of the basic mechanisms that are responsible for symmetry breaking, robustness, pattern formation, self-assembly, and other cell-level phenomena.
Central to Alan Turing's posthumous reputation is his work with British codebreaking during the Second World War. This relationship is not well understood, largely because it stands at the intersection of two technical fields, mathematics and cryptology, the second of which has also been shrouded in secrecy. This lecture will assess this relationship from an historical cryptological perspective. It treats the mathematization and mechanization of cryptology between 1920 and 1950 as international phenomena. It assesses Turing's role in one important phase of this process: British work at Bletchley Park in developing cryptanalytical machines for use against Enigma in 1940-41. It also focuses on his interest in, and work with, cryptographic machines between 1942 and 1946, and concludes that this work served as a seedbed for the development of his thinking about computers.
While Turing is best known for his abstract concept of a "Turing Machine," he did design (but not build) several other machines - particularly ones involved in code breaking and early computing. While Turing was a fine mathematician, he could not be trusted to construct the machines he designed - he would almost always break some delicate piece of equipment if he tried to do anything practical.
The early code-breaking machines (known as "bombes", from the Polish word for bomb, reportedly because of their loud ticking noise) were not designed by Turing, but he had a hand in several later machines known as "Robinsons" and eventually in the Colossus machines.
After the War he worked on an electronic computer design for the National Physical Laboratory - an innovative design unlike the other computing machines being considered at the time. He left the NPL before the machine was operational but made other contributions to early computers such as those being constructed at Manchester University.
This talk will describe some of the ideas behind these machines.
Many scientific questions are considered solved to the best possible degree when we have a method for computing a solution. This is especially true in mathematics and those areas of science in which phenomena can be described mathematically: one only has to think of the methods of symbolic algebra for solving equations, or of laws of physics which allow one to calculate unknown quantities from known measurements. The crowning achievement of mathematics would thus be a systematic way to compute the solution to any mathematical problem. The hope that this was possible was perhaps first articulated by the 18th-century mathematician-philosopher G. W. Leibniz. Advances in the foundations of mathematics in the early 20th century made it possible, in the 1920s, to first formulate the question of whether there is such a systematic way to find a solution to every mathematical problem. This became known as the decision problem (Entscheidungsproblem), and it was considered a major open problem in the 1920s and 1930s. Alan Turing solved it in his first, groundbreaking paper "On computable numbers" (1936). In order to show that there cannot be a systematic computational procedure that solves every mathematical question, Turing had to provide a convincing analysis of what a computational procedure is. His abstract, mathematical model of computability is that of a Turing machine. He showed that no Turing machine, and hence no computational procedure at all, could solve the Entscheidungsproblem.
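Turing's abstract model of a computational procedure can be made concrete in a few lines of code. The following is a minimal sketch (my own illustration, not Turing's formulation) of a one-tape Turing machine simulator, together with a two-rule machine that computes the unary successor.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.
    rules maps (state, symbol) -> (new_state, write_symbol, move),
    with move being -1 (left) or +1 (right).
    The machine halts when no rule applies to the current configuration."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in rules:
            break  # no applicable rule: halt
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Unary successor: scan right past the 1s, write one more 1, then halt.
succ = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", +1),
}
```

Running the machine on the unary numeral "111" yields "1111", and on the empty tape it yields "1": the successor of three is four, and of zero is one. The point of the model is not this toy example but that any computational procedure whatsoever can be expressed by such a rule table, which is what gives Turing's negative answer to the Entscheidungsproblem its force.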
Many multicellular organisms exhibit remarkably similar patterns of aging and mortality. Because this phenomenon appears to arise from the complex interaction of many genes, it has been a challenge to explain it quantitatively as a response to natural selection. I survey attempts by my collaborators and me to build a framework for understanding how mutation, selection and recombination, acting on many genes, combine to shape the distribution of genotypes in a large population. A genotype drawn at random from the population at a given time is described in our model by a Poisson random measure on the space of loci, and hence its distribution is characterized by the associated intensity measure. The intensity measures evolve according to a continuous-time, measure-valued dynamical system. I present general results on the existence and uniqueness of this dynamical system, on how it arises as a limit of discrete-generation systems, and on the nature of its equilibria.