A continuation of last week's talk.
We consider multi-objective convex optimal control problems. First we state a relationship between the (weakly or properly) efficient set of the multi-objective problem and the solution of the problem scalarized via a convex combination of objectives through a vector of parameters (or weights). Then we establish that (i) the solution of the scalarized (parametric) problem for any given parameter vector is unique and (weakly or properly) efficient and (ii) for each solution in the (weakly or properly) efficient set, there exists at least one corresponding parameter vector for the scalarized problem yielding the same solution. Therefore the set of all parametric solutions (obtained by solving the scalarized problem) is equal to the efficient set. Next we consider an additional objective over the efficient set. Based on the main result, the new objective can instead be considered over the (parametric) solution set of the scalarized problem. For the purpose of constructing numerical methods, we point to existing solution differentiability results for parametric optimal control problems. We propose numerical methods and give an example application to illustrate our approach. This is joint work with Henri Bonnel (University of New Caledonia).
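The weighted-sum scalarization described above can be illustrated on a toy problem (my own example, not from the talk): two convex objectives of one variable, scalarized by a weight w, whose minimizers sweep out the efficient set as w varies.

```python
# Illustrative sketch only: weighted-sum scalarization of a toy
# bi-objective convex problem min (f1, f2) with f1(x) = x^2 and
# f2(x) = (x - 2)^2. The scalarized problem min_x w*f1(x) + (1-w)*f2(x)
# has the unique solution x*(w) = 2*(1 - w), so sweeping w over (0, 1)
# traces out the efficient set, here the interval [0, 2].

def f1(x):
    return x * x

def f2(x):
    return (x - 2.0) ** 2

def solve_scalarized(w, iters=200):
    """Minimize w*f1 + (1-w)*f2 by gradient descent; the objective is a
    strongly convex quadratic, so a small fixed step converges."""
    x = 5.0  # arbitrary starting point
    step = 0.25
    for _ in range(iters):
        grad = 2.0 * w * x + 2.0 * (1.0 - w) * (x - 2.0)
        x -= step * grad
    return x

# Each weight gives one (weakly) efficient solution.
weights = (0.1, 0.25, 0.5, 0.75, 0.9)
pareto_points = [solve_scalarized(w) for w in weights]
```

Sweeping a denser grid of weights fills out the whole efficient set, which is the content of the equivalence stated in the abstract for the convex case.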
This talk will introduce fractal transformations and some of their remarkable properties. I will explain the mathematics that sustains them and how to construct them in simple cases. In particular I hope to demonstrate a very recent result, showing how they can be applied to generate convenient mutually-singular measures that enable the storage of multiple images within a single image. The talk will include some beautiful computer graphics.
I will describe four recent theorems, developed jointly with Andrew Vince and David C. Wilson (both of the University of Florida), that reveal a surprisingly rich theory associated with an attractor of a projective iterated function system (IFS). The first theorem characterizes when a projective IFS has an attractor that avoids a hyperplane. The second theorem establishes that a projective IFS has at most one attractor. In the third theorem, the classical duality between points and hyperplanes in projective space leads to connections between attractors that avoid hyperplanes and repellers that avoid points; an associated index, which is a nontrivial projective invariant, is defined. I will link these results to the Conley decomposition theorem.
Hyperbolic geometry will be introduced, and visualised via the Cinderella software suite. Simple constructions will be performed and compared and contrasted with Euclidean geometry. Constructions and examples will be quite elementary. Audience participation, specifically suggestions for constructions to attempt, during the demonstration is actively encouraged. The speaker apologises in advance for not being nearly as knowledgeable of the subject as he probably ought to be.
Scenario Trees are compact representations of processes by which information becomes available. They are most commonly used when solving stochastic optimization problems with recourse, but they have many other uses. In this talk we discuss two uses of scenario trees: computing the value of hydroelectricity in a regulated market; and updating parameters of epidemiological models based on observations of syndromic data. In the former, we investigate the impact that the size and shape of the tree has on the dual price of the first stage demand constraint. In the latter, we summarize a simulation of epidemics and syndromic behaviors on a tree, then identify the subtree most likely to match observed data.
Mineral freight volume increases are driving transport infrastructure investments on Australia's east and west coasts. New and upgraded railways, roads and ports are planned or are under construction -- to serve new mines, processing facilities and international markets. One of the fastest growing regions is Northern Queensland, central to which is the so-called Northern Economic Triangle that has Rockhampton, Mt Isa and Townsville at its vertices. CSIRO has been working with the Queensland Government to construct a new GIS-based infrastructure planning optimisation system known as the Infrastructure Futures Analysis Platform (IFAP). IFAP can be used to build long-term (e.g. 25-year) plans for infrastructure development in regions such as the Northern Economic Triangle. IFAP consists of a commercial Geographic Information System (MapInfo), a database and a network optimisation solver that has been constructed by CSIRO and will ultimately be open-sourced. The prototype IFAP is nearing completion and in this presentation I will discuss the development process and the underlying network optimisation problem.
Joint work with Kim Levy, Andreas Ernst, Gaurav Singh, Stuart Woodman, Andrew Higgins, Leorey Marquez, Olena Gavriliouk and Dhananjay Thiruvady
Nonconvex/nonsmooth phenomena appear naturally in many complex systems. In static systems and global optimization problems, the nonconvexity usually leads to multiple solutions of the related governing equations. Each of these solutions represents a certain possible state of the system. How to identify the global and local stability and extremality of these critical solutions is a challenging task in nonconvex analysis and global optimization. The classical Lagrangian-type methods and the modern Fenchel-Moreau-Rockafellar duality theories usually produce the well-known duality gap. It turns out that many nonconvex problems in global optimization and computational science are considered to be NP-hard. In nonlinear dynamics, the so-called chaotic behavior is mainly due to nonconvexity of the objective functions. In nonlinear variational analysis and partial differential equations, the existence of nonsmooth solutions has been considered an outstanding open problem.
In this talk, the speaker will present a potentially useful canonical duality theory for solving a class of optimization and control problems in complex systems. Starting from a very simple cubic nonlinear equation, the speaker will show that the optimal solutions for nonconvex systems are usually nonsmooth and cannot be captured by traditional local analysis and Newton-type methods. Based on the fundamental definitions of the objectivity and isotropy in continuum physics, the canonical duality theory is naturally developed, and can be used for solving a large class of nonconvex/nonsmooth/discrete problems in complex systems. The results illustrate the important fact that smooth analytic or numerical solutions of a nonlinear mixed boundary-value problem might not be minimizers of the associated variational problem. From a dual perspective, the convergence (or non-convergence) of the FDM is explained and numerical examples are provided. This talk should bring some new insights into nonconvex analysis, global optimization, and computational methods.
We shall continue from last week, performing more hyperbolic constructions, as well as some elliptic constructions for comparison. In particular, some alternating projection algorithms will be explored in hyperbolic space. With some luck we shall confound some more of our Euclidean intuitions. Audience participation is actively encouraged.
Machine learning problems are a particularly rich source of applications for sparse optimization, giving rise to a number of formulations that require specialized solvers and structured, approximate solutions. As case studies, we discuss two such applications - sparse SVM classification and sparse logistic regression - and present algorithms that are assembled from different components, including stochastic gradient methods, random approximate matrix factorizations, block coordinate descent, and projected Newton methods. We also describe a third (distantly related) application to selection of captive breeding populations for endangered species using binary quadratic programming, a project started during a visit to Newcastle in June 2009.
CARMA reflects changes in the mathematical research being undertaken at Newcastle. Mathematics is "the language of high technology" which underpins all facets of modern life, while current Information and Communication Technology (ICT) has become ubiquitous. No other research centre exists which focuses on the implications of developments in ICT, present and future, for the practice of research mathematics. CARMA fills this gap through the exploitation and development of techniques and tools for computer-assisted discovery and disciplined data-mining, including mathematical visualization. Advanced mathematical computation is equally essential to the solution of real-world problems; sophisticated mathematics forms the core of software used by decision-makers, engineers, scientists, managers and those who design, plan and control the products and systems which are key to present-day life.
This talk is aimed at being accessible to all.
Sebastian will be presenting Lagrangian Relaxation and the Cost Splitting Dual on Monday July 5 at 4pm. He will discuss when the Cost Splitting Dual can provide a better bound than the Lagrangian Relaxation or regular linear relaxation and apply the theory learned to an example.
In Euclidean space the medians of a triangle meet at a point that divides each median in the ratio 2 to 1. That point is called the centroid. Cinderella tells us that the medians of a triangle in hyperbolic space meet at a point, but the medians do not divide each other in any fixed ratio. What characterises that point? One answer is that it is the centre of mass of equal-mass particles placed at the vertices. I will outline how one can define the centre of mass of a set of particles (points) in a Riemannian manifold, and how one can understand this in terms of the exponential map. This centre of mass, or geometric mean, is sometimes called the Karcher mean (apparently first introduced by Cartan!). I will attempt to show what this tells us about the medians of a triangle.
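A minimal sketch (my own illustration, not from the talk) of the Karcher-mean iteration on the unit sphere: repeatedly pull the points back to the tangent space at the current estimate via the logarithm map, average there, and move along the exponential map.

```python
import numpy as np

def log_map(m, x):
    """Tangent vector at m pointing toward x, with length equal to the
    geodesic distance d(m, x) on the unit sphere."""
    c = np.clip(np.dot(m, x), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros(3)
    return (theta / np.sin(theta)) * (x - c * m)

def exp_map(m, v):
    """Follow the geodesic from m in direction v for length |v|."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return m
    return np.cos(t) * m + np.sin(t) * (v / t)

def karcher_mean(points, iters=100):
    """Fixed-point iteration m <- exp_m(average of log_m(x_i))."""
    m = points[0].copy()
    for _ in range(iters):
        v = np.mean([log_map(m, p) for p in points], axis=0)
        m = exp_map(m, v)
    return m

# Three equal-mass points at the same latitude, 120 degrees apart in
# longitude: by symmetry their centre of mass is the north pole.
phi = np.pi / 4
pts = [np.array([np.sin(phi) * np.cos(l), np.sin(phi) * np.sin(l), np.cos(phi)])
       for l in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
mean = karcher_mean(pts)
```

The iteration converges for points contained in a suitably small geodesic ball, which is the case here.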
In this talk we will present some results recently obtained in collaboration with B.F. Svaiter on maximal monotone operators in nonreflexive Banach spaces. The focus will be on the use of the concept of a convex representation of a maximal monotone operator for obtaining results on these operators such as: surjectivity of perturbations by duality mappings, uniqueness of the extension to the bidual, the Brondsted-Rockafellar property, etc.
Several types of subdifferentials will be introduced on Riemannian manifolds. We'll show their properties and applications, including to spectral functions, the Borwein-Preiss variational principle, and distance functions.
We study the existence and approximation of fixed points of several types of Bregman nonexpansive operators in reflexive Banach spaces.
Please note the change in our usual day from Monday to Tuesday.
The Hu-Washizu formulation in elasticity is the mother of many different finite element methods in engineering computation. We present some modified Hu-Washizu formulations and their performance in removing the locking effect in nearly incompressible elasticity. Stabilisation of the standard Hu-Washizu formulation is used to obtain the stabilised nodal strain formulation, or node-based uniform strain elements. However, we show that the standard or stabilised nodal strain formulation should be modified to obtain a uniformly convergent finite element approximation in the nearly incompressible case.
The classical prolate spheroidal wavefunctions (prolates) arise when solving the Helmholtz equation by separation of variables in prolate spheroidal coordinates. They interpolate between Legendre polynomials and Hermite functions. In a beautiful series of papers published in the Bell Labs Technical Journal in the 1960s, they were rediscovered by Landau, Slepian and Pollak in connection with the spectral concentration problem. After years spent out of the limelight while wavelets drew the focus of mathematicians, physicists and electrical engineers, the popularity of the prolates has recently surged through their appearance in certain communication technologies. In this talk we discuss the remarkable properties of these functions, the "lucky accident" which enables their efficient computation, and give details of their role in the localised sampling of bandlimited signals.
In vehicle routing problems (VRPs), a fleet of vehicles must be routed to service the demands of a set of customers in a least-cost fashion. VRPs have been studied extensively by operations researchers for over 50 years. Due to their complexity, VRPs generally cannot be solved optimally, except for very small instances, so researchers have turned to heuristic algorithms that can generate high-quality solutions in reasonable run times. Along these lines, we develop novel integer programming-based heuristics for several different VRPs. We apply our heuristics to benchmark problems in the literature and report computational results to demonstrate their effectiveness.
We discuss a class of sums which involve complex powers of the distance to points in a two-dimensional square lattice and trigonometric functions of their angle. We give a general expression which permits numerical evaluation of members of the class of sums to arbitrary order. We use this to illustrate numerically the properties of trajectories along which the real and imaginary parts of the sums are zero, and we show results for the first two of a particular set of angular sums (denoted C(1, 4m; s)) which indicate that their density of zeros on the critical line of the complex exponent is the same as that for the product (denoted C(0, 1; s)) of the Riemann zeta function and the Catalan beta function. We then introduce a function which is the quotient of the angular lattice sums C(1, 4m; s) with C(0, 1; s), and use its properties to prove that C(1, 4m; s) obeys the Riemann hypothesis for any m if and only if C(0, 1; s) obeys the Riemann hypothesis. We furthermore prove that if the Riemann hypothesis holds, then C(1, 4m; s) and C(0, 1; s) have the same distribution of zeros on the critical line (in a sense made precise in the proof).
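The basic sum C(0, 1; s) mentioned above can be checked numerically: summing 1/(p^2 + q^2)^s over the nonzero lattice points reproduces the product of the Riemann zeta and Catalan beta functions (a standard identity; the truncation radius below is my own choice).

```python
import math

# Numerical sanity check: the square lattice sum
#   C(0, 1; s) = sum over (p, q) != (0, 0) of 1/(p^2 + q^2)^s
# equals 4 * zeta(s) * beta(s). For s = 2 this is 4 * (pi^2/6) * G,
# where G is Catalan's constant (= beta(2)).

def lattice_sum(s, N):
    """Truncated sum over the square [-N, N]^2, origin excluded."""
    total = 0.0
    for p in range(-N, N + 1):
        for q in range(-N, N + 1):
            if p == 0 and q == 0:
                continue
            total += 1.0 / (p * p + q * q) ** s
    return total

CATALAN = 0.915965594177219  # Catalan's constant G = beta(2)
approx = lattice_sum(2, 300)
exact = 4.0 * (math.pi ** 2 / 6.0) * CATALAN
```

The truncation error of the box sum at s = 2 decays like 1/N^2, so N = 300 already gives four or five correct digits.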
I will describe the history and some recent research on a subject with a remarkable pedigree.
We discuss the Hahn-Banach-Lagrange theorem, a generalized form of the Hahn-Banach theorem. As applications, we derive various results on the existence of linear functionals in functional analysis, on the existence of Lagrange multipliers for convex optimization problems, with an explicit sharp lower bound on the norm of the solutions (multipliers), on finite families of convex functions (leading rapidly to a minimax theorem), on the existence of subgradients of convex functions, and on the Fenchel conjugate of a convex function. We give a complete proof of Rockafellar's version of the Fenchel duality theorem, and an explicit sharp lower bound for the norm of the solutions of the Fenchel duality theorem in terms of elementary geometric concepts.
Computation of definite integrals to high precision (typically several hundred digit precision) has emerged as a particularly fruitful tool for experimental mathematics. In many cases, integrals with no known analytic evaluations have been experimentally evaluated (pending subsequent formal proof) by applying techniques such as the PSLQ integer relation algorithm to the output numerical values. In other cases, intriguing linear relations have been found in a class of related integrals, relations which have subsequently been proven as instances of more general results. In this lecture, Bailey will introduce the two principal algorithms used for high-precision integration, namely Gaussian quadrature and tanh-sinh quadrature, with some details on efficient computer implementations. He will also present numerous examples of new mathematical results obtained, in part, by using these methods.
In a subsequent lecture, Bailey will discuss the PSLQ algorithm and give the details of efficient multi-level and parallel implementations.
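A bare-bones sketch of the tanh-sinh rule named above (not Bailey's implementation; real codes choose the step h adaptively and work in multiple precision). The change of variable x = tanh((pi/2) sinh(t)) pushes the endpoints to infinity, so the weights decay doubly exponentially and endpoint singularities are tamed.

```python
import math

def tanh_sinh(f, h=0.05, J=62):
    """Approximate the integral of f over [-1, 1] by the tanh-sinh rule
    with step h and nodes j = -J, ..., J."""
    total = 0.0
    for j in range(-J, J + 1):
        u = (math.pi / 2.0) * math.sinh(j * h)
        x = math.tanh(u)
        w = h * (math.pi / 2.0) * math.cosh(j * h) / math.cosh(u) ** 2
        fx = f(x)
        if math.isfinite(fx):  # guard against overflow at the extreme nodes
            total += w * fx
    return total

# Endpoint (logarithmic) singularity at x = -1:
#   integral over [-1, 1] of log((x + 1)/2) / 2 equals -1.
val = tanh_sinh(lambda x: 0.5 * math.log((x + 1.0) / 2.0))
```

Despite the singular integrand, the double-exponential weight decay gives many correct digits with only 125 function evaluations.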
Historically, fossil fuels have been vital for our global energy needs, but climate change is prompting a re-examination of the role fossil fuel production will play in meeting those needs. In order to plan appropriately for our future energy needs, a new detailed model of fossil fuel supply is required. The modelling applied an algorithm-based approach to predict both supply and demand for coal, gas, oil and total fossil fuel resources. Total fossil fuel demand was calculated globally, based on world population and per capita demand; while production was calculated on a country-by-country basis and summed to obtain global production. Notably, production over the lifetime of a fuel source was not assumed to be symmetrical about a peak value like that depicted by a Hubbert curve. Separate production models were developed for mining (coal and unconventional oil) and field (gas and conventional oil) operations, which reflected the basic differences in extraction and processing techniques. Both of these models included a number of parameters that were fitted to historical production data, including: (1) coal for New South Wales, Australia; (2) gas from the North Sea, UK; and (3) oil from the North Sea, UK, and individual state data from the USA.
In this talk we will focus our attention on certain regularization techniques related to two operations involving monotone operators: point-wise sums of maximal monotone operators and pre-compositions of such operators with linear continuous mappings. These techniques, whose underlying idea is to obtain a bigger operator as a result, lead to two concepts of generalized operations: extended and variational sums of maximal monotone operators and, correspondingly, extended and variational compositions of monotone mappings with linear continuous operators. We will review some of the basic results concerning these generalized concepts, as well as present some recent important advances.
Given a pair of Banach spaces X and Y such that one is the dual of the other, we study the relationships between generic Fréchet differentiability of convex continuous functions on Y (Asplund property), generic existence of linear perturbations for lower semicontinuous functions on X to have a strong minimum (Stegall variational principle), and dentability of bounded subsets of X (Radon-Nikodým property).
The PSLQ algorithm is an algorithm for finding integer relations in a set of real numbers. In particular, if (x1, ..., xn) is a vector of real numbers, then PSLQ finds integers (a1, ..., an), not all zero, such that a1*x1 + a2*x2 + ... + an*xn = 0, if such integers exist. In practice, PSLQ finds a sequence of matrices B_n such that if x is the original vector, then the reduced vector y = x * B_n tends to have smaller and smaller entries, until one entry is zero (or a very small number commensurate with precision), at which point an integer relation has been detected. PSLQ also produces a sequence of bounds on the size of any possible integer, which bounds grow until either precision is exhausted or a relation has been detected.
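A quick demonstration of the algorithm just described (this assumes the mpmath library, whose pslq routine implements PSLQ). The golden ratio phi satisfies phi^2 = phi + 1, so PSLQ applied to the vector (1, phi, phi^2) should recover that integer relation.

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 30  # work with 30 significant digits

phi = (1 + sqrt(5)) / 2
# pslq returns integers (a1, a2, a3), not all zero, with
# a1*1 + a2*phi + a3*phi^2 = 0 (here, up to sign, (1, 1, -1)).
rel = pslq([mpf(1), phi, phi**2])
residual = rel[0] + rel[1] * phi + rel[2] * phi**2
```

As the abstract notes, in practice one must supply the inputs to sufficient precision; with too few digits PSLQ either finds spurious relations or exhausts its bound.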
The fundamental duality formula (see Zalinescu, "Convex Analysis in General Vector Spaces", Theorem 2.7.1) is extended to functions mapping into the power set of a topological linear space with a convex cone which is not necessarily pointed. Pairs of linear functionals are used as dual variables instead of linear operators. The talk will consist of three parts. First, motivations and explanations are given for the infimum approach to set-valued optimization. It deviates from other approaches, and it seems to be the only way to obtain a theory which completely resembles the scalar case. In the second part, the main results are presented, namely the fundamental duality formula and several conclusions. The third part deals with duality formulas for set-valued risk measures, a cutting edge development in mathematical finance. It turns out that the proposed duality theory for set-valued functions provides a satisfying framework not only for set-valued risk measures, but also for no-arbitrage and superhedging theorems in conical market models.
An Asplund space is a Banach space which possesses desirable differentiability properties enjoyed by Euclidean spaces. Many characterisations of such spaces fall into two classes: (i) those where an equivalent norm possesses a particular general property, (ii) those where every equivalent norm possesses a particular property at some points of the space. For example: (i) X is an Asplund space if there exists an equivalent norm Frechet differentiable on the unit sphere of the space, (ii) X is an Asplund space if every equivalent norm is Frechet differentiable at some point of its unit sphere. In 1993 (F-P) showed that (i) X is an Asplund space if there exists an equivalent norm strongly subdifferentiable on the unit sphere of the space, and in 1995 (G-M-Z) showed that (ii) a separable X is an Asplund space if every equivalent norm is strongly subdifferentiable at a nonzero point of X. Problem: Is this last result true for non-separable spaces? In 1994 (C-P) showed (i) X is an Asplund space if there exists an equivalent norm with subdifferential mapping Hausdorff weak upper semicontinuous on its unit sphere. We show: (ii) X is an Asplund space if every continuous gauge on X has a point where its subdifferential mapping is Hausdorff weak upper semicontinuous with weakly compact image, which goes some way towards solving the problem.
This is an expository talk about (conjectural) hypergeometric evaluations of the lattice sums
$F(a,b,c,d)=(a+b+c+d)^2\sum_{n_j=-\infty,\ j=1,2,3,4}^\infty \frac{(-1)^{n_1+n_2+n_3+n_4}}{(a(6n_1+1)^2+b(6n_2+1)^2+c(6n_3+1)^2+d(6n_4+1)^2)^2}$
which arise as the values of L-functions of certain elliptic curves.
Riemannian manifolds constitute a broad and fruitful framework for the development of different fields in mathematics, such as convex analysis, dynamical systems, optimization and mathematical programming, among other scientific areas, to which some approaches and methods have successfully been extended from Euclidean spaces. Nonpositive sectional curvature is an important property enjoyed by a large class of differentiable manifolds, so Hadamard manifolds, which are complete simply connected Riemannian manifolds of nonpositive sectional curvature, have proved a suitable setting for diverse disciplines.
On the other hand, the study of the class of nonexpansive mappings has become an active research area in nonlinear analysis. This is due to the connection with the geometry of Banach spaces along with the relevance of these mappings in the theory of monotone and accretive operators.
We study the problems that arise in the interface between the fixed point theory for nonexpansive type mappings and the theory of monotone operators in the setting of Hadamard manifolds. Different classes of monotone and accretive set-valued vector fields and the relationship between them will be presented, followed by the study of the existence and approximation of singularities for such vector fields. Then we analyze the problem of finding fixed points of nonexpansive type mappings and the connection with monotonicity. As a consequence, variational inequality and minimization problems in this setting will be discussed.
The term "closed form" is one of those mathematical notions that is commonplace, yet virtually devoid of rigor. And, there is disagreement even on the intuitive side; for example, most everyone would say that π + log 2 is a closed form, but some of us would think that the Euler constant γ is not closed. Like others before us, we shall try to supply some missing rigor to the notion of closed forms and also to give examples from modern research where the question of closure looms both important and elusive.
This talk accompanies a paper by Jonathan M. Borwein and Richard E. Crandall, to appear in the Notices of the AMS, which is available at http://www.carma.newcastle.edu.au/~jb616/closed-form.pdf
The design of signals with specified frequencies has applications in numerous fields including acoustics, antenna beamforming, digital filters, optics, radar, and time series analysis. It is often desirable to concentrate signal intensity in certain locations, and design methods for this have been intensively studied and are well understood. However, these methods assume that the specified frequencies consist of an interval of integers. What happens when this assumption fails is almost a complete mystery, one this talk will attempt to address.
In the area of Metric Fixed Point Theory, one of the outstanding questions was whether the fixed point property implies reflexivity. This question was answered in the negative in 2008 by P.K. Lin, when he showed that a certain renorming of the space of absolutely summable sequences has the fixed point property.
In this talk we will show a general way to renorm certain spaces so that they have the fixed point property. We will also give general properties that allow a given Banach space to enjoy the f.p.p., and show equivalences between certain geometrical properties and certain fixed point properties.
There are many algorithms with which linear programs (LPs) can be solved (Fourier-Motzkin, simplex, barrier, ellipsoid, subgradient, bundle, ...). I will provide a very brief review of these methods and their advantages and disadvantages. An LP solver is the main ingredient of every solution method (branch & bound, cutting planes, column generation, ...) for (NP-hard) mixed-integer linear programs (MIPs). What combinations of which techniques work well in practice? There is no general answer. I will show, by means of many practical examples from my research group (telecommunication, transport, traffic and logistics, energy, ...), how large scale LPs and MIPs are successfully attacked today.
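A small worked example (my own, using scipy's linprog front-end to the HiGHS solvers, which implement the simplex and barrier methods mentioned above). We maximize x1 + 2*x2 subject to x1 + x2 <= 3, x1 <= 2, x >= 0, written in scipy's minimization form.

```python
from scipy.optimize import linprog

# Maximize x1 + 2*x2 by minimizing its negation; the optimum is at the
# vertex (0, 3) with objective value 6.
res = linprog(
    c=[-1.0, -2.0],                 # minimize -(x1 + 2*x2)
    A_ub=[[1.0, 1.0], [1.0, 0.0]],  # x1 + x2 <= 3,  x1 <= 2
    b_ub=[3.0, 2.0],
    bounds=[(0, None), (0, None)],  # x1, x2 >= 0
    method="highs",
)
```

Exactly such an LP solve, repeated at every node of a branch & bound tree, is the workhorse inside MIP codes.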
The log-concavity of a sequence is a much studied concept in combinatorics with surprising links to many other mathematical fields. In this talk we discuss the stronger but much less studied notion of m-fold log-concavity, which has recently received some attention after Boros and Moll conjectured that a "remarkable" sequence encountered in the integration of an inverse cubic is infinitely log-concave. In particular, we report on a recent result of Branden which implies infinite log-concavity of the binomial coefficients, and on other new developments. Examples and conjectures are promised.
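The notion is easy to experiment with (the code below is my own illustration): the operator L maps a sequence (a_k) to (a_k^2 - a_{k-1} a_{k+1}), with zeros beyond the ends, and a sequence is m-fold log-concave if m successive applications of L stay nonnegative. Branden's result, as stated above, implies every row of Pascal's triangle survives arbitrarily many applications; here we check a few iterations on the row n = 10.

```python
from math import comb

def log_concavity_operator(a):
    """Apply L: (a_k) -> (a_k^2 - a_{k-1} * a_{k+1}), treating entries
    outside the sequence as zero."""
    padded = [0] + list(a) + [0]
    return [padded[k] ** 2 - padded[k - 1] * padded[k + 1]
            for k in range(1, len(padded) - 1)]

row = [comb(10, k) for k in range(11)]  # binomial coefficients C(10, k)
iterates = [row]
for _ in range(5):
    iterates.append(log_concavity_operator(iterates[-1]))
```

Ordinary log-concavity is the statement that the first iterate is nonnegative; infinite log-concavity asserts this for every iterate.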
Accurate computer recognition of handwritten mathematics offers to provide a natural interface for mathematical computing, document creation and collaboration. Mathematical handwriting, however, provides a number of challenges beyond what is required for the recognition of handwritten natural languages. For example, it is usual to use symbols from a range of different alphabets and there are many similar-looking symbols. Many writers are unfamiliar with the symbols they must use and therefore write them incorrectly. Mathematical notation is two-dimensional and size and placement information is important. Additionally, there is no fixed vocabulary of mathematical "words" that can be used to disambiguate symbol sequences. On the other hand there are some simplifications. For example, symbols do tend to be well-segmented. With these characteristics, new methods of character recognition are important for accurate handwritten mathematics input.
An informal one-day workshop on Multi Zeta Values will be held on Wed 20th Oct, from 12.30 pm to 6:00 pm. There will be talks by Laureate Professor Jonathan Borwein (Newcastle), Professor Yasuo Ohno (Kinki University, Osaka), and A/Professor Wadim Zudilin (Newcastle), as well as by PhD students from the two universities, followed by a dinner. If you are interested in attending, please inform Juliane Turner Juliane.Turner@newcastle.edu.au so that we can plan for the event.
The matching polynomial is a topic at the crossroads of mathematics, statistical physics (the dimer-monomer problem) and chemistry (topological resonance energy). In this talk we will discuss the computation of the matching polynomial and the location of its roots. We show that the roots of matching generating polynomials of graphs are dense in (−∞, 0], and that the roots of matching polynomials of graphs are dense in (−∞, +∞), which answers a problem of Brown et al. (see Journal of Algebraic Combinatorics, 19, 273-282, 2004). Some similar results for the characteristic polynomial, independence polynomial and chromatic polynomial are also presented.
[Also speaking: Prof Weigen Yan]
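A small computation in the spirit of the talk (my own illustration): the matching polynomial mu(P_n, x) of the path graph P_n satisfies the recurrence mu(P_n) = x * mu(P_{n-1}) - mu(P_{n-2}), with mu(P_0) = 1 and mu(P_1) = x. By the Heilmann-Lieb theorem all of its roots are real, which we verify numerically for P_8.

```python
import numpy as np

def path_matching_poly(n):
    """Coefficients (highest degree first) of mu(P_n, x), built from the
    recurrence mu(P_n) = x * mu(P_{n-1}) - mu(P_{n-2})."""
    prev, cur = np.array([1.0]), np.array([1.0, 0.0])  # mu(P_0), mu(P_1)
    for _ in range(n - 1):
        nxt = np.append(cur, 0.0)  # multiply mu(P_{n-1}) by x
        nxt[2:] -= prev            # subtract mu(P_{n-2})
        prev, cur = cur, nxt
    return cur

coeffs = path_matching_poly(8)  # e.g. mu(P_2) = x^2 - 1, mu(P_3) = x^3 - 2x
roots = np.roots(coeffs)
```

For paths the matching polynomial coincides with the characteristic polynomial, so real-rootedness is visible directly; the force of the Heilmann-Lieb theorem is that it holds for every graph.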
We discuss the feasibility pump heuristic and we interpret it as a multi-start, global optimization algorithm that utilizes a fast local minimizer. The function that is minimized has many local minima, some of which correspond to feasible integral solutions. This interpretation suggests alternative ways of incorporating restarts one of which is the use of cutting planes to eliminate local optima that do not correspond to feasible integral solutions. Numerical experiments show encouraging results on standard test libraries.
We are looking at families of finite sets, more specifically subsets of [n]={1,2,...,n}. In particular, we are interested in antichains, that means no member of the family is contained in another one. In this talk we focus on antichains containing only sets of two different cardinalities, say k and l, and study the question of the smallest possible size of a maximal antichain (maximal in the sense that it is impossible to add any k-set or l-set without violating the antichain property). This can be nicely reformulated as a problem in extremal (hyper)graph theory, looking similar to the Turán problem on the maximum number of edges in a graph without a complete subgraph on l vertices. We sketch the solution for the case (k,l)=(2,3), conjecture an optimal construction for the case (k,l)=(2,4) and present some asymptotic bounds for this case.
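A brute-force checker for the objects under discussion (illustrative only, not the construction from the talk): a family of k-sets and l-sets of [n] is an antichain if no member contains another, and it is a maximal such antichain if no further k-set or l-set can be added.

```python
from itertools import combinations

def is_antichain(family):
    """No member of the family properly contains another."""
    return not any(a < b or b < a for a, b in combinations(family, 2))

def is_maximal_antichain(family, n, k, l):
    """Antichain to which no further k-set or l-set of [n] can be added."""
    if not is_antichain(family):
        return False
    ground = range(1, n + 1)
    candidates = [frozenset(c) for size in (k, l)
                  for c in combinations(ground, size)]
    for cand in candidates:
        if cand in family:
            continue
        if is_antichain(set(family) | {cand}):
            return False  # cand could be added, so family is not maximal
    return True

# All 2-subsets of [4]: every 3-set contains one of them, so this family
# is a maximal antichain for (k, l) = (2, 3) -- though not a smallest one
# in general, which is exactly the question the talk studies.
all_pairs = {frozenset(c) for c in combinations(range(1, 5), 2)}
```

Such a checker makes it easy to test candidate constructions for small n before attempting a proof of optimality.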
The continued fraction:
$${\cal R}_\eta(a,b) =\,\frac{{\bf \it a}}{\displaystyle \eta+\frac{\bf \it b^2}{\displaystyle \eta +\frac{4{\bf \it a}^2}{\displaystyle \eta+\frac{9 {\bf \it b}^2}{\displaystyle \eta+{}_{\ddots}}}}}$$
enjoys attractive algebraic properties such as a striking arithmetic-geometric mean relation and elegant links with elliptic-function theory. The fraction presents a computational challenge, which we could not resist.
The continued fraction:
$${\cal R}_\eta(a,b) =\,\frac{{\bf \it a}}{\displaystyle \eta+\frac{\bf \it b^2}{\displaystyle \eta +\frac{4{\bf \it a}^2}{\displaystyle \eta+\frac{9 {\bf \it b}^2}{\displaystyle \eta+{}_{\ddots}}}}}$$
enjoys attractive algebraic properties such as a striking arithmetic-geometric mean relation and elegant links with elliptic-function theory. The fraction presents a computational challenge, which we could not resist.
In Part II I will reprise what I need from Part I and focus on the dynamics.
We will give a brief overview of the history of Ramanujan and give samplings of areas such as partitions, partition congruences, ranks, modular forms, and mock theta functions. For example: a partition of a positive number $n$ is a non-increasing sequence of positive integers whose sum is $n$. There are five partitions of the number four: 4, 3+1, 2+2, 2+1+1, 1+1+1+1. If we let $p(n)$ be the number of partitions of $n$, it turns out that $p(5n+4)\equiv 0 \pmod{5}$. How does one explain this? Once the basics and context have been introduced, we will discuss new results with respect to mock theta functions and show how they relate to old and recent results.
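The congruence above is easy to observe numerically (standard dynamic programming, not Ramanujan's proof): count partitions p(n) and check that p(5n + 4) is divisible by 5.

```python
def partition_counts(limit):
    """p(0), ..., p(limit) via the coin-counting recurrence: add parts
    1, 2, ..., limit one at a time."""
    p = [0] * (limit + 1)
    p[0] = 1  # the empty partition of 0
    for part in range(1, limit + 1):
        for total in range(part, limit + 1):
            p[total] += p[total - part]
    return p

p = partition_counts(100)
# p[4] = 5, matching the five partitions of four listed above, and
# p[9] = 30, p[14] = 135, ... are all divisible by 5.
```

Explaining why this divisibility holds, rather than merely observing it, is where the modular-form machinery of the talk comes in.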
What is a {\em random subgroup} of a group, and who cares? In a (non-abelian) group-based cryptosystem, two parties (Alice and Bob) each choose a subgroup of some platform group "at random" -- each picks $k$ elements "at random" and takes the subgroup generated by their chosen elements.
But for some platform groups (like the braid groups, which were chosen first for being so complicated and difficult) a "random subgroup" is not so random after all. It turns out that if you pick $k$ elements of a braid group, they will almost always generate a {\em free group} with your $k$ elements as a free basis. And if Alice and Bob are just playing with free groups, it makes their secrets easy to attack.
Richard Thompson's group $F$ is an infinite, torsion-free group with many weird and cool properties, but the one I liked for this project is that it has {\em no} free subgroups (of rank $>1$) at all, so a random subgroup of $F$ could not be free -- so what would it be?
This is joint work with Sean Cleary (CUNY), Andrew Rechnitzer (UBC) and Jennifer Taback (Bowdoin).
In this talk we consider two-phase flow models in porous media as they occur in several applications such as oil production, pollutant transport or CO2 storage. After a general introduction, we focus on an enhanced model in which the capillary pressure is rate-dependent. We discuss the consequences of this term for heterogeneous materials with and without entry pressure. In the case of entry pressures the problem can be reformulated as an inequality constraint at the material interface. Suitable discretization schemes and solution algorithms are proposed and used in various numerical simulations.
The complete elliptic integrals of the first and second kinds (K(x) and E(x)) will be introduced and their key properties reviewed. Then, new and perhaps interesting results concerning moments and other integrals of K(x) and E(x) will be derived using elementary means. Diverse connections will be made, for instance with random walks and some experimental number theory.
See flyer [PDF]
Automorphism groups of locally finite trees form a significant class of examples of locally compact totally disconnected topological groups. In this talk I will discuss my honours research, which covered the various local properties of automorphism groups. I will provide methods of constructing such groups, in particular groups acting on regular trees, and discuss what conclusions we can make regarding the structure of these groups.
This talk will be an introduction to cycle decompositions of complete graphs in the context of Alspach's conjecture about the necessary and sufficient conditions for their existence. Several useful methods of construction based on algebra, graph products and modifying existing decompositions will be presented. The most up-to-date results on this problem will be mentioned and future directions of study may be suggested.
This talk will describe a number of different variational principles for self-adjoint eigenvalue problems that arose from considerations of convex and nonlinear analysis.
First, some unconstrained variational principles that are smooth analogues of the classical Rayleigh principles for eigenvalues of symmetric matrices will be described. In particular, the critical points are eigenvectors and their norms are related to the eigenvalues of the matrix. Moreover, the functions have a nice Morse theory, with the Morse indices describing the ordering of the eigenvectors.
Next, an unconstrained variational principle for eigenfunctions of elliptic operators will be illustrated for the classical Dirichlet Laplacian eigenproblem. The critical points of this problem have a Morse theory that plays a similar role to the classical Courant-Fischer-Weyl minimax theory.
Finally, I will describe certain Steklov eigenproblems and indicate how they are used to develop a spectral characterization of trace spaces of Sobolev functions.
Rational numbers can be represented in many different ways: as a fraction, as a Möbius function, as a 2x2 matrix, as a string of L's and R's, as a continued fraction, as a disc in the plane, or as a point in the lattice Z^2. Converting between the representations involves interesting questions about computation and geometry. The geometries that arise are hyperbolic, inversive, or projective.
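Two of these conversions can be sketched in a few lines: the Euclidean algorithm turns a fraction into a continued fraction, and multiplying the matrices [[a,1],[1,0]] turns the continued fraction back into a 2x2 matrix whose first column recovers the fraction. The function names below are mine, not from the talk:

```python
def to_cf(p, q):
    """Continued-fraction digits of p/q (p, q > 0) via the Euclidean algorithm."""
    digits = []
    while q:
        digits.append(p // q)
        p, q = q, p % q
    return digits

def cf_to_matrix(digits):
    """Multiply the 2x2 matrices [[a,1],[1,0]] for each digit a."""
    m = [[1, 0], [0, 1]]
    for a in digits:
        m = [[m[0][0] * a + m[0][1], m[0][0]],
             [m[1][0] * a + m[1][1], m[1][0]]]
    return m

assert to_cf(355, 113) == [3, 7, 16]
m = cf_to_matrix([3, 7, 16])
assert (m[0][0], m[1][0]) == (355, 113)     # first column recovers the fraction
```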
Various Vacation Scholars, HRD students and CARMA RAs will report on their work. This involves visualization and computation, practice and theory. Everyone is welcome to see what they have done and what they propose to do.
In this talk I will give some classical fixed point theorems and present some of their applications. The talk will be of a "Chalk and Talk" style and will include some elegant classical proofs. The down side of this is that the listener will be expected to have some familiarity with metric spaces, convexity and hopefully Zorn's Lemma.
Various Vacation Scholars, HRD students and CARMA RAs will report on their work. This involves visualization and computation, practice and theory. Everyone is welcome to see what they have done and what they propose to do.
We discuss a short-term revenue optimization problem that involves the optimal targeting of customers for a promotional sale in which a finite number of perishable items are offered on a last-minute offer. The goal is to select the subset of customers to whom the offer will be made available, maximizing the expected return. Each client replies with a certain probability and reports a specific value that depends on the customer type, so that the selected subset has to balance the risk of not selling all the items with the risk of assigning an item to a low value customer. Selecting all those clients with values above a certain optimal threshold may fail to achieve the maximal revenue. However, using a linear programming relaxation, we prove that such threshold strategies attain a constant factor of the optimal value. The achieved factor is ${1\over 2}$ when a single item is to be sold, and approaches 1 as the number of available items grows to infinity. Furthermore, for the single item case, we propose an upper bound based on an exponential size linear program that allows us to get a threshold strategy achieving at least ${2\over 3}$ of the optimal revenue. Computational experiments with random instances show a significantly better performance than the theoretical predictions.
Talk in [PDF]
We describe an integrated model for TCP/IP protocols with multipath routing. The model combines a Network Utility Maximization for rate control, with a Markovian Traffic Equilibrium for routing. This yields a cross-layer design in which sources minimize the expected cost of sending flows, while routers distribute the incoming flow among the outgoing links according to a discrete choice model. We prove the existence of a unique equilibrium state, which is characterized as the solution of an unconstrained strictly convex program of low dimension. A distributed algorithm for solving this optimization problem is proposed, with a brief discussion of how it can be implemented by adapting current Internet protocols.
Talk in [PDF]
Research, as an activity, is fundamentally collaborative in nature. Driven by the massive amounts of data that are produced by computational simulations and high-resolution scientific sensors, data-driven collaboration is of particular importance in the computational sciences. In this talk, I will discuss our experiences in designing, deploying, and operating a Canada-wide advanced collaboration infrastructure in support of the computational sciences. In particular, I will focus on the importance of data in such collaborations and discuss how current collaboration tools are sorely lacking in their support of data-centric collaboration.
McCullough-Miller space X = X(W) is a topological model for the outer automorphism group of a free product of groups W. We will discuss the question of just how good a model it is. In particular, we consider circumstances under which Aut(X) is precisely Out(W).
The talk will explain joint work with Yehuda Shalom showing that the only homomorphisms from certain arithmetic groups to totally disconnected, locally compact groups are the obvious, or naturally occurring, ones. For these groups, this extends the superrigidity theorem that G. Margulis proved for homomorphisms from high rank arithmetic groups to Lie groups. The theorems will be illustrated by referring to the groups $SL_3(\mathbb{Z})$, $SL_2(\mathbb{Z}[\sqrt{2}])$ and $SL_3(\mathbb{Q})$.
CARMA is currently engaged in several shared projects with the IRMACS Centre and the OCANA Group at UBC-O, both in British Columbia, Canada. This workshop will be an opportunity to learn about the IRMACS Centre and to experience the issues in collaborating for research and teaching across the Pacific.
This will be followed by discussion and illustrations of collaboration, technology, teaching and funding etc.
Cross Pacific Collaboration pages at irmacs.
TBA
This is a discrete mathematics instructional seminar commencing 24 February, meeting on subsequent Thursdays from 3:00-4:00 p.m. The seminar will focus on "classical" papers and portions of books.
"In this talk I'll exhibit the interplay between Selberg integrals (interpreted broadly) and random matrix theory. Here an important role is played by the basic matrix operations of a random corank 1 projection (this reduces the number of nonzero eigenvalues by one) and bordering (this increases the number of eigenvalues by one)."
Concave utility functions and convex risk measures play crucial roles in economic and financial problems. The use of concave utility functions can at least be traced back to Bernoulli when he posed and solved the St. Petersburg wager problem. They were the prevailing way to characterize rational market participants for a long period of time, until the 1970s when Black and Scholes introduced the replicating portfolio pricing method and Cox and Ross developed the risk neutral measure pricing formula. For the past several decades the `new paradigm' became the mainstream. We will show that, in fact, the `new paradigm' is a special case of the traditional utility maximization and its dual problem. Moreover, the convex analysis perspective also highlights that overlooking sensitivity analysis in the `new paradigm' is one of the main reasons that led to the recent financial crisis. It is perhaps time again for bankers to learn convex analysis.
The talk will be divided into two parts. In the first part we lay out a discrete model for financial markets. We explain the concept of arbitrage and the no arbitrage principle. This is followed by the important fundamental theorem of asset pricing, in which the no arbitrage condition is characterized by the existence of martingale (risk neutral) measures. The proof of this gives us a first taste of the importance of convex analysis tools. We then discuss how to use utility functions and risk measures to characterize the preference of market agents. The second part of the talk focuses on the issue of pricing financial derivatives. We use simple models to illustrate the idea of the prevailing Black-Scholes replicating portfolio pricing method and the related Cox-Ross risk-neutral pricing method for financial derivatives. Then, we show that the replicating portfolio pricing method is a special case of portfolio optimization and the risk neutral measure is a natural by-product of solving the dual problem. Taking the convex analysis perspective of these methods h
Professor Jonathan Borwein shares with us his passion for \(\pi\), taking us on a journey through its rich history. Professor Borwein begins with approximations of \(\pi\) by ancient cultures, and leads us through the work of Archimedes, Newton and others to the calculation of \(\pi\) in today's age of computers.
Professor Borwein is currently Laureate Professor in the School of Mathematical and Physical Sciences at the University of Newcastle. His research interests are broad, spanning pure, applied and computational mathematics and high-performance computing. He is also Chair of the Scientific Advisory Committee at the Australian Mathematical Sciences Institute (AMSI).
This talk will be broadcast from the Access Grid room V206 at the University of Newcastle, and will link to the West coast of Canada.
For more information visit AMSI's Pi Day website or read Jon Borwein's talk.
Co-author: Thomas Prellberg (Queen Mary, University of London)
Various kinds of paths on lattices are often used to model polymers. We describe some partially directed path models for which we find the exact generating functions, using instances of the `kernel method'. In particular, motivated by recent studies of DNA unzipping, we find and analyze the generating function for pairs of non-crossing partially directed paths with contact interactions. Although the expressions involved in the two-path problem are unwieldy and tax the capacities of Maple and Mathematica, we are still able to gain an understanding of the singularities of the generating function which govern the behaviour of the model.
The Mahler measure of a polynomial of several variables has been a subject of much study over the past thirty years. Very few closed forms are proven but more are conjectured. We provide systematic evaluations of various higher and multiple Mahler measures using log-sine integrals. We also explore related generating functions for the log-sine integrals. This work makes frequent use of “The Handbook” and involves extensive symbolic computations.
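In the one-variable case the (logarithmic) Mahler measure $m(P)=\int_0^1 \log|P(e^{2\pi i\theta})|\,d\theta$ has closed forms by Jensen's formula, which a crude midpoint-rule quadrature can confirm. This is a rough numerical sketch of the definition only, not the talk's log-sine machinery:

```python
import cmath
import math

def mahler_1d(poly, n=20000):
    """Midpoint-rule approximation of m(P) = integral of log|P(e^{2*pi*i*theta})|
    over theta in [0,1]; `poly` maps a complex number z to P(z)."""
    total = 0.0
    for k in range(n):
        z = cmath.exp(2j * math.pi * (k + 0.5) / n)
        total += math.log(abs(poly(z)))
    return total / n

# Jensen's formula: m(x + 2) = log 2, while m(x + 1) = 0 (root on the unit circle).
assert abs(mahler_1d(lambda z: z + 2) - math.log(2)) < 1e-6
assert abs(mahler_1d(lambda z: z + 1)) < 1e-3
```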
In the 80's R. Grigorchuk found a finitely generated group such that the number of elements that can be written as a product of at most \(n\) generators grows faster than any polynomial in \(n\), but slower than any exponential in \(n\), so-called "intermediate" growth.
It can be described as a group of automorphisms of an infinite rooted binary tree, or in terms of abstract computing devices called "non-initial finite transducers".
In this talk I will describe what some of these short words/products of generators look like, and speculate on the asymptotic growth rate of all short words of length \(n\).
This is joint unpublished work with Mauricio Gutierrez (Tufts) and Zoran Sunic (Texas A&M).
I will continue by showing relationships between Mahler measures and logsine integrals [PDF]. This should be comprehensible whether or not you heard Part 1.
This talk will present recent theoretical and experimental results contrasting quantum randomness with pseudo-randomness.
We shall conclude the discussion of some of the mathematics surrounding Birkhoff's Theorem about doubly stochastic matrices.
The stochastic Loewner evolution (SLE) is a one-parameter family of random growth processes in the complex plane introduced by the late Oded Schramm in 1999 which is predicted to describe the scaling limit of a variety of statistical physics models. Recently a number of rigorous results about such scaling limits have been established; in fact, Wendelin Werner was awarded the Fields Medal in 2006 for "his contributions to the development of stochastic Loewner evolution, the geometry of two-dimensional Brownian motion, and conformal field theory" and Stas Smirnov was awarded the Fields Medal in 2010 "for the proof of conformal invariance of percolation and the planar Ising model in statistical physics." In this talk, I will introduce some of these models including the Ising model, self-avoiding walk, loop-erased random walk, and percolation. I will then discuss SLE, describe some of its basic properties, and touch on the results of Werner and Smirnov as well as some of the major open problems in the area. This talk will be "colloquium style" and is intended for a general mathematics audience.
In my talk I will review some recent progress on evaluations of Mahler measures via hypergeometric series and Dirichlet L-series. I will provide more details for the case of the Mahler measure of $1+x+1/x+y+1/y$, whose evaluation was observed by C. Deninger and conjectured by D. Boyd (1997). The main ingredients are relations between modular forms and hypergeometric series in the spirit of Ramanujan. The talk is based on joint work with Mat Rogers.
The demiclosedness principle is one of the key tools in nonlinear analysis and fixed point theory. In this talk, this principle is extended and made more flexible by two mutually orthogonal affine subspaces. Versions for finitely many (firmly) nonexpansive operators are presented. As an application, a simple proof of the weak convergence of the Douglas-Rachford splitting algorithm is provided.
This week we shall start the classical paper by Jack Edmonds and D. R. Fulkerson on partitioning matroids.
On Wednesday afternoon, we will be visited by Dr Stephen Hardy and Dr Kieran Larkin from Canon Information Systems Research Australia, Sydney. Drs Larkin and Hardy will be here to explore research opportunities with University of Newcastle researchers. To familiarise them with what we do and to help us understand what they do, there will be three short talks, giving information on the functions and activities of the CDSC, CARMA and Canon's group of 45 researchers. All are welcome to participate.
You are invited to celebrate the life and work of Paul Erdős!
NUMS and CARMA are holding a "Meet Paul Erdős Night" on Wednesday the 20th April starting at 4pm in V07 and we'd love you to come. You can view a poster with information about the night here.
Please RSVP by next Friday 15th April so that we can cater appropriately. To RSVP, reply to: nums@newcastle.edu.au
The Chaney-Schaefer $\ell$-tensor product $E\tilde{\otimes}_{\ell}Y$ of a Banach lattice $E$ and a Banach space $Y$ may be viewed as an extension of the Bochner space $L^p(\mu,Y)$ $(1\leq p < \infty)$. We consider an extension of a classical martingale characterization of the Radon-Nikodým property in $L^p(\mu,Y)$, for $1 < p < \infty$, to $E\tilde{\otimes}_{\ell}Y$. We consider consequences of this extension and, time permitting, use it to represent set-valued measures of risk defined on Banach lattice-valued Orlicz hearts.
We introduce, assuming only a modest background in one variable complex analysis, the rudiments of infinite dimensional holomorphy. Approaches and some answers to elementary questions arising from considering monomial expansions in different settings and spaces are used to sample the subject.
We meet this Thursday at the usual time when I will show you a nice application of the Edmonds-Fulkerson matroid partition theorem, namely, I'll prove that Paley graphs have Hamilton decompositions (an unpublished result).
The elements of a free group are naturally considered to be reduced "words" in a certain alphabet. In this context, a palindrome is a group element which reads the same from left-to-right and right-to-left. Certain primitive elements, elements that can be part of a basis for the free group, are palindromes. We discuss these elements, and related automorphisms.
We resolve some recent and fascinating conjectural formulae for $1/\pi$ involving the Legendre polynomials. Our main tools are hypergeometric series and modular forms, though no prior knowledge of modular forms is required for this talk. Using these we are able to prove some general results regarding generating functions of Legendre polynomials and draw some unexpected number theoretic connections. This is joint work with Heng Huat Chan and Wadim Zudilin. The authors dedicate this paper to Jon Borwein's 60th birthday.
In the late seventies, Bill Thurston defined a semi-norm on the homology of a 3-dimensional manifold which lends itself to the study of manifolds which fibre over the circle. This led him to formulate the Virtual Fibration Conjecture, which is fairly inscrutable and implies almost all major results and conjectures in the field. Nevertheless, Thurston gave the conjecture "a definite chance for a positive answer" and much research is currently devoted to it. I will describe the Thurston norm, its main properties and applications, as well as its relationship to McMullen’s Alexander norm and the geometric invariant for groups due to Bieri, Neumann and Strebel.
The aim of this talk is to demonstrate how cyclic division algebras and their orders can be used to enhance wireless communications. This is done by embedding the information bits to be transmitted into smart algebraic structures, such as matrix representations of order lattices. We will recall the essential algebraic definitions and structures, and further familiarize the audience with the transmission model of fading channels. An example application of major current interest is digital video broadcasting. Examples suitable to this application will be provided.
The Landau-Lifshitz-Gilbert equation (LLGE) comes from a model for the dynamics of the magnetization of a ferromagnetic material. In this talk we will first describe existing finite element methods for numerical solution of the deterministic and stochastic LLGEs. We will then present another finite element solution to the stochastic LLGE. This is a work in progress jointly with B. Goldys and K-N Le.
Let $T$ be a topological space (a compact subspace of ${\mathbb R^m}$, say) and let $C(T)$ be the space of real continuous functions on $T$, equipped with the uniform norm: $||f|| = \max_{t\in T}|f(t)|$ for all $f \in C(T)$. Let $G$ be a finite-dimensional linear subspace of $C(T)$. If $f \in C(T)$ then $$d(f,G) = \inf\{||f-g|| : g \in G\}$$ is the distance of $f$ from $G$, and $$P_G(f) = \{g \in G : ||f-g|| = d(f,G)\}$$ is the set of best approximations to $f$ from $G$. Then $$P_G : C(T) \rightarrow P(G)$$ is the set-valued metric projection of $C(T)$ onto $G$. In the 1850s P. L. Chebyshev considered $T = [a, b]$ and $G$ the space of polynomials of degree $\leq n - 1$. Our concern is with possible properties of $P_G$. The historical development, beginning with Chebyshev, Haar (1918) and Mairhuber (1956), and the present state of knowledge will be outlined. New results will demonstrate that the story is still incomplete.
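The simplest instance of the metric projection $P_G$ takes $G$ to be the constant functions: the best uniform constant approximation is the midpoint of the range of $f$, with error half the oscillation. This toy numerical sketch (a grid standing in for $T$) is mine, not from the talk:

```python
def best_constant(values):
    """Chebyshev best approximation of a sampled function by a constant:
    returns (best constant c, distance d(f, G)) in the uniform norm."""
    lo, hi = min(values), max(values)
    return (lo + hi) / 2, (hi - lo) / 2

xs = [i / 1000 for i in range(1001)]        # grid on T = [0, 1]
f = [x * x for x in xs]                     # f(x) = x^2
c, dist = best_constant(f)
assert abs(c - 0.5) < 1e-9 and abs(dist - 0.5) < 1e-9
# Any other constant does worse in the uniform norm:
assert max(abs(v - 0.4) for v in f) > dist
```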
High-dimensional integrals come up in a number of applications like statistics, physics and financial mathematics. If explicit solutions are not known, one has to resort to approximative methods. In this talk we will discuss equal-weight quadrature rules called quasi-Monte Carlo. These rules are defined over the unit cube $[0,1]^s$ with carefully chosen quadrature points. The quadrature points can be obtained using number-theoretic and algebraic methods and are designed to have low discrepancy, where discrepancy is a measure of how uniformly the quadrature points are distributed in $[0,1]^s$. In the one-dimensional case, the discrepancy coincides with the Kolmogorov-Smirnov distance between the uniform distribution and the empirical distribution of the quadrature points and has also been investigated in a paper by Weyl published in 1916.
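A small sketch of one classical low-discrepancy construction, the base-2 van der Corput sequence, together with the exact one-dimensional star discrepancy; function names and the tolerance are illustrative only:

```python
def van_der_corput(n, base=2):
    """n-th point of the base-b van der Corput sequence (digit reversal)."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def star_discrepancy_1d(points):
    """Exact one-dimensional star discrepancy of a point set in [0, 1)."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - xs[i], xs[i] - i / n) for i in range(n))

pts = [van_der_corput(i) for i in range(1, 257)]
d = star_discrepancy_1d(pts)
# For N = 256 points the discrepancy is O((log N)/N), far below the
# O(N^{-1/2}) scale typical of random (Monte Carlo) points.
assert 0 < d < 0.01
```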
The talk will focus on recent results or work in progress, with some open problems which span both Combinatorial Design and Sperner Theory. The work focuses upon the duality between antichains and completely separating systems. An antichain is a collection $\cal A$ of subsets of $[n]=\{1,...,n\}$ such that for any distinct $A,B\in\cal A$, $A$ is not a subset of $B$. A $k$-regular antichain on $[m]$ is an antichain in which each element of $[m]$ occurs exactly $k$ times. A CSS is the dual of an antichain. An $(n,k)$-CSS $\cal C$ is a collection of blocks of size $k$ on $[n]$, such that for each distinct $a,b\in [n]$ there are sets $A,B \in \cal C$ with $a \in A-B$ and $b \in B-A$. The notions of $k$-regular antichains of size $n$ on $[m]$ and $(n,k)$-CSSs in $m$ blocks are dual concepts. Natural questions to be considered include: Does a $k$-regular antichain of size $n$ exist on $[m]$? For $k
The concept of orthogonal double covers (ODC) of graphs originates in questions concerning database constraints and problems in statistical combinatorics and in design theory. An ODC of the complete graph $K_n$ by a graph $G$ is a collection of $n$ subgraphs of $K_n$, all isomorphic to $G$, such that any two of them share exactly one edge, and every edge of $K_n$ occurs in exactly two of the subgraphs. We survey some of the main results and conjectures in the area as well as constructions, generalizations and modifications of ODC.
This paper studies combinations of the Riemann zeta function, based on one defined by P.R. Taylor, and shown by him to have all its zeros on the critical line. With a rescaled complex argument, this is denoted here by ${\cal T}_-(s)$, and is considered together with a counterpart function ${\cal T}_+(s)$, symmetric rather than antisymmetric about the critical line. We prove by a graphical argument that ${\cal T}_+(s)$ has all its zeros on the critical line, and that the zeros of both functions are all of first order. We also establish a link between the zeros of ${\cal T}_-(s)$ and of ${\cal T}_+(s)$ with zeros of the Riemann zeta function $\zeta(2s-1)$, and between the distribution functions of the zeros of the three functions.
This talk concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of the Kantorovich type. Our convergence results hold with no semismoothness and Lipschitzian assumptions, which is illustrated by examples. The algorithm and main results obtained in the paper are compared with well-recognized semismooth and $B$-differentiable versions of Newton's method for nonsmooth Lipschitzian equations.
One of the most effective avenues in recent experimental mathematics research is the computation of definite integrals to high precision, followed by the identification of the resulting numerical values as compact analytic formulas involving well-known constants and functions. In this talk we summarize several applications of this methodology in the realm of applied mathematics and mathematical physics, in particular Ising theory, "box integrals", and the study of random walks.
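A toy version of this compute-then-identify loop: numerically evaluate $\int_0^1 \log x\,\log(1-x)\,dx$ and match the result against the known closed form $2-\pi^2/6$. The talk's examples are vastly harder; this sketch only illustrates the workflow:

```python
import math

def midpoint(f, n=100000):
    """Midpoint-rule approximation of the integral of f over [0, 1]."""
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

# Step 1: compute the integral to modest precision.
val = midpoint(lambda x: math.log(x) * math.log(1 - x))
# Step 2: "identify" the value against a candidate closed form.
target = 2 - math.pi ** 2 / 6              # classical evaluation of this integral
assert abs(val - target) < 1e-4
```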
We will investigate the existence of common fixed points for pointwise Lipschitzian semigroups of nonlinear mappings $T_t : C \to C$, where $C$ is a bounded, closed, convex subset of a uniformly convex Banach space $X$; i.e. a family such that $T_0(x) = x$ and $T_{s+t}(x) = T_s(T_t(x))$, where each $T_t$ is pointwise Lipschitzian, i.e. there exists a family of functions $a_t : C \to [0,\infty)$ such that $\|T_t(x)-T_t(y)\| \le a_t(x)\|x-y\|$ for $x, y \in C$. We will also demonstrate how the asymptotic aspect of the pointwise Lipschitzian semigroups can be expressed in terms of the respective Fréchet derivatives. We will discuss some questions related to the weak and strong convergence of certain iterative algorithms for the construction of the stationary and periodic points for such semigroups.
These talks are aimed at extending the undergraduates' field of vision, or increasing their level of exposure to interesting ideas in mathematics. We try to present topics that are important but not covered (to our knowledge) in undergraduate coursework. Because of the brevity and intended audience of the talks, the speaker generally only scratches the surface, concentrating on the most interesting aspects of the topic.
In my talk I will try to give an overview of the ideas behind (still recent) achievements on arithmetic properties of the numbers $\zeta(s)=\sum_{n=1}^\infty n^{-s}$ for integral $s\ge2$, with more emphasis on odd $s$. The basic ingredients of the proofs are generalized hypergeometric functions and linear independence criteria. I will also address some "most recent" results and observations in the subject, as well as connections with other problems in number theory.
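For contrast with the slowly convergent defining series, Apéry's accelerated series for $\zeta(3)$ (the starting point of its irrationality proof) can be checked numerically in a few lines:

```python
import math

def zeta3_direct(n=200000):
    """Defining series sum 1/k^3; the tail error is roughly 1/(2n^2)."""
    return sum(1.0 / k ** 3 for k in range(1, n + 1))

def zeta3_apery(n=30):
    """Apery's fast series: zeta(3) = (5/2) sum (-1)^(k-1) / (k^3 C(2k, k))."""
    return 2.5 * sum((-1) ** (k - 1) / (k ** 3 * math.comb(2 * k, k))
                     for k in range(1, n + 1))

# Thirty terms of Apery's series already give full double precision.
assert abs(zeta3_apery() - 1.2020569031595942) < 1e-12
assert abs(zeta3_apery() - zeta3_direct()) < 1e-8
```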
This paper considers designing permission sets to influence the project selection decision made by a better-informed agent. The project characteristics are two-dimensional. The principal can verify the characteristics of the project selected by the agent. However, the principal cannot observe the number and characteristics of those projects that the agent could, but does not, propose. The payoffs to the agent and the principal are different. Using the calculus of variations, we solve for the optimal permission set, which can be characterized by a threshold function. We obtain comparative statics on the preference alignment and the expected number of projects available. When outcome-based incentives are feasible, we discuss the use of financial inducement to maximize social welfare. We also extend our analysis to two cases: 1) when one of the project characteristics is unobservable; and 2) when there are multiple agents with private preferences and the principal must establish a universal permission set.
Key words: calculus of variations, optimal permission set, project management.
The most important open problem in Monotone Operator Theory concerns the maximal monotonicity of the sum of two maximally monotone operators provided that Rockafellar's constraint qualification holds. In this talk, we prove the maximal monotonicity of the sum of a maximally monotone linear relation and the subdifferential of a proper lower semicontinuous convex function satisfying Rockafellar's constraint qualification. Moreover, we show that this sum operator is of type (FPV).
Infinite index subgroups of integer matrix groups like $SL(n,Z)$ which are Zariski dense in $SL(n)$ arise in geometric diophantine problems (e.g., integral Apollonian packings) as well as monodromy groups associated with families of varieties. One of the key features needed when applying such groups to number theoretic problems is that the congruence graphs associated with these groups are "expanders". We will introduce and explain these ideas and review some recent developments especially those connected with the affine sieve.
It is shown that, for maximally monotone linear relations defined on a general Banach space, the monotonicities of dense type, of negative-infimum type, and of Fitzpatrick-Phelps type are the same and equivalent to monotonicity of the adjoint. This result also provides affirmative answers to two problems: one posed by Phelps and Simons, and the other by Simons.
We continue looking at the 1960 Hoffman-Singleton paper about Moore graphs and related topics.
Given a mixed-up sequence of distinct numbers, say 4 2 1 5 7 3 6, can you pass them through an infinite stack (first-in-last-out) from right to left, and put them in order?
Pushing 4 and then 2 onto the stack lets us output 1 and then 2 in order, but eventually 4 gets buried under 5 and 7 before its turn comes... umm. This talk will be about this problem: when can you do it with one stack, with two stacks in series, with an infinite and a finite-capacity stack in series, and so on? How many permutations of 1, 2, ..., n are there that can be sorted? The answer will lie in the "forbidden subpatterns" of permutations, and it turns out there is a whole theory of this, which I will try to describe.
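As an illustrative sketch (my own, not part of the talk), the one-stack case can be decided greedily: push elements as they arrive and pop whenever the stack top is the next value needed in sorted order; for a single stack this greedy strategy succeeds exactly when any strategy does.

```python
from itertools import permutations

def stack_sortable(seq):
    """Greedy single-stack sortability check for a permutation of 1..n:
    push left to right, pop whenever the stack top is the next value
    needed in the sorted output. Sortable iff the stack empties."""
    stack, need = [], 1
    for x in seq:
        stack.append(x)
        while stack and stack[-1] == need:
            stack.pop()
            need += 1
    return not stack

# The example from the talk gets stuck, as the abstract suggests:
# stack_sortable([4, 2, 1, 5, 7, 3, 6]) -> False
# The sortable permutations of 1..n are counted by the Catalan numbers,
# e.g. 14 of the 24 permutations of 1..4 are sortable.
```

The "forbidden subpattern" view mentioned in the abstract corresponds to this check: the sortable permutations are exactly those avoiding a certain three-element pattern.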
20-minute presentation followed by 10 minutes of questions and discussion.
We introduce the concept and several examples of q-analogs. A particular focus is on the q-binomial coefficients, which are characterized in a variety of ways. We recall classical binomial congruences and review their known q-analogs. Finally, we establish a full q-analog of Ljunggren's congruence, which states that $\binom{ap}{bp} \equiv \binom{a}{b} \pmod{p^3}$.
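As a small computational aside (my illustration, not from the talk), the Gaussian (q-binomial) coefficient can be built from the Pascal-type recursion $\binom{n}{k}_q = \binom{n-1}{k-1}_q + q^k\binom{n-1}{k}_q$, and the classical Ljunggren congruence can be spot-checked numerically:

```python
from math import comb

def q_binomial(n, k):
    """Coefficient list (low degree first) of the Gaussian binomial
    [n choose k]_q, via [n,k]_q = [n-1,k-1]_q + q^k * [n-1,k]_q."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = q_binomial(n - 1, k - 1)           # [n-1, k-1]_q
    b = [0] * k + q_binomial(n - 1, k)     # q^k * [n-1, k]_q
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

# Setting q = 1 recovers the ordinary binomial coefficient:
# sum(q_binomial(5, 2)) == comb(5, 2) == 10
# Classical Ljunggren congruence for primes p >= 5:
# (comb(2 * 7, 1 * 7) - comb(2, 1)) % 7**3 == 0
```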
This week we shall conclude the proof of the uniqueness of the Hoffman-Singleton graph.
This Thursday is your chance to start anew! I shall be starting a presentation of the best work that has been done on Lovasz's famous 1979 problem (now a conjecture) stating that every connected vertex-transitive graph has a Hamilton path. This is good stuff and requires minimal background.
Basically, a function is Lipschitz continuous if it has a bounded slope. This notion can be extended to set-valued maps in different ways. We will mainly focus on one of them: the so-called Aubin (or Lipschitz-like) property. We will employ this property to analyze the iterates generated by an iterative method known as the proximal point algorithm. Specifically, we consider a generalized version of this algorithm for solving a perturbed inclusion $$y \in T(x),$$ where $y$ is a perturbation element near 0 and $T$ is a set-valued mapping. We will analyze the behavior of the convergent iterates generated by the algorithm and we will show that they inherit the regularity properties of $T$, and vice versa. We analyze the cases when the mapping $T$ is metrically regular (the inverse map has the Aubin property) and strongly regular (the inverse is locally a Lipschitz function). We will not assume any type of monotonicity.
We resolve and further study a sinc integral evaluation, first posed in The American Mathematical Monthly in [1967, p. 1015], which was solved in [1968, p. 914] and withdrawn in [1970, p. 657]. After a short introduction to the problem and its history, we give a general evaluation which we make entirely explicit in the case of the product of three sinc functions. Finally, we exhibit some general structure of the integrals in question.
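As a hedged computational aside (my own, not the paper's code), the base case of such sinc evaluations can be confirmed symbolically; the interesting behaviour the talk addresses arises when more sinc factors are multiplied in.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# The simplest sinc integral, in closed form:
#   integral_0^oo sin(x)/x dx = pi/2
I1 = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))
print(I1)  # pi/2
```

Products of several sinc functions famously keep this value for a while and then deviate, which is part of the structure the paper makes explicit.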
The topic is Lovasz's Conjecture that all connected vertex-transitive graphs have Hamilton paths.
We are interested in local geometrical properties of a Banach space which are preserved under natural embeddings in all even dual spaces. An example of this behaviour which we generalise is:
if the norm of the space $X$ is Fréchet differentiable at $x \in S(X)$ then the norm of the second dual $X^{**}$ is Fréchet differentiable at $\hat{x}\in S(X^{**})$, the norm of $X^{****}$ is Fréchet differentiable at $\hat{\hat{x}} \in S(X^{****})$, and so on.
The results come from a study of Hausdorff upper semicontinuity properties of the duality mapping characterising general differentiability conditions satisfied by the norm.
One of the most intriguing problems in metric fixed point theory is whether we can find closed, convex and unbounded subsets of Banach spaces with the fixed point property. A celebrated theorem due to W.O. Ray in 1980 states that this cannot happen if the space is Hilbert. This problem was so poorly understood that two antagonistic questions were raised: the first asked whether this property characterizes Hilbert spaces within the class of Banach spaces, while the second asked whether it characterizes any space at all, that is, whether Ray's theorem holds in every Banach space. The second problem is still open, but the first has recently been answered in the negative by T. Domínguez Benavides, who showed that Ray's theorem also holds true in the classical space of real sequences $c_0$.
The situation seems, however, to be completely different for CAT(0) spaces. Although Hilbert spaces are a particular class of CAT(0) spaces, the literature contains various examples of CAT(0) spaces, including $\mathbb{R}$-trees, in which we can find closed, convex and unbounded subsets with the fixed point property. In this talk we will look closely at this problem. First, we will introduce a geometrical condition, inspired by the Banach-Steinhaus theorem for CAT(0) spaces, under which we can still assure that Ray's theorem holds true. We will provide different examples of CAT(0) spaces with this condition, but we will notice that all these examples are of a very strong Hilbertian nature. Then we will look at $\delta$-hyperbolic geodesic spaces. Viewed from very far away, these spaces, if unbounded, resemble $\mathbb{R}$-trees, so it is natural to try to find convex, closed and unbounded subsets with the fixed point property in these spaces. We will present some partial results in this direction.
This talk is based on joint work with Bożena Piątek.
This week the discrete mathematics instructional seminar will continue with a consideration of the Lovasz problem. This is the last meeting of the seminar until 13 October.
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). If the quadrilateral is given with its four vertices $A$, $B$, $C$, and $D$ in order, then the theorem states that: $$|AC| \cdot |BD| = |AB| \cdot |CD| + |AD| \cdot |BC|.$$ Furthermore, it is well known that in every Euclidean (or Hilbert) space $H$ we have that $$||x - y|| \cdot ||z - w|| \leq ||x - z|| \cdot ||y - w|| + ||z - y|| \cdot ||x - w||$$ for any four points $w, x, y, z \in H$. This is the classical Ptolemy inequality and it is well-known that it characterizes the inner product spaces among all normed spaces. A Ptolemy metric space is any metric space for which the same inequality holds, replacing norms by distances, for any four points. CAT(0) spaces are geodesic spaces of global nonpositive curvature in the sense of Gromov. Hilbert spaces are CAT(0) spaces and, even more, CAT(0) spaces have many common properties with Hilbert spaces. In particular, although a Ptolemy geodesic metric space need not be CAT(0), any CAT(0) space is a Ptolemy metric space. In this expository talk we will show some recent progress about the connection between Ptolemy metric spaces and CAT(0) spaces.
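The Ptolemy inequality stated above is easy to probe numerically; the following is a small illustrative sketch of my own (not from the talk) checking it for random quadruples in $\mathbb{R}^3$:

```python
import math
import random

def dist(u, v):
    """Euclidean distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def ptolemy_holds(x, y, z, w, tol=1e-12):
    """Ptolemy inequality: |x-y||z-w| <= |x-z||y-w| + |z-y||x-w|."""
    return dist(x, y) * dist(z, w) <= dist(x, z) * dist(y, w) + dist(z, y) * dist(x, w) + tol

random.seed(0)
rand_pt = lambda: tuple(random.uniform(-1, 1) for _ in range(3))
assert all(ptolemy_holds(rand_pt(), rand_pt(), rand_pt(), rand_pt()) for _ in range(1000))
```

For four concyclic points taken in order, the inequality becomes Ptolemy's equality, matching the classical theorem quoted above.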
In this seminar talk we will recall results on the Drop property in Banach spaces and study it in geodesic spaces. In particular, we will show a variant of Daneš' Drop Theorem in Busemann convex spaces and derive well-posedness results about minimization of convex functions. This talk is based on joint work with Adriana Nicolae.
The choice of a plan for radiotherapy treatment for an individual cancer patient requires the careful trade-off between the goals of delivering a sufficiently high radiation dose to the tumour and avoiding irradiation of critical organs and normal tissue. This problem can be formulated as a multi-objective linear programme (MOLP). In this talk we present a method to compute a finite set of non-dominated points that can be proven to uniformly cover the complete non-dominated set of an MOLP (a finite representation). This method generalises and improves upon two existing methods from the literature. We apply this method to the radiotherapy treatment planning problem, showing some results for clinical cases. We illustrate how the method can be used to support clinicians' decision making when selecting a treatment plan. The treatment planner only needs to specify a threshold for recognising two treatment plans as different and is able to interactively navigate through the representative set without the trial-and-error process often used in practice today.
The talk will begin with a reminder of what a triangulated category is, the context in which they arose, and why we care about them. Then we will discuss the theory of compactly generated triangulated categories, again illustrating the applications. This theory is old and well understood. Finally we will come to well generated categories, where several open problems remain very mysterious.
Noncommutative geometry is based on fairly sophisticated methods: noncommutative C*-algebras are called noncommutative topological spaces, noncommutative von Neumann algebras are noncommutative measure spaces, and Hopf algebras and homological invariants describe the geometry. Standard topology, on the other hand, is based on naive intuitions about discontinuity: a continuous function is one whose graph does not have any gaps, and cutting and gluing are used to analyse and reconstruct geometrical objects. This intuition does not carry over to the noncommutative theory, and the dictum from quantum mechanics that it does not make sense any more to think about point particles perhaps explains a lack of expectation that it should. The talk will describe an attempt to make this transfer by computing the polar decompositions of certain operators in the group C*-algebras of free groups. The computation involves some identities and evaluations of integrals that might interest the audience, and the polar decomposition may be interpreted as a noncommutative version of the double angle formula familiar from high school geometry.
Mean periodic functions of a single real variable were an innovation of the mid-20th century. Although not as well known as almost periodic functions, they have some nice properties, with applications to certain mean value theorems.
We shall start looking at Dave Witte's (now Dave Morris) proof that connected Cayley digraphs of prime power order have Hamilton directed cycles.
This week we shall conclude our look at the paper by Dave Witte on Hamilton directed cycles in Cayley digraphs of prime power order.
I shall describe highlights of my two decades of experience with Advanced Collaborative Environments (ACEs) in Canada and Australia, running shared seminars, conferences, resources and courses over the internet. I shall also describe the AMSI Virtual Lab proposal which has just been submitted to NeCTAR. The slides for much of this talk are at http://www.carma.newcastle.edu.au/jon/aces11.pdf.
We interpret the Hamiltonian Cycle Problem (HCP) as an optimisation problem with the determinant objective function, naturally arising from the embedding of HCP into a Markov decision process. We also exhibit a characteristic structure of the class of all cubic graphs that stems from the spectral properties of their adjacency matrices, and provide an analytic explanation of this structure.
We look at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271
The abstract of the paper says "We describe a simple and yet surprisingly powerful probabilistic technique which shows how to find in a dense graph a large subset of vertices in which all (or almost all) small subsets have many common neighbors. Recently this technique has had several striking applications to Extremal Graph Theory, Ramsey Theory, Additive Combinatorics, and Combinatorial Geometry. In this survey we discuss some of them."
My plan for the seminar is to start with a quick recap of the classics of extremal (hyper)graph theory (i.e. Turan, Ramsey, Ramsey-Turan), then look at some simple examples for the probabilistic method in action, and finally come to the striking applications mentioned in the quoted abstract.
Only elementary probability is required.
The arrival of online casinos in 1996 brought games that you would find at land-based casinos to the computer screens of gamblers all over the world. A major benefit of online casinos lies in automating play across several computers for favourable games, as this has the potential to generate a significant amount of profit. This article applies this concept to online progressive video poker games. By establishing a set of characteristics for comparing different games, analyses are carried out to identify which game should be the starting point for building an automated system. Bankroll management and playing strategies are also analyzed in this article, and are shown to be important components if profiting from online gambling is to be a long-term business.
Within the topic of model-based forecasting with exponential smoothing, this paper seeks to contribute to the understanding of the property of certain stochastic processes to converge almost surely to a constant. It provides a critical discussion of the related views and ideas found in the recent forecasting literature and aims at elucidating the present confusion by review and study of the classical and less known theorems of probability theory and random processes. The paper then argues that a useful role of exponential smoothing for modelling and forecasting sequential count data is limited and methods that are either not based on exponential smoothing or use exponential smoothing in a more flexible way are worthy of exploration. An approach to forecasting such data based on applying exponential smoothing to the probabilities of each count outcome is thus introduced and its merits are discussed in the context of pertinent statistical literature.
In this talk we consider a problem of scheduling several jobs on multiple machines subject to precedence and resource constraints. Each job has a due date, and the objective is to minimize the cumulative weighted tardiness across all jobs. We investigate how to efficiently obtain heuristic solutions on multi-core computers using an Ant Colony Systems (ACS) framework for the optimisation. The talk will discuss some of the challenges that arise in designing a multi-threaded heuristic and provide computational results for some alternative algorithm variants. The results show that the ACS heuristic is more effective, particularly for large problem instances, than other methods developed to date.
The relation between mechanics and optimization goes back at least to Euler and was further strengthened by the Lagrangian and Hamiltonian formulations of Newtonian mechanics. Since then, numerous variational formulations of mechanical phenomena have been proposed, and although the link to optimization has often been somewhat obscured in the subsequent development of numerical methods, it is in fact as strong as ever. In this talk, I will summarize some of the recent developments in the application of modern mathematical programming methods to problems involving the simulation of mechanical phenomena. While the methodology is quite general, emphasis will be on static and dynamic deformation processes in civil engineering, geomechanics and the earth sciences.
The Feasibility Pump (FP) has proved to be an effective method for finding feasible solutions to Mixed-Integer Programming problems. We investigate the benefits of replacing the rounding procedure with a more sophisticated integer line search that efficiently explores a larger set of integer points with the aim of obtaining an integer feasible solution close to an FP iterate. An extensive computational study on 1000+ benchmark instances demonstrates the effectiveness of the proposed approach.
A common issue when integrating airline planning processes
is the long planning horizon of the crew pairing problem. We propose a
new approach to the crew pairing problem through which we retain a
significant amount of flexibility. This allows us to solve an
integrated aircraft routing, crew pairing, and tail number assignment
problem only a few days before the day of operations and with a rolling
planning horizon. The model simultaneously schedules appropriate rest
periods for all crews and maintenance checks for all tail numbers.
A Branch-and-Price method is proposed in which each tail number and
each 'crew block' is formulated as a subproblem.
A water and sewage system, a power grid, a
telecommunication network, are all examples of network
infrastructures. Network infrastructures are a common phenomenon in
many industries. A network infrastructure is characterized by physical
links and connection points. Examples of physical links are pipes
(water and sewage system), fiber optic cables (telecommunication
network), power lines (power grid), and tracks (rail network). Such
network infrastructures have to be maintained and, often, have to be
upgraded or expanded. Network upgrades and expansions typically occur
over a period of time due to budget constraints and other
considerations. Therefore, it becomes important to determine both when
and where network upgrades and expansions should take place so as to
minimize the infrastructure investment as well as current and future
operational costs.
We introduce a class of multi-period network infrastructure expansion
problems that allow us to study the key issues related to the choice
and timing of infrastructure expansions and their impact on the costs
of the activities performed on that infrastructure. We focus on the
simplest variant, an incremental shortest path problem (ISPP). We show
that even ISPP is NP-hard, we introduce a special case that is
polynomially solvable, we derive structural solution properties, we
present an integer programming formulation and classes of valid
inequalities, and discuss the results of a computational study.
The classical single period problem (SPP) has wide applicability, especially in the service industries which dominate the economy. In this paper a single period production problem is considered as a specific type of SPP. The SPP model is extended by considering the probability of scrap and rework in production at the beginning of and during the period. The optimal solution maximizing the expected total profit is obtained. When scrap items and defective items requiring rework are produced, the optimal profit of the system is reduced in comparison to an ideal production system. Moreover, the profit reduction is more sensitive to an increase in the probability of producing scrap items than in the probability of producing defective items. These results can help managers make the right decision about changing or revising machines or technologies.
In this presentation I briefly discuss practical and philosophical issues related to the role of the peer-review process in maintaining the quality of scientific publications. The discussion is based on, among other things, my experience over the past eight years in containing the spread of voodoo decision theories in Australia. To motivate the discussion, I ask: how do you justify the use of a model of local robustness (operating in the neighborhood of a wild guess) to manage Black Swans and Unknown Unknowns?
Designing a complex system requires competencies at both the micro and macro levels to capture the underlying structure of the problem and to ensure convergence to a good solution. Systems such as complex organizations, complex New Product Development (NPD) and complex networks of firms (Supply Chains, or SC) require competencies at both the macro level (coordination and integration) and the micro level (capable designers and teams in NPD, capable firms in SC). Given the high complexity of such problems at both levels, two kinds of errors can occur at each: 1) acceptance of a wrong solution or rejection of a right solution at the micro level; 2) coordination of entities that do not need any coordination [e.g., teams or designers in NPD may spend too much time in meetings, and firms in an SC may lose flexibility due to limitations imposed by powerful leader firms], or a lack of deployed resources for entities that do need coordination [e.g., inconsistencies in decisions made in decentralized systems such as NPD and SC]. In this paper a simple and parsimonious Agent Based Model (ABM) of NK type is built and simulated to study these complex interactive systems. The results of the simulations provide some insights into the imperfect management of such systems. For instance, we found that asymmetry in either of the above errors favours a particular policy for managing these systems.
A problem that frequently arises in environmental surveillance is where to place a set of sensors in order to maximise collected information. In this article we compare four methods for solving this problem: a discrete approach based on the classical k-median location model, a continuous approach based on the minimisation of the prediction error variance, an entropy-based algorithm, and simulated annealing. The methods are tested on artificial data and data collected from a network of sensors installed in the Springbrook National Park in Queensland, Australia, for the purpose of tracking the restoration of biodiversity. We present an overview of these methods and a comparison of results.
This talk presents an innovative model for describing the effects of quality management (QM) on organizational productivity, traditionally researched using statistical models. Learning inside organizations, combined with the information-processing metaphor of organizations, is applied to build a computational model for this research. A reinforcement learning (RL) algorithm is implemented in the computational model to characterize the effects of quality leadership on productivity. The results show that effective quality leadership, being a balanced combination of exploration of new actions and exploitation of previously good actions, outperforms pure exploration or pure exploitation strategies in the long run. However, pure exploitation outperforms the exploration and RL algorithms in the short term. Furthermore, the effects of the complexity of customer requirements on productivity are investigated. From the results it can be argued that more complexity usually leads to less productivity. Also, the gap between the random-action algorithm and RL is reduced as the complexity of customer requirements increases. As regards agent types, it can be inferred that well-balanced business processes comprised of similar agents (in terms of processing time and accuracy) perform better than other scenarios.
Modular forms have had an important role in number theory for over one hundred years. Modular forms are also of interest in areas such as topology, cryptography and communications network theory. More recently, Peter Sarnak’s August talk, "Chaos, Quantum Mechanics and Number Theory" strongly suggested a link between modular forms and quantum mechanics. In this talk we explain modular forms, in the context of seeking a formula for the number of representations of an integer as the sum of four squares.
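The four-squares count the talk seeks a formula for is classically given by Jacobi's theorem: the number of representations of $n$ as an ordered sum of four squares (counting signs and order) is $8$ times the sum of the divisors of $n$ not divisible by $4$. A brute-force cross-check of this known fact (my illustration, not from the talk):

```python
from math import isqrt

def r4(n):
    """Number of ways to write n = a^2 + b^2 + c^2 + d^2 with
    (a, b, c, d) ordered integer tuples, counted by brute force."""
    m = isqrt(n)
    count = 0
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            for c in range(-m, m + 1):
                rest = n - a * a - b * b - c * c
                if rest >= 0:
                    d = isqrt(rest)
                    if d * d == rest:
                        count += 2 if d > 0 else 1  # +d and -d, or d = 0
    return count

def jacobi_r4(n):
    """Jacobi's formula: 8 * (sum of divisors of n not divisible by 4)."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

# e.g. r4(1) == jacobi_r4(1) == 8: the tuples (+-1, 0, 0, 0) in any slot.
```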
We look at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271. The abstract of the paper says "We describe a simple and yet surprisingly powerful probabilistic technique which shows how to find in a dense graph a large subset of vertices in which all (or almost all) small subsets have many common neighbors. Recently this technique has had several striking applications to Extremal Graph Theory, Ramsey Theory, Additive Combinatorics, and Combinatorial Geometry. In this survey we discuss some of them." My plan for the seminar is to start with a quick recap of the classics of extremal (hyper)graph theory (i.e. Turan, Ramsey, Ramsey-Turan), then look at some simple examples for the probabilistic method in action, and finally come to the striking applications mentioned in the quoted abstract. Only elementary probability is required.
In this talk I attempt to explain a general approach in proving irrationality and linear independence results for q-hypergeometric series. An explicit Pade construction is introduced with some (quantitative) arithmetic implications for well-known q-mathematical constants.
Probability densities are a major tool in exploratory statistics and stochastic modelling. I will talk about a numerical technique for the estimation of a probability distribution from scattered data using exponential families and a maximum a-posteriori approach with Gaussian process priors. Using Cameron-Martin theory, it can be seen that density estimation leads to a nonlinear variational problem with a functional defined on a reproducing kernel Hilbert space. This functional is strictly convex. A dual problem based on Fenchel duality will also be given. The (original) problem is solved using a Newton-Galerkin method with damping for global convergence. In this talk I will discuss some theoretical results relating to the numerical solution of the variational problem and the results of some computational experiments. A major challenge is of course the curse of dimensionality which appears when high-dimensional probability distributions are estimated.
Thomas will be finishing his talks this Thursday where he will finish looking at (parts of) the survey paper Dependent Random Choice by Jacob Fox and Benny Sudakov: http://arxiv.org/abs/0909.3271
We discuss the asymmetric sandwich theorem, a generalization of the Hahn–Banach theorem. As applications, we derive various results on the existence of linear functionals in functional analysis that include bivariate, trivariate and quadrivariate generalizations of the Fenchel duality theorem. We consider both results that use a simple boundedness hypothesis (as in Rockafellar’s version of the Fenchel duality theorem) and also results that use Baire’s theorem (as in the Robinson–Attouch–Brezis version of the Fenchel duality theorem).
Lattice paths effectively model phenomena in chemistry, physics and probability theory. Techniques of analytic combinatorics are very useful in determining asymptotic estimates for enumeration, although the asymptotic growth of the number of self-avoiding walks on a given lattice is known only empirically, not provably. We survey several families of lattice paths and their corresponding enumerative results, both explicit and asymptotic. We conclude with recent work on combinatorial proofs of asymptotic expressions for walks confined by two boundaries.
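To make the "confined by two boundaries" setting concrete, here is a small sketch of my own (not from the talk) counting $\pm1$ walks kept inside a strip $0 \le y \le h$ by the obvious transfer-style recursion:

```python
from math import comb

def walks_in_strip(n, h):
    """Number of n-step walks with steps +1/-1, starting and ending at
    height 0, confined to the strip 0 <= height <= h."""
    counts = [1] + [0] * h            # counts[y] = walks ending at height y
    for _ in range(n):
        new = [0] * (h + 1)
        for y, c in enumerate(counts):
            if y + 1 <= h:
                new[y + 1] += c       # step up, stay below the ceiling
            if y - 1 >= 0:
                new[y - 1] += c       # step down, stay above the floor
        counts = new
    return counts[0]

# With a ceiling too high to matter, the single-boundary (Dyck path)
# count returns: walks_in_strip(2*n, n) equals the Catalan number C_n.
```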
A Hamilton surface decomposition of a graph is a decomposition of the collection of shortest cycles in such a way that each member of the decomposition determines a surface (with maximum Euler characteristic). Some sufficient conditions for Hamilton surface decomposition of cartesian products of graphs are obtained. Necessary and sufficient conditions are found for the case when factors are even cycles.
The minimal degree of a finite group $G$ is the smallest non-negative integer $n$ such that $G$ embeds in $\Sym(n)$. This defines an invariant of the group, denoted $\mu(G)$. In this talk, I will present some interesting examples of calculating $\mu(G)$ and examine how this invariant behaves under taking direct products and homomorphic images.
In particular, I will focus on the problem of determining the smallest degree for which we obtain a strict inequality $\mu(G \times H) < \mu(G) + \mu(H)$, for two groups $G$ and $H$. The answer to this question also leads us to consider the problem of exceptional permutation groups. These are groups $G$ that possess a normal subgroup $N$ such that $\mu(G/N) > \mu(G)$. They are somewhat mysterious in the sense that a particular homomorphic image becomes 'harder' to faithfully represent than the group itself. I will present some recent examples of exceptional groups and detail recent developments in the 'abelian quotients conjecture', which states that $\mu(G/N) < \mu(G)$ whenever $G/N$ is abelian.
We prove that it is NP-hard for a coalition of two manipulators to compute how to manipulate the Borda voting rule. This resolves one of the last open problems in the computational complexity of manipulating common voting rules. Because of this NP-hardness, we treat computing a manipulation as an approximation problem in which we try to minimize the number of manipulators. Based on ideas from bin packing and multiprocessor scheduling, we propose two new approximation methods to compute manipulations of the Borda rule. Experiments show that these methods significantly outperform the previous best known approximation method. We are able to find optimal manipulations in almost all the randomly generated elections tested. Our results suggest that, whilst computing a manipulation of the Borda rule by a coalition is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice.
We also consider Nanson’s and Baldwin’s voting rules that select a winner by successively eliminating candidates with low Borda scores. We theoretically and experimentally demonstrate that these rules are significantly more difficult to manipulate compared to the Borda rule. In particular, with unweighted votes, it is NP-hard to manipulate either rule with one manipulator, whilst with weighted votes, it is NP-hard to manipulate either rule with a small number of candidates and a coalition of manipulators.
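For readers unfamiliar with the rule being manipulated, here is a minimal sketch of Borda scoring (my illustration, not the paper's code): with $m$ candidates, each voter awards $m-1$ points to their top choice, $m-2$ to the next, and so on down to $0$.

```python
def borda_scores(profile):
    """Borda count over a list of rankings (tuples of candidates,
    best first): position i in a ranking earns m - 1 - i points."""
    m = len(profile[0])
    scores = {}
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (m - 1 - pos)
    return scores

# Three voters over candidates a, b, c:
profile = [('a', 'b', 'c'), ('a', 'c', 'b'), ('b', 'a', 'c')]
# a earns 2+2+1 = 5, b earns 1+0+2 = 3, c earns 0+1+0 = 1, so a wins.
```

A manipulating coalition, in this vocabulary, chooses its rankings to drag a preferred candidate's score above the others; the paper's hardness result concerns computing such rankings.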
We consider the problem of packing ellipsoids of different size and shape in an ellipsoidal container so as to minimize a measure of total overlap. The motivating application is chromosome organization in the human cell nucleus. A bilevel optimization formulation is described, together with an algorithm for the general case and a simpler algorithm for the special case in which all ellipsoids are in fact spheres. We prove convergence to stationary points of this nonconvex problem, and describe computational experience. The talk describes joint work with Caroline Uhler (IST, Vienna).
Having been constructed as trading strategies, option spreads are also used in margin calculations for offsetting positions in options. All option spreads that appear in trading and margining practice have two, three or four legs. As shown in Rudd and Schroeder (Management Sci, 1982), the problem of margining option portfolios where option spreads with two legs are used for offsetting can be solved in polynomial time by network flow algorithms. However, spreads with only two legs do not provide sufficient accuracy in measuring risk. Therefore, margining practice also employs spreads with three and four legs. A polynomial-time solution to the extension of the problem where option spreads with three and four legs are also used for offsetting is not known. We propose a heuristic network-flow algorithm for this extension and present a computational study that demonstrates high efficiency of the proposed algorithm in margining practice.
We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed and convex set which is by itself an optimal set of another convex problem. We introduce a gradient-based method, called the minimal norm gradient method, for solving this class of problems, and establish the convergence of the sequence generated by the algorithm as well as a rate of convergence of the sequence of function values. A portfolio optimization example is given in order to illustrate our results.
Graph closures have recently become an important tool in Hamiltonian graph theory, since the use of closure techniques often substantially simplifies the structure of a graph under consideration while preserving some of its prescribed properties (usually of Hamiltonian type). In the talk we show the basic ideas behind the construction of some graph closures for claw-free graphs, together with techniques that allow one to reduce the problem to cubic graphs. The approach will be illustrated with a recently introduced closure concept for Hamilton-connectedness in claw-free graphs and, as an application, an asymptotically sharp Ore-type degree condition for Hamilton-connectedness in claw-free graphs will be obtained.
Two sets of functions are studied to ascertain whether they are Stieltjes functions and whether they are completely monotonic. The first group of functions are all built from the Lambert $W$ function. The $W$ function will be reviewed briefly. It will be shown that $W$ is Bernstein and various functions containing $W$ are Stieltjes. Explicit expressions for the Stieltjes transforms are obtained. We also give some new results regarding general Stieltjes functions.
The second set of functions was posed as a challenge by Christian Berg in 2002. The functions are $(1+a/x)^{(x+b)}$ for various $a$ and $b$. We show that the function is Stieltjes for some ranges of $a,b$ and investigate complete monotonicity experimentally for a larger range. We claim an accurate experimental value for the range.
My co-authors are Rob Corless, Peter Borwein, German Kalugin and Songxin Liang.
Let $K$ be a complete discrete valuation field of characteristic zero with residue field $k_K$ of characteristic $p > 0$. Let $L/K$ be a finite Galois extension with Galois group $G = \text{Gal}(L/K)$ and suppose that the induced extension of residue fields $k_L/k_K$ is separable. Let $W_n(.)$ denote the ring of $p$-typical Witt vectors of length $n$. Hesselholt [Galois cohomology of Witt vectors of algebraic integers, Math. Proc. Cambridge Philos. Soc. 137(3) (2004), 551–557] conjectured that the pro-abelian group ${H^1(G,W_n(O_L))}_{n>0}$ is isomorphic to zero. Hogadi and Pisolkar [On the cohomology of Witt vectors of $p$-adic integers and a conjecture of Hesselholt, J. Number Theory 131(10) (2011), 1797–1807] have recently provided a proof of this conjecture. In this talk, we present a simplified version of the original proof which avoids many of the calculations present in that version.
Integrability theory is the area of mathematics in which methods are developed for the exact solution of partial differential equations, as well as for the study of their properties. We concentrate on PDEs appearing in Physics and other applications. Darboux transformations constitute one of the important methods used in integrability theory and, as well as being a method for the exact solution of linear PDEs, they are an essential part of the method of Lax pairs, used for the solution of non-linear PDEs. A large series of Darboux transformations may be constructed using Wronskians built from some number of individual solutions of the original PDE. In this talk we prove a long-standing conjecture that this construction captures all possible Darboux transformations for transformations of order two, while for transformations of order one the construction captures everything but two Laplace transformations. An introduction into the theory will be provided.
Power line communication has been proposed as a possible solution to the "last mile" problem in telecommunications i.e. providing economical high speed telecommunications to millions of end users. As well as the usual background interference (noise), two other types of noise must also be considered for any successful practical implementation of power line communication. Coding schemes have traditionally been designed to deal only with background noise, and in such schemes it is often assumed that background noise affects symbols in codewords independently at random. Recently, however, new schemes have been proposed to deal with the extra considerations in power line communication. We introduce neighbour transitive codes as a group theoretic analogue to the assumption that background noise affects symbols independently at random. We also classify a family of neighbour transitive codes, and show that such codes have the necessary properties to be useful in power line communication.
We present a technique for enhancing a progressive hedging-based metaheuristic for a network design problem that models demand uncertainty with scenarios. The technique uses machine learning methods to cluster scenarios, and the metaheuristic then repeatedly solves multi-scenario subproblems (as opposed to the single-scenario subproblems used in existing work). A computational study shows that solving multi-scenario subproblems leads to a significant increase in solution quality, and that how these multi-scenario subproblems are constructed directly impacts solution quality. We also discuss how scenario grouping can be leveraged in a Benders' approach and show preliminary results on its effectiveness. This is joint work with Theo Crainic and Walter Rei at the University of Quebec at Montreal.
We start this talk by introducing some basic definitions and properties relative to geodesics in the setting of metric spaces. After showing some important examples of geodesic metric spaces (which will be used throughout this talk), we shall define the concept of firmly nonexpansive mappings and prove the existence, under mild conditions, of periodic points and fixed points for this class of mappings. Some of these results unify and generalize previous ones. We shall give a result on the $\Delta$-convergence of Picard iterates of firmly nonexpansive mappings to a fixed point, which is obtained from the asymptotic regularity of this class of iterates. Moreover, we shall obtain an effective rate of asymptotic regularity for firmly nonexpansive mappings (this result is new, as far as we know, even in linear spaces). Finally, we shall apply our results to a minimization problem. More precisely, we shall prove the $\Delta$-convergence to a minimizer of a proximal point-like algorithm when applied to a convex, proper, lower semi-continuous function defined on a CAT(0) space.
The Discrete Mathematics Instructional Seminar will be getting underway again this Thursday.
Parabolic obstacle problems find applications in the financial markets for pricing American put options. We present a mixed and an equivalent variational inequality hp-interior penalty DG (IPDG) method, combined with an hp-time DG (TDG) method, for solving parabolic obstacle problems approximately. The contact conditions are resolved by a biorthogonal Lagrange multiplier and are component-wise decoupled. These decoupled contact conditions are equivalent to finding the root of a non-linear complementarity function. This non-linear problem can in turn be solved efficiently by a semi-smooth Newton method. For the hp-adaptivity, a p-hierarchical error estimator in conjunction with a local analyticity estimate is employed. For the stationary problem this leads to exponential convergence, and for the instationary problem to greatly improved convergence rates. Numerical experiments are given demonstrating the strengths and limitations of the approaches.
Network infrastructures are a common phenomenon. Network upgrades and expansions typically occur over time due to budget constraints. We introduce a class of incremental network design problems that allows investigation of many of the key issues related to the choice and timing of infrastructure expansions and their impact on the costs of the activities performed on that infrastructure. We focus on the simplest variant, incremental network design with shortest paths, and show that even this variant is NP-hard. We investigate structural properties of optimal solutions, analyze the worst-case performance of natural greedy heuristics, derive a 4-approximation algorithm, and present an integer programming formulation together with a small computational study.
Selection theorems assert that one can pick a well behaved function from a corresponding multifunction. They play a very important role in modern optimization theory. I will survey their structure and some applications before sketching some important open research problems.
The celebrated Littlewood conjecture in Diophantine approximation concerns the simultaneous approximation of two real numbers by rationals with the same denominator. A cousin of this conjecture is the mixed Littlewood conjecture of de Mathan and Teulié, which is concerned with the approximation of a single real number, but where some denominators are preferred to others.
In the talk, we will derive a metrical result extending work of Pollington and Velani on the Littlewood conjecture. Our result implies the existence of an abundance of numbers satisfying both conjectures.
Selection theorems assert that one can pick a well behaved function from a corresponding multifunction. They play a very important role in modern optimization theory. In Part I, I will survey their structure and some applications before sketching some important applications and open research problems in Part II.
In this talk, we present a numerical method for a class of generalized inequality constrained integer linear programming (GILP) problems that includes the usual mixed-integer linear programming (MILP) problems as special cases. Instead of restricting certain variables to integer values as in MILP, we require in these GILP problems that some of the constraint functions take integer values. We present a tighten-and-branch method that has a number of advantages over the usual branch-and-cut algorithms. This includes the ability of keeping the number of constraints unchanged for all subproblems throughout the solution process and the capability of eliminating equality constraints. In addition, the method provides an algorithm framework that allows the existing cutting-plane techniques to be incorporated into the tightening process. As a demonstration, we will solve a well-known "hard ILP problem".
Symbolic and numeric computation have been distinguished by definition: numeric computation puts numerical values in its variables as soon as possible, symbolic computation as late as possible. Chebfun blurs this distinction, aiming for the speed of numerics with the generality and flexibility of symbolics. What happens when someone who has used both Maple and Matlab for decades, and has thereby absorbed the different fundamental assumptions into a "computational stance", tries to use Chebfun to solve a variety of computational problems? This talk reports on some of the outcomes.
The Mathematics and Statistics Learning Centre was established at the University of Melbourne over a decade ago, to respond to the needs of, initially, first year students of mathematics and statistics. The role of the centre and its Director has grown. The current Director, Dr Deborah King, will expound upon her role in the Centre.
The modernization of infrastructure networks requires coordinated planning and control. Traffic networks and electricity grids raise similar issues about how to achieve substantial new levels of effectiveness and efficiency. For instance, power grids need to integrate renewable energy sources and electric vehicles. It is clear that all this can only be achieved by greater reliance on systematic planning in the presence of uncertainty, and on sensing, communications, computing and control on an unprecedented scale, these days captured in the term "smart grids". This talk will outline current research on planning future grids and control of smart grids. In particular, the possible roles of network science will be emphasized and the challenges arising will be discussed.
The problem posed by Hilbert in 1900 was resolved in the 1930s independently by A. Gelfond and Th. Schneider. The statement is that $a^b$ is transcendental for algebraic $a \ne 0,1$ and irrational algebraic $b$. The aim of the two 2-hour lectures is to give a proof of this result using the so-called method of interpolation determinants.
In this paper, we construct maximally monotone operators that are not of Gossez's dense-type (D) in many nonreflexive spaces. Many of these operators also fail to possess the Brøndsted-Rockafellar (BR) property. Using these operators, we show that the partial inf-convolution of two BC-functions will not always be a BC-function. This provides a negative answer to a challenging question posed by Stephen Simons. Among other consequences, we deduce that every Banach space which contains an isomorphic copy of the James space J or its dual $J^*$, or of $c_0$ or its dual $l^1$, admits a non type (D) operator.
In this talk, we consider the automorphism groups of the Cayley graph with respect to the Coxeter generators and the Davis complex of an arbitrary Coxeter group. We determine for which Coxeter groups these automorphism groups are discrete. In the case where they are discrete, we express them as semidirect products of two obvious families of automorphisms. This extends a result of Haglund and Paulin.
We investigate various properties of the sublevel set $\{x : g(x) \leq 1\}$ and the integration of $h$ on this sublevel set when $g$ and $h$ are positively homogeneous functions. For instance, the latter integral reduces to integrating $h\exp(- g)$ on the whole space $\mathbb{R}^n$ (a non-Gaussian integral) and when $g$ is a polynomial, then the volume of the sublevel set is a convex function of its coefficients.
In fact, whenever $h$ is non-negative, the functional $\int \phi(g)h dx$ is a convex function of $g$ for a large class of functions $\phi:\mathbb{R}_{+} \to \mathbb{R}$. We also provide a numerical approximation scheme to compute the volume or integrate $h$ (or, equivalently, to approximate the associated non-Gaussian integral). We also show that finding the sublevel set $\{x : g(x) \leq 1\}$ of minimum volume that contains some given subset $K$ is a (hard) convex optimization problem for which we also propose two convergent numerical schemes. Finally, we provide a Gaussian-like property of non-Gaussian integrals for homogeneous polynomials that are sums of squares and critical points of a specific function.
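The reduction mentioned above can be illustrated in a special case: for $g$ positively homogeneous of degree $p$ and $h$ homogeneous of degree $q$, one has $\int_{\mathbb{R}^n} h\, e^{-g}\,dx = \Gamma(1+(n+q)/p)\int_{\{g\le 1\}} h\,dx$. The following crude Riemann-sum check in Python for $g = x^4+y^4$ and $h = 1$ is illustrative only (the grid bounds and step are ad hoc choices, not from the talk):

```python
import math

# Check: integral of h*exp(-g) over R^2 equals Gamma(1+(n+q)/p) times
# the integral of h over the sublevel set {g <= 1}, for
# g(x,y) = x^4 + y^4 (homogeneous of degree p = 4) and h = 1 (q = 0).
n, p, q = 2, 4, 0
step, R = 0.01, 3.0   # crude midpoint grid; exp(-g) is negligible beyond |x| ~ 3

lhs = rhs = 0.0
m = int(R / step)
for i in range(-m, m):
    for j in range(-m, m):
        x, y = (i + 0.5) * step, (j + 0.5) * step
        g = x**4 + y**4
        lhs += math.exp(-g) * step * step      # whole-space (non-Gaussian) integral
        if g <= 1.0:
            rhs += step * step                 # volume of the sublevel set

print(lhs, math.gamma(1 + (n + q) / p) * rhs)  # the two values should agree
```

The special case $g(x) = |x|^2$, $h = 1$ recovers the familiar Gaussian computation, since $\Gamma(1+n/2)\cdot\mathrm{vol}(B^n) = \pi^{n/2}$.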
Simultaneous Localisation and Mapping (SLAM) has become prominent in the field of robotics over the last decade, particularly in application to autonomous systems. SLAM enables any system equipped with exteroceptive (and often inertial) sensors to simultaneously update its own positional estimate and its map of the environment by utilising information collected from the surroundings. The solution to the probabilistic SLAM problem can be derived using Bayes' theorem to yield estimates of the system state and covariance. In recursive form, the basic prediction-correction algorithm employs an Extended Kalman Filter (EKF) with Cholesky decomposition for numerical stability during inversion. This talk will present the mathematical formulation and solution of the SLAM problem, along with some algorithms used in implementation. We will then look at some applications of SLAM in the real world and discuss some of the challenges for future development.
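The prediction-correction recursion underlying the EKF can be sketched in its simplest scalar form. This toy Kalman filter is illustrative only: the motion model, noise levels and measurements below are invented, and a real EKF-SLAM state is a large vector with covariance matrices rather than a single number.

```python
# Minimal scalar Kalman filter predict/correct cycle -- a toy sketch of the
# recursion underlying EKF-SLAM (state collapsed to one number; the
# motion/measurement models, noise levels Q, R and the data are made up).
def kalman_step(x, P, u, z, Q=0.1, R=0.5):
    # Predict: move by control input u, inflate uncertainty by process noise Q.
    x_pred, P_pred = x + u, P + Q
    # Correct: blend in measurement z according to the Kalman gain.
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

x, P = 0.0, 1.0
for u, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
    x, P = kalman_step(x, P, u, z)
print(round(x, 3), round(P, 3))  # estimate tracks the measurements; P shrinks
```

The same predict/correct structure, with Jacobians replacing the scalar coefficients, gives the EKF used in the SLAM formulation.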
In my opinion, the most significant unsolved problem in graph decompositions is the cycle double cover conjecture. This begins a series of talks on this conjecture covering its background, relations to other problems, and partial results.
This will be an introductory talk which begins by describing the four colour theorem and finite projective planes in the setting of graph decompositions. A problem posed by Ringel at a graph theory meeting in Oberwolfach in 1967 will then be discussed. This problem is now widely known as the Oberwolfach Problem, and is a generalisation of a question asked by Kirkman in 1850. It concerns decompositions of complete graphs into isomorphic copies of spanning regular graphs of degree two.
In this talk, we consider the structure of maximally monotone operators in Banach spaces whose domains have nonempty interior, and we present new and explicit structure formulas for such operators. Along the way, we provide new proofs of the norm-to-weak* closedness and property (Q) of these operators (recently established by Voisei). Various applications and limiting examples are given. This is joint work with Jon Borwein.
Brian Alspach will continue with "The Anatomy of a Famous Conjecture" this Thursday. One can easily pick up the thread this week without having attended last week, but if you miss this week it will not be easy to join in next week.
I have embarked on a project of looking for Hamilton paths in Cayley graphs on finite Coxeter groups. This talk is a report on the progress thus far.
The exceptional Lie group $G_2$ is a beautiful 14-dimensional continuous group, having relations with such diverse notions as triality, the 7-dimensional cross product and exceptional holonomy. It was found abstractly by Killing in 1887 (complex case) and then realized as a symmetry group by Engel and Cartan in 1894 (real split case). Later, in 1910, Cartan returned to the topic and realized split $G_2$ as the maximal finite-dimensional symmetry algebra of a rank 2 distribution in $\mathbb{R}^5$. In other words, Cartan classified all symmetry groups of Monge equations of the form $y'=f(x,y,z,z',z'')$. I will discuss the higher-dimensional generalization of this fact, based on joint work with Ian Anderson. The compact real form of $G_2$ was realized by Cartan as the automorphism group of the octonions in 1914. In the talk I will also explain how to realize this $G_2$ as the maximal symmetry group of a geometric object.
12:00-1:00 | Michael Coons (University of Waterloo) |
1:00-2:00 | Claus Koestler (Aberystwyth University) |
2:00-3:00 | Eric Mortenson (The University of Queensland) |
3:00-4:00 | Ekaterina Shemyakova (University of Western Ontario) |
Brian Alspach will continue with "The Anatomy Of A Famous Conjecture" this Thursday. One can easily pick up the thread this week without having attended last week, but if you miss this week it will not be easy to join in next week.
In this talk, we consider a general convex feasibility problem in Hilbert space, and analyze a primal-dual pair of problems generated via a duality theory introduced by Svaiter. We present some algorithms and their convergence properties. The focus is a general primal-dual principle for strong convergence of some classes of algorithms. In particular, we give a different viewpoint for the weak-to-strong principle of Bauschke and Combettes. We also discuss how subgradient and proximal type methods fit in this primal-dual setting.
Joint work with Maicon Marques Alves (Universidade Federal de Santa Catarina-Brazil)
The talk will outline some topics associated with constructions for Hadamard matrices, in particular a relatively simple construction given by a sum of Kronecker products of ingredient matrices obeying certain conditions. Consideration of the structure of the ingredient matrices leads, on the one hand, to consideration of division algebras and Clifford algebras, and on the other hand, to searching for multisets of {-1,1} ingredient matrices. Structures within the sets of ingredient matrices can make searching more efficient.
We consider some fundamental generalized Mordell-Tornheim-Witten (MTW) zeta-function values along with their derivatives, and explore connections with multiple-zeta values (MZVs). To achieve these results, we make use of symbolic integration, high precision numerical integration, and some interesting combinatorics and special-function theory.
Our original motivation was to represent previously unresolved constructs such as Eulerian log-gamma integrals. Indeed, we are able to show that all such integrals belong to a vector space over an MTW basis, and we also present, for a substantial subset of this class, explicit closed-form expressions. In the process, we significantly extend methods for high-precision numerical computation of polylogarithms and their derivatives with respect to order. That said, the focus of our paper is the relation between MTW sums and classical polylogarithms. It is the adumbration of these relationships that makes the study significant.
The associated paper (with DH Bailey and RE Crandall) is at http://carmasite.newcastle.edu.au/jon/MTW1.pdf.
Approximation theory is a classical part of the analysis of functions defined on a Euclidean space or its subsets, and the foundation of its applications, while problems related to high or infinite dimensions create well-known challenges even in the setting of Hilbert spaces. The stability (uniform continuity) of a mapping is one of the traditional properties investigated in various branches of pure and applied mathematics, with further applications in engineering. Examples include the analysis of linear and non-linear PDEs, (short-term) prediction problems, decision-making and data evolution.
We describe the uniform approximation properties of the uniformly continuous mappings between the pairs of Banach and, occasionally, metric spaces from various wide parameterised and non-parameterised classes of spaces with or without the local unconditional structure in a quantitative manner. The striking difference with the finite-dimensional setting is represented by the presence of Tsar'kov's phenomenon. Many tools in use are developed under the scope of our quasi-Euclidean approach. Its idea seems to be relatively natural in light of the compressed sensing and distortion phenomena.
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
This talk will discuss opportunities and challenges related to the development and application of operations research techniques to transportation and logistics problems in non-profit settings. Much research has been conducted on transportation and logistics problems in commercial settings where the goal is either to maximize profit or to minimize cost. Significantly less work has been conducted for non-profit applications. In such settings, the objectives are often more difficult to quantify since issues such as equity and sustainability must be considered, yet efficient operations are still crucial. This talk will present several research projects that introduce new approaches tailored to the objectives and constraints unique to non-profit agencies, which are often concerned with obtaining equitable solutions given limited, and often uncertain, budgets, rather than with maximizing profits.
This talk will assess the potential of operations research to address the problems faced by non-profit agencies and attempt to understand why these problems have been understudied within the operations research community. To do so, we will ask two questions: are non-profit operations problems rich enough for academic study, and are solutions to non-profit operations problems applicable to real communities?
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
This talk will survey some of the classical and recent results concerning operators composed of a projection onto a compact set in time, followed by a projection onto a compact set in frequency. Such "time- and band-limiting" operators were studied by Landau, Slepian, and Pollak in a series of papers published in the Bell System Technical Journal in the early 1960s, identifying the eigenfunctions, providing eigenvalue estimates, and describing spaces of "essentially time- and band-limited signals."
Further progress on time- and band-limiting has been intermittent, but genuine recent progress has been made in numerical analysis, sampling theory, and extensions to multiband signals, all driven to some extent by potential applications in communications. After an outline of the historical developments in the mathematical theory of time- and band-limiting, some details of the sampling theory and the multiband setting will be given. Part of the latter represents joint work with Jeff Hogan and Scott Izu.
Brian Alspach will continue his discussion "The Anatomy Of A Famous Conjecture."
This involves (in pre-nonstandard-analysis times) the development of a simple system of infinites and infinitesimals that helps to clarify Cantor's ternary set, nonmeasurable sets and Lebesgue integration. The talk will include other memories as a maths student at Newcastle University College, Tighes Hill, from 1959 to 1961.
This week Brian Alspach concludes his series of talks entitled "The Anatomy Of A Famous Conjecture." We shall be in V27 - note room change.
A graph on v vertices is called pancyclic if it contains cycles of every length from 3 to v. Obviously such graphs exist — the complete graph on v vertices is an example. We shall look at the question, what is the minimum number of edges in a pancyclic graph? Interestingly, this question was "solved", incorrectly, in 1978. A complete solution is not yet known.
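For very small graphs, pancyclicity can be checked by brute force directly from the definition. The helper below is illustrative only and not from the talk:

```python
from itertools import permutations

def is_pancyclic(adj):
    """Brute-force check that a graph (dict: vertex -> set of neighbours)
    contains a cycle of every length from 3 to v.  Feasible only for tiny v."""
    v = len(adj)
    verts = sorted(adj)

    def has_cycle(length):
        for perm in permutations(verts, length):
            if perm[0] != min(perm):        # skip rotations of the same cycle
                continue
            edges = zip(perm, perm[1:] + (perm[0],))
            if all(b in adj[a] for a, b in edges):
                return True
        return False

    return all(has_cycle(l) for l in range(3, v + 1))

# K_5 (complete graph) is pancyclic; C_5 (the 5-cycle) has only 5-cycles.
k5 = {i: {j for j in range(5) if j != i} for i in range(5)}
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_pancyclic(k5), is_pancyclic(c5))   # True False
```

The complete graph passes because any ordered selection of $l$ vertices closes into a cycle, which is exactly why $K_v$ witnesses existence in the question above.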
This week the speaker in the Discrete Mathematics Instructional Seminar is Judy-anne Osborn who will be discussing Hadamard matrices.
There is a high prevalence of tuberculosis (TB) in Papua New Guinea (PNG), which is exacerbated by the presence of drug-resistant TB strains and HIV infection. This is an important public health issue not only locally within PNG, but also in Australia due to the high cross-border traffic in the Torres Strait Island–Western Province (PNG) treaty region. A metapopulation model is used to evaluate the effect of varying control strategies in the region, and some initial cost-benefit analysis figures are presented.
The double zeta values are one natural way to generalise the Riemann zeta function at the positive integers; they are defined by $\zeta(a,b) = \sum_{n=1}^\infty \sum_{m=1}^{n-1} \frac{1}{n^a m^b}$. We give a unified and completely elementary method to prove several sum formulae for the double zeta values. We also discuss an experimental method for discovering such formulae.
Moreover, we use a reflection formula and recursions involving the Riemann zeta function to obtain new relations of closely related functions, such as the Witten zeta function, alternating double zeta values, and more generally, character sums.
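Euler's classical evaluation $\zeta(2,1) = \zeta(3)$ is the simplest formula of this kind, and can be verified numerically straight from the definition above (the truncation point below is an ad hoc choice):

```python
# Numerically verify Euler's identity zeta(2,1) = zeta(3) under the
# convention zeta(a,b) = sum over n > m >= 1 of 1/(n^a m^b).
N = 20000                   # truncation point (ad hoc)
H = 0.0                     # running harmonic number H_{n-1}
z21 = z3 = 0.0
for n in range(1, N + 1):
    z21 += H / n**2         # inner sum over m < n of 1/m is H_{n-1}
    H += 1.0 / n
    z3 += 1.0 / n**3
print(z21, z3)              # both approach zeta(3) = 1.2020569...
```

The double sum converges slowly (the tail is of order $\log N / N$), which is one reason the experimental discovery of such formulae uses higher-precision machinery than this sketch.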
TBA
This week the speaker in the Discrete Mathematics Instructional Seminar is Judy-anne Osborn who will be discussing Hadamard matrices.
I will give a brief introduction to the theory of self-similar groups, focusing on a couple of pertinent examples: Grigorchuk's group of intermediate growth, and the basilica group.
Based on generalized backward shift operators we introduce adaptive Fourier decomposition. We then discuss its relations and applications to (1) system identification; (2) computation of the Hilbert transform; (3) an algorithm for the best order-$n$ rational approximation to functions in the Hardy space $H^2$; (4) forward and backward shift invariant spaces; (5) band preservation in filter design; (6) phase retrieval; and (7) the Bedrosian identity. The talk also concerns possible generalizations of the theory and applications to higher dimensional spaces.
The Douglas-Rachford algorithm is an iterative method for finding a point in the intersection of two (or more) closed sets. It is well-known that the iteration (weakly) converges when it is applied to convex subsets of a Hilbert space. Despite the absence of a theoretical justification, the algorithm has also been successfully applied to various non-convex practical problems, including finding solutions for the eight queens problem, or sudoku puzzles. In particular, we will show how these two problems can be easily modelled.
With the aim of providing some theoretical explanation of the convergence in the non-convex case, we have established a region of convergence for the prototypical non-convex Douglas-Rachford iteration which finds a point in the intersection of a line and a circle. Previous work was only able to establish local convergence, and was ineffective in that no explicit region of convergence could be given.
PS: Bring your hardest sudoku puzzle :)
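The prototypical line-and-circle iteration can be sketched in a few lines of Python. This is illustrative only: the starting point and iteration count are ad hoc choices, not taken from the talk's convergence analysis.

```python
import math

# Douglas-Rachford iteration for finding a point in the intersection of the
# unit circle A and the line B = {y = 0.5} (a non-convex feasibility problem).
# One step is T(x) = x - P_B(x) + P_A(2*P_B(x) - x).
def P_circle(x, y):                  # nearest point on the unit circle
    r = math.hypot(x, y)
    return (x / r, y / r)

def P_line(x, y, h=0.5):             # nearest point on the line y = h
    return (x, h)

x, y = 1.0, 0.7                      # arbitrary starting point
for _ in range(200):
    bx, by = P_line(x, y)
    ax, ay = P_circle(2 * bx - x, 2 * by - y)
    x, y = x - bx + ax, y - by + ay

print(x, y)                          # near the intersection (sqrt(3)/2, 0.5)
```

A fixed point of this map is exactly an intersection point of the line and the circle, which is why monitoring the iterates themselves suffices here.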
A body moves in a rarefied medium of resting particles and at the same time very slowly rotates (somersaults). Each particle of the medium is reflected elastically when hitting the body boundary (multiple reflections are possible). The resulting resistance force acting on the body depends on time; we are interested in minimizing its time-averaged value, which is called $R$. The value $R(B)$ is well defined in terms of billiards in the complement of $B$, for any bounded body $B \subset \mathbb{R}^d$, $d\geq 2$, with piecewise smooth boundary.
Let $C\subset\mathbb{R}^d$ be a bounded convex body and $C_1\subset C$ be another convex body with $\partial C_1 \cap \partial C=\varnothing$. It would be interesting to get an estimate for $$R(C_1,C)= \inf_{C_1\subset B \subset C} R(B). \qquad (1)$$ If $\partial C_1$ is close to $\partial C$, problem (1) can be referred to as minimizing the resistance of the convex body $C$ by "roughening" its surface. We cannot solve problem (1); however, we can find the limit $$\lim_{\text{dist}(\partial C_1,\partial C)\rightarrow 0} \frac{R(C_1,C)}{R(C)}. \qquad (2)$$
It will be explained that problem (2) can be solved by reduction to a special problem of optimal mass transportation, where the initial and final measurable spaces are complementary hemispheres, $X=\{x=(x_1,...,x_d)\in S^{d-1}: x_1\geq 0\}$ and $Y=\{x\in S^{d-1}:x_1\leq 0\}$. The transportation cost is the squared distance, $c(x,y)=\frac{1}{2}|x-y|^2$, and the measures in $X$ and $Y$ are obtained from the $(d-1)$-dimensional Lebesgue measure on the equatorial circle $\{x=(x_1,...,x_d):|x|\leq 1,x_1=0\}$ by parallel translation along the vector $e_1=(1,0,...,0)$. Let $C(\nu)$ be the total cost corresponding to the transport plan $\nu$ and let $\nu_0$ be the transport plan generated by parallel translation along $e_1$; then the value $\frac{\inf C(\nu)}{C(\nu_0)}$ coincides with the limit in (2).
Surprisingly, this limit does not depend on the body $C$ and depends only on the dimension $d$.
In particular, if $d=3$ ($d=2$), it equals (approximately) 0.96945 (0.98782). In other words, the resistance of a 3-dimensional (2-dimensional) convex body can be decreased by 3.05% (correspondingly, 1.22%) at most by roughening its surface.
Motivated by questions of algorithm analysis, we provide several distinct approaches to determining convergence and limit values for a class of linear iterations.
This is joint work with D. Borwein and B. Sims.
We consider the bipartite version of the degree/diameter problem; namely, find the maximum number Nb(d,D) of vertices in a bipartite graph of maximum degree d>2 and diameter D>2. The actual value of Nb(d,D) is still unknown for most (d,D) pairs.
The well-known Moore bound Mb(d,D) gives a general upper bound for Nb(d,D); graphs attaining this bound are called Moore (bipartite) graphs. Moore bipartite graphs are very scarce; they may only exist for D=3,4 or 6, but no other diameters. Interest has then shifted to investigate the existence or otherwise of graphs missing the Moore bound by a few vertices. A graph with order Mb(d,D)-e is called a graph of defect e.
It has been proved that bipartite graphs of defect 2 do not exist when D>3. In our paper we 'almost' prove that bipartite graphs of defect 4 cannot exist when D>4, thereby establishing a new upper bound on Nb(d,D) for more than 2/3 of all (d,D) combinations.
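The bipartite Moore bound itself is easy to compute by counting vertices level by level from an edge; the sketch below assumes the standard formula Mb(d,D) = 2(1 + (d-1) + ... + (d-1)^(D-1)).

```python
def bipartite_moore_bound(d, D):
    """Mb(d,D) = 2 * (1 + (d-1) + ... + (d-1)^(D-1)):
    a breadth-first count of vertices reachable from one edge."""
    return 2 * sum((d - 1) ** i for i in range(D))

# The Heawood graph attains Mb(3,3) = 14 and the Tutte-Coxeter
# graph attains Mb(3,4) = 30, so Moore bipartite graphs do exist
# for these small parameters.
print(bipartite_moore_bound(3, 3), bipartite_moore_bound(3, 4))
```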
We present a nonconvex bundle technique where function and subgradient values are available only up to an error tolerance which remains unknown to the user. The challenge is to develop an algorithm which converges to an approximate solution which, despite the lack of information, is as good as one can hope for. For instance, if data are known up to the error $O(\epsilon)$, the solution should also be accurate up to $O(\epsilon)$. We show that the oracle of downshifted tangents is an excellent tool to deal with this difficult situation.
Dr Koerber will speak about the experience of using MapleTA extensively in undergraduate teaching at the University of Adelaide, and demonstrate how they have been using the system there. Bio: Adrian Koerber is Director of First Year Studies in Mathematics at the University of Adelaide. His mathematical research is in the area of modelling gene networks.
Snarks are 3-regular graphs that are not 3-edge-colourable and are cyclically 4-edge-connected. They exist but are hard to find. On the other hand, it is believed that Cayley graphs can never be snarks. The latter is the subject of the next series of talks.
Hajek proved that a WUR Banach space is an Asplund space. This result suggests that the WUR property might have interesting consequences as a dual property. We show that
(i) every Banach space with a separable second dual can be equivalently renormed to have a WUR dual,
(ii) under certain embedding conditions a Banach space with WUR dual is reflexive.
Let $F(z)$ be a power series, say with integer coefficients. In the late 1920s and early 1930s, Kurt Mahler discovered that for $F(z)$ satisfying a certain type of functional equation (now called Mahler functions), the transcendence of the function $F(z)$ could be used to prove the transcendence of certain special values of $F(z)$. Mahler's main application at the time was to prove the transcendence of the Thue-Morse number $\sum_{n\geq 0}t(n)/2^n$ where $t(n)$ is either 0 or 1 depending on the parity of the number of 1s in the base 2 expansion of $n$. In this talk, I will talk about some of the connections between Mahler functions and finite automata and highlight some recent approaches to large problems in the area. If time permits, I will outline a new proof of a version of Carlson's theorem for Mahler functions; that is, a Mahler function is either rational or it has the unit circle as a natural boundary.
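The Thue-Morse number mentioned above is easy to approximate numerically; the following sketch evaluates the series exactly as stated in the abstract (numerically this is twice the commonly tabulated Thue-Morse constant 0.41245...).

```python
def t(n):
    """Thue-Morse sequence: parity of the number of 1s in the binary
    expansion of n."""
    return bin(n).count("1") % 2

# Partial sum of the series from the abstract, sum_{n>=0} t(n)/2^n;
# the tail beyond n = 60 is below 2^-59, so this is accurate to
# double precision.
s = sum(t(n) / 2.0 ** n for n in range(60))
print(s)
```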
(Joint speakers, Jon Borwein and Michael Rose)
Using fractal self-similarity and functional-expectation relations, the classical theory of box integrals is extended to encompass a new class of fractal “string-generated Cantor sets” (SCSs) embedded in unit hypercubes of arbitrary dimension. Motivated by laboratory studies on the distribution of brain synapses, these SCSs were designed for dimensional freedom: a suitable choice of generating string allows for fine-tuning the fractal dimension of the corresponding set. We also establish closed forms for certain statistical moments on SCSs and report various numerical results. The associated paper is at http://www.carma.newcastle.edu.au/jon/papers.html#PAPERS.
Burnside's Theorem characterising transitive permutation groups of prime degree has some wonderful applications for graphs. This week we start an exploration of this topic.
We are holding an afternoon mini-conference, in conjunction with the School of Mathematical and Physical Sciences.
If you are engaged in any of the many outreach activities in the mathematical sciences that people from CARMA, our School and beyond contribute to (for example visiting primary or secondary schools, presenting to schools who visit us, public lectures, media interviews, or helping to run maths competitions), and would like to share what you're doing, please let us know. Also, if you're not currently engaged in an outreach activity but have an idea that you would like to try, and want to use a talk about your idea as a "sounding board", please feel free to do so.
There will be some very short talks: 5 minutes, and some longer talks: 20 minutes, with time for discussion in between. We'll be serving afternoon tea throughout the afternoon; and will have an open discussion forum near the end of the day. If you're interested in giving a talk please contact Judy-anne.Osborn@newcastle.edu.au, indicating whether you'd prefer a 5-minute or a 20-minute slot. If you're simply interested in attending, please let us know as well for catering purposes. The event will be held in one of the function rooms in the Shortland building.
12:05 — Begin, with welcome and lunch
15:45 — Last talk finishes
15:45-16:15 — Open discussion
George is going to start giving some talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, and over several weeks will look at the structure theorems and scale calculations for these examples.
We shall continue exploring implications of Burnside's Theorem for vertex-transitive graphs.
Groundwater makes up nearly 30% of the world's freshwater, but mathematical models for better understanding the system are difficult to validate due to the disordered nature of the porous media and the complex geometry of the flow channels. In this seminar, after establishing the statistical macroscopic equivalent of the Navier-Stokes equations for groundwater hydrodynamics and its consequences in terms of Laplace and diffusion equations, some cases will be solved in terms of special functions by using a modern Computer Algebra System.
Variational methods have been used to derive symmetric solutions for many problems related to real world applications. To name a few we mention periodic solutions to ODEs related to N-body problems and electrical circuits, symmetric solutions to PDEs, and symmetry in derivatives of spectral functions. In this talk we examine the commonalities of using variational methods in the presence of symmetry.
This is an ongoing collaborative research project with Jon Borwein. So far our questions still outnumber our answers.
George is going to continue his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, and over several weeks look at the structure theorems and scale calculations for these examples.
Given a positive integer b, we say that a mathematical constant alpha is "b-normal" or "normal base b" if every m-long string of digits appears in the base-b expansion of alpha with precisely the limiting frequency 1/b^m. Although it is well known from measure theory that almost all real numbers are b-normal for all integers b > 1, nonetheless proving normality (or nonnormality) for specific constants, such as pi, e and log(2), has been very difficult.
In the 21st century, a number of different approaches have been attempted on this problem. For example, a recent study employed a Poisson model of normality to conclude that, based on the first four trillion hexadecimal digits of pi, it is exceedingly unlikely that pi is not normal. In a similar vein, graphical techniques, in most cases based on digit-generated "random" walks, have been successfully employed to detect nonnormality in some cases.
On the analytical front, it was shown in 2001 that the normality of certain reals, including log(2) and pi (or any other constant given by a BBP formula), could be reduced to a question about the behavior of certain specific pseudorandom number generators. Subsequently normality was established for an uncountable class of reals (the "Stoneham numbers"), the simplest of which is: alpha_{2,3} = Sum_{n >= 0} 1/(3^n 2^(3^n)), which is provably normal base 2. Just as intriguing is a recent result that alpha_{2,3}, for instance, is provably NOT normal base 6. These results have now been generalized to some extent, although many open cases remain.
In this talk I will present an introduction to the theory of normal numbers, including brief mention of new graphical- and statistical-based techniques. I will then sketch a proof of the normality base 2 (and nonnormality base 6) of Stoneham numbers, then suggest some additional lines of research. Various parts of this research were conducted in collaboration with Richard Crandall, Jonathan and Peter Borwein, Francisco Aragon, Cristian Calude, Michael Dinneen, Monica Dumitrescu and Alex Yee.
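A quick computation makes the Stoneham example concrete; this sketch evaluates a partial sum of alpha_{2,3} exactly with rational arithmetic and extracts leading base-2 digits (the truncation error of the partial sum is far below the digits extracted).

```python
from fractions import Fraction

def stoneham_alpha(N=6):
    """Partial sum of alpha_{2,3} = sum_{n>=0} 1/(3^n * 2^(3^n));
    the first omitted term is below 10^-200."""
    return sum(Fraction(1, 3 ** n * 2 ** (3 ** n)) for n in range(N))

def digits(x, base, k):
    """First k base-b digits of the fractional part of x (x a Fraction)."""
    out = []
    x -= int(x)
    for _ in range(k):
        x *= base
        d = int(x)
        out.append(d)
        x -= d
    return out

alpha = stoneham_alpha()
d2 = digits(alpha, 2, 200)
print(float(alpha), d2.count(1) / 200)  # value, empirical frequency of 1s
```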
A frequent theme of 21st century experimental math is the computer discovery of identities, typically done by means of computing some mathematical entity (a sum, limit, integral, etc) to very high numeric precision, then using the PSLQ algorithm to identify the entity in terms of well known constants.
Perhaps the most successful application of this methodology has been to identify integrals arising in mathematical physics. This talk will present numerous examples of this type, including integrals from quantum field theory, Ising theory, random walks, 3D lattice problems, and even mouse brains. In some cases, it is necessary to compute these integrals to 3000-digit precision, and developing techniques to do such computations is a daunting technical challenge.
George continues his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, structure theorems and scale calculations for these examples.
This week Brian Alspach will complete the discussion on Burnside's Theorem and vertex-transitive graphs of prime order.
Recently the Alternating Projection Algorithm was extended to CAT(0) spaces. We will look at this and also at current work on extending the Douglas-Rachford Algorithm to CAT(0) spaces. By using CAT(0) spaces, the underlying linear structure of the space is dispensable, and this allows certain algorithms to be extended to spaces such as classical hyperbolic spaces, simply connected Riemannian manifolds of non-positive curvature, R-trees and Euclidean buildings.
In this talk, we study the properties of integral functionals induced on $L_\text{E}^1(S,\mu)$ by closed convex functions on a Euclidean space $E$. We give sufficient conditions for such integral functionals to be strongly rotund (well-posed). We show that in this generality functions such as the Boltzmann-Shannon entropy and the Fermi-Dirac entropy are strongly rotund. We also study convergence in measure and give various limiting counter-examples.
This is joint work with Jon Borwein.
George continues his series of talks on the embedding of Higman-Thompson groups into the groups of almost automorphisms of trees, structure theorems and scale calculations for these examples.
I will discuss four much abused words Interdisciplinarity, Innovation, Collaboration and Creativity. I will describe what they mean for different stakeholder groups and will speak about my own experiences as a research scientist, as a scientific administrator, as an educator and even as a small high-tech businessman. I will also offer advice that can of course be ignored.
Linear Water Wave theory is one of the most important branches of fluid mechanics. Practically, it underpins most of the engineering design of ships, offshore structures, etc. It also has a very rich history in the development of applied mathematics. In this talk I will focus on the connection between solutions in the frequency and time domains and show how we can use various formulations to make numerical calculations and to construct approximate solutions. I will illustrate these methods with applications to some simple wave scattering problems.
We consider the problem of characterising embeddings of an abstract group into totally disconnected locally compact (tdlc) groups. Specifically, for each pair of nonzero integers $m,n$ we construct a tdlc group containing the Baumslag-Solitar group $BS(m,n)$ as a dense subgroup, and compute the scales of elements and flat rank of the tdlc group.
This is joint work with George Willis.
In this talk, we study the properties of integral functionals induced on the Banach space of integrable functions by closed convex functions on a Euclidean space.
We give sufficient conditions for such integral functionals to be strongly rotund (well-posed). We show that in this generality functions such as the Boltzmann-Shannon entropy and the Fermi-Dirac entropy are strongly rotund. We also study convergence in measure and give various limiting counter-examples.
In this talk projection algorithms for solving (nonconvex) feasibility problems in Euclidean spaces are considered. Of special interest are the Method of Alternating Projections (MAP) and the Averaged Alternating Reflection Algorithm (AAR), which cover some of the state-of-the-art algorithms for our intended application, the phase retrieval problem. In the case of convex feasibility, firm nonexpansiveness of projection mappings is a global property that yields global convergence of MAP and, for consistent problems, AAR. Based on epsilon-delta-regularity of sets (Bauschke, Luke, Phan, Wang 2012), a relaxed local version of firm nonexpansiveness with respect to the intersection is introduced for consistent feasibility problems. This, combined with a type of coercivity condition which relates to the regularity of the intersection, yields local linear convergence of MAP for a wide class of nonconvex problems, and even local linear convergence of AAR in more limited nonconvex settings.
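For orientation, the convex prototype of MAP can be sketched in a few lines; the two sets below (the unit disc and a vertical line) are an invented illustrative choice, not the nonconvex phase-retrieval sets of the talk.

```python
# Method of Alternating Projections between two convex sets in R^2.
def project_disc(p):
    """Nearest point of the closed unit disc."""
    x, y = p
    n = (x * x + y * y) ** 0.5
    return p if n <= 1 else (x / n, y / n)

def project_line(p):
    """Nearest point of the vertical line x = 0.5."""
    return (0.5, p[1])

p = (2.0, 2.0)
for _ in range(200):
    p = project_disc(project_line(p))

print(p)  # converges to a point of the intersection, (0.5, sqrt(3)/2)
```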
If some arithmetical sums are small then the complex zeroes of the zeta-function are linearly dependent. Since we don't believe the conclusion we ought not to believe the premise. I will show that the zeroes are 'almost linearly independent' which implies, in particular, that the Mertens conjecture fails more drastically than was previously known.
In this talk, we will show that a D-finite Mahler function is necessarily rational. This gives a new proof of the rational-transcendental dichotomy of Mahler functions due to Nishioka. Using our method of proof, we also provide a new proof of a Pólya-Carlson type result for Mahler functions due to Randé; that is, a Mahler function which is meromorphic in the unit disk is either rational or has the unit circle as a natural boundary. This is joint work with Jason Bell and Eric Rowland.
In 1966 Gallai conjectured that a connected graph of order n can be decomposed into n/2 or fewer paths when n is even, or (n+1)/2 or fewer paths when n is odd. We shall discuss old and new work on this as yet unsolved conjecture.
Motivated by the desire to visualise large mathematical data sets, especially in number theory, we offer various tools for representing floating point numbers as planar walks and for quantitatively measuring their “randomness”.
What to expect: some interesting ideas, many beautiful pictures (including a 108-gigapixel picture of π), and some easy-to-understand maths.
What you won’t get: too many equations, difficult proofs, or any “real walking”.
This is a joint work with David Bailey, Jon Borwein and Peter Borwein.
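A minimal version of the digit-walk construction can be sketched as follows; it uses base-4 digits of sqrt(2) (computed exactly via integer square roots) rather than π, since high-precision digits of π require external libraries.

```python
from math import isqrt

def base4_digits_sqrt2(k):
    """First k base-4 digits of the fractional part of sqrt(2),
    obtained from floor(sqrt(2) * 4^k) by exact integer arithmetic."""
    scaled = isqrt(2 * 4 ** (2 * k))   # floor(sqrt(2) * 4^k)
    frac = scaled - 4 ** k             # drop the leading integer part 1
    return [(frac // 4 ** (k - 1 - i)) % 4 for i in range(k)]

# Interpret each base-4 digit as a unit step E/N/W/S and walk.
STEP = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}
x = y = 0
path = [(0, 0)]
for d in base4_digits_sqrt2(200):
    dx, dy = STEP[d]
    x, y = x + dx, y + dy
    path.append((x, y))
print(path[-1])
```

Plotting `path` for a pseudorandom constant produces the diffusive clouds shown in the talk, while a rational number yields a short periodic orbit.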
Many cognitive models derive their predictions through simulation. This means that it is difficult or impossible to write down a probability distribution or likelihood that characterizes the random behavior of the data as a function of the model's parameters. In turn, the lack of a likelihood means that standard Bayesian analyses of such models are impossible. In this presentation we demonstrate a procedure called approximate Bayesian computation (ABC), a method for Bayesian analysis that circumvents the evaluation of the likelihood. Although they have shown great promise for likelihood-free inference, current ABC methods suffer from two problems that have largely prevented their mainstream adoption: long computation time and an inability to scale beyond models with few parameters. We introduce a new ABC algorithm, called ABCDE, that includes differential evolution as a computationally efficient genetic algorithm for proposal generation. ABCDE is able to obtain accurate posterior estimates an order of magnitude faster than a popular rejection-based method and can scale to high-dimensional parameter spaces that have proven difficult for current rejection-based ABC methods. To illustrate its utility we apply ABCDE to several well-established simulation-based models of memory and decision-making that have never been fit in a Bayesian framework.
AUTHORS: Brandon M. Turner (Stanford University) and Per B. Sederberg (The Ohio State University)
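A minimal likelihood-free rejection sampler, the baseline that ABCDE improves upon, can be sketched as follows; the normal-mean model, flat prior, and tolerance are invented for illustration and are far simpler than the memory and decision-making models of the talk.

```python
import random
import statistics

random.seed(1)

# "Simulation-only" model: we can draw data given theta, but we pretend
# the likelihood is unavailable.
def simulate(theta, n=50):
    return [random.gauss(theta, 1.0) for _ in range(n)]

observed = simulate(3.0)              # data generated with true theta = 3
s_obs = statistics.mean(observed)     # summary statistic

# Plain rejection ABC: draw theta from the prior, keep it when the
# simulated summary lands within eps of the observed one.
accepted = []
for _ in range(20000):
    theta = random.uniform(-10, 10)   # flat prior on [-10, 10]
    if abs(statistics.mean(simulate(theta)) - s_obs) < 0.1:
        accepted.append(theta)

posterior_mean = statistics.mean(accepted)
print(len(accepted), round(posterior_mean, 2))
```

The acceptance rate here is already only a few percent for a single parameter, which is exactly the inefficiency that motivates smarter proposal schemes such as ABCDE.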
We discuss how the title is related to π.
I will give an extended version of my talk at the AustMS meeting about some ongoing work with Pierre-Emmanuel Caprace and George Willis.
Given a locally compact topological group G, the connected component of the identity is a closed normal subgroup G_0 and the quotient group is totally disconnected. Connected locally compact groups can be approximated by Lie groups, and as such are relatively well-understood. By contrast, totally disconnected locally compact (t.d.l.c.) groups are a more difficult class of objects to understand. Unlike in the connected case, it is probably hopeless to classify the simple t.d.l.c. groups, because this would include for instance all simple groups (equipped with the discrete topology). Even classifying the finitely generated simple groups is widely regarded as impossible. However, we can prove some general results about broad classes of (topologically) simple t.d.l.c. groups that have a compact generating set.
Given a non-discrete t.d.l.c. group, there is always an open compact subgroup. Compact totally disconnected groups are residually finite, so have many normal subgroups. Our approach is to analyse a t.d.l.c. group G (which may itself be simple) via normal subgroups of open compact subgroups. From these we obtain lattices and Cantor sets on which G acts, and we can use properties of these actions to demonstrate properties of G. For instance, we have made some progress on the question of whether a compactly generated topologically simple t.d.l.c. group is abstractly simple, and found some necessary conditions for G to be amenable.
We study the problem of finding an interpolating curve passing through prescribed points in the Euclidean space. The interpolating curve minimizes the pointwise maximum length, i.e., L∞-norm, of its acceleration. We re-formulate the problem as an optimal control problem and employ simple but effective tools of optimal control theory. We characterize solutions associated with singular (of infinite order) and nonsingular controls. We reduce the infinite dimensional interpolation problem to an ensuing finite dimensional one and derive closed form expressions for interpolating curves. Consequently we devise numerical techniques for finding interpolating curves and illustrate these techniques on examples.
Infecting Aedes aegypti mosquitoes with Wolbachia has been proposed as an alternative strategy for reducing dengue transmission. If Wolbachia-infected mosquitoes can invade and dominate the population of Aedes aegypti, they can reduce dengue transmission. Cytoplasmic Incompatibility (CI) provides the reproductive advantage through which Wolbachia-infected mosquitoes can reproduce more and dominate the population. A mosquito population model is developed in order to determine the survival of Wolbachia-infected mosquitoes when they are released into the wild. The model has two stable, physically realistic steady states. The model reveals that once the Wolbachia-infected mosquitoes survive, they ultimately dominate the population.
Giuga's conjecture will be introduced, and we will discuss what's changed in the computation of a counterexample in the last 17 years.
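Giuga's criterion is easy to test numerically; the sketch below checks the conjectured equivalence for small n (any counterexample is known to be astronomically large, so the check passes).

```python
def giuga_sum(n):
    """sum_{k=1}^{n-1} k^(n-1) mod n; Giuga's conjecture asserts this is
    congruent to -1 mod n exactly when n is prime."""
    return sum(pow(k, n - 1, n) for k in range(1, n)) % n

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Verify the equivalence for all n below 200; counterexamples, if any,
# are known to have a huge number of digits.
check = all((giuga_sum(n) == n - 1) == is_prime(n) for n in range(2, 200))
print(check)
```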
Automata groups are a class of groups generated by recursively defined automorphisms of a regular rooted tree. Associated to each automata group is an object known as the self-similarity graph. Nekrashevych showed that in the case where the group satisfies a natural condition known as contracting, the self-similarity graph is Gromov-hyperbolic and has boundary homeomorphic to the limit space of the group action. I will talk about self-similarity graphs of automata groups that do not satisfy the contracting condition.
In this talk, we present our ongoing efforts in solving a number of continuous facility location problems that involve sets using recently developed tools of variational analysis and generalized differentiation. Subgradients of a class of nonsmooth functions called minimal time functions are developed and employed to study these problems. Our approach advances the applications of variational analysis and optimization to a well-developed field of facility location, while shedding new light on well-known classical geometry problems such as the Fermat-Torricelli problem, the Sylvester smallest enclosing circle problem, and the problem of Apollonius.
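As a concrete anchor for the Fermat-Torricelli problem mentioned above, here is the classical Weiszfeld iteration; this is a standard textbook method, not the variational-analysis machinery of the talk.

```python
# Weiszfeld's iteration: minimize the sum of distances from (x, y)
# to a finite set of points in the plane.
def weiszfeld(points, iters=200):
    # start from the centroid
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wx = wy = wsum = 0.0
        for px, py in points:
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < 1e-12:          # iterate landed on a data point
                return (x, y)
            wx, wy, wsum = wx + px / d, wy + py / d, wsum + 1.0 / d
        x, y = wx / wsum, wy / wsum
    return (x, y)

# For an equilateral triangle the Fermat-Torricelli point is the centroid.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
print(weiszfeld(pts))   # ~ (0.5, 0.2887)
```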
I will discuss a new algorithm for counting points on hyperelliptic curves over finite fields.
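For contrast with the fast algorithms of the talk, the naive baseline (counting points by direct enumeration) looks like this; the small example curves and primes are chosen only for illustration.

```python
# Naive point count for y^2 = f(x) over F_p by direct enumeration.
def count_points(f_coeffs, p):
    """Affine points plus one point at infinity for y^2 = f(x) over F_p,
    with f given by its coefficients in ascending order."""
    sq = [0] * p                   # sq[v] = number of y with y^2 = v
    for y in range(p):
        sq[y * y % p] += 1
    total = 1                      # the point at infinity (odd degree f)
    for x in range(p):
        fx = sum(c * pow(x, i, p) for i, c in enumerate(f_coeffs)) % p
        total += sq[fx]
    return total

# y^2 = x^3 + x over F_5 (coefficients [0, 1, 0, 1]) has 4 points.
print(count_points([0, 1, 0, 1], 5))
```

This costs O(p) field operations per curve, which is hopeless for cryptographic sizes; that gap is what point-counting algorithms close.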
This talk is an introduction to symbolic convex analysis.
Multi-linear functions appear in many global optimization problems, including reformulated quadratic and polynomial optimization problems. There is an extended formulation for the convex hull of the graph of a multi-linear function that requires the use of an exponential number of variables. Relying on this result, we study an approach that generates relaxations for multiple terms simultaneously, as opposed to methods that relax the nonconvexity of each term individually. In some special cases, we are able to establish analytic bounds on the ratio of the strength of the term-by-term and convex hull relaxations. To our knowledge, these are the first approximation-ratio results for the strength of relaxations of global optimization problems. The results lend insight into the design of practical (non-exponentially sized) relaxations. Computations demonstrate that the bounds obtained in this manner are competitive with the well-known semi-definite programming based bounds for these problems.
Joint work with Jim Luedtke, University of Wisconsin-Madison, and Mahdi Namazifar, now with Opera Solutions.
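The flavour of term-by-term relaxation can be seen in the classical McCormick envelope of a single bilinear term, for which the term relaxation already equals the convex hull; the box bounds below are illustrative.

```python
# McCormick envelope of w = x*y on the box [xl, xu] x [yl, yu].
def mccormick_bounds(x, y, xl, xu, yl, yu):
    under = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    over = min(xu * y + x * yl - xu * yl,
               xl * y + x * yu - xl * yu)
    return under, over

# Sanity check: the envelopes must sandwich x*y everywhere on the box.
xl, xu, yl, yu = -1.0, 2.0, 0.0, 3.0
ok = True
for i in range(13):
    for j in range(13):
        x = xl + (xu - xl) * i / 12
        y = yl + (yu - yl) * j / 12
        lo, hi = mccormick_bounds(x, y, xl, xu, yl, yu)
        ok = ok and lo <= x * y + 1e-9 and x * y <= hi + 1e-9
print(ok)
```

For products of several terms, applying such envelopes one term at a time is what the talk compares against the (exponentially large) simultaneous convex-hull formulation.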
Nonexpansive operators in Banach spaces are of utmost importance in Nonlinear Analysis and Optimization Theory. We are concerned in this talk with classes of operators which are, in some sense, nonexpansive not with respect to the norm, but with respect to Bregman distances. Since these distances are not symmetric in general, it seems natural to distinguish between left and right Bregman nonexpansive operators. Some left classes have already been studied quite intensively, so this talk is mainly devoted to right Bregman nonexpansive operators and the relationship between both classes.
This talk is based on joint works with Prof. Simeon Reich and Shoham Sabach from Technion-Israel Institute of Technology, Haifa.
This is the second part of the informal seminar giving an introduction to symbolic convex analysis. The published paper on which this seminar is mainly based can be found at http://www.carma.newcastle.edu.au/jon/fenchel.pdf.
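As a numeric stand-in for the symbolic computation, one can approximate the Fenchel conjugate f*(y) = sup_x [xy - f(x)] on a grid; the grid range and the test function below are illustrative choices.

```python
# Grid approximation of the Fenchel conjugate f*(y) = sup_x [x*y - f(x)].
def conjugate(f, y, lo=-10.0, hi=10.0, n=20001):
    best = float("-inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        best = max(best, x * y - f(x))
    return best

f = lambda x: 0.5 * x * x       # the energy, which is self-conjugate
vals = [(y, conjugate(f, y)) for y in (-2.0, 0.0, 1.0, 3.0)]
print(vals)                      # each f*(y) should be close to y^2/2
```

The symbolic approach of the talk returns the closed form y^2/2 directly, where the grid method only gives pointwise values.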
Parameterised approximation is a relatively new but growing field of interest. It merges two ways of dealing with NP-hard optimisation problems, namely polynomial approximation and exact parameterised (exponential-time) algorithms.
We explore opportunities for parameterising constant factor approximation algorithms for vertex cover, and we provide a simple algorithm that achieves any approximation ratio of the form $\frac{2l+1}{l+1}$, $l=1,2,\dots$, with complexity that outperforms previously published algorithms by Bourgeois et al. based on sophisticated exact parameterised algorithms. In particular, for $l=1$ (factor-$1.5$ approximation) our algorithm runs in time $\text{O}^*(c^k)$ for a small constant $c$, where the parameter $k \leq \frac{2}{3}\tau$, and $\tau$ is the size of a minimum vertex cover.
Additionally, we present an improved polynomial-time approximation algorithm for graphs of average degree at most four and a limited number of vertices with degree less than two.
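For orientation, the classical factor-2 baseline that these ratios improve upon is the maximal-matching algorithm, sketched here on a 5-cycle (whose minimum cover has 3 vertices).

```python
# Maximal-matching 2-approximation for vertex cover: greedily match
# uncovered edges and take both endpoints of each matched edge.
def vc_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover = vc_2approx(edges)
print(sorted(cover))   # a valid cover of size at most 2 * optimum
```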
Motivated by laboratory studies on the distribution of brain synapses, the classical theory of box integrals - being expectations on unit hypercubes - is extended to a new class of fractal "string-generated Cantor sets" that facilitate fine-tuning of their fractal dimension through a suitable choice of generating string. Closed forms for certain statistical moments on these fractal sets will be presented, together with a precision algorithm for higher embedding dimensions. This is based on joint work with Laur. Prof. Jon Borwein, Prof. David Bailey and Dr. Richard Crandall.
Many problems in diverse areas of mathematics and modern physical sciences can be formulated as a Convex Feasibility Problem, consisting of finding a point in the intersection of finitely many closed convex sets. Two other related problems are the Split Feasibility Problem and the Multiple-Sets Split Feasibility Problem, both very useful when solving inverse problems where constraints are imposed in the domain as well as in the range of a linear operator. We present some recent contributions concerning these problems in the setting of Hilbert spaces along with some numerical experiments to illustrate the implementation of some iterative methods in signal processing.
Automaton semigroups are a natural generalisation of the automaton groups introduced by Grigorchuk and others in the 1980s as examples of groups having various 'exotic' properties. In this talk I will give a brief introduction to automaton semigroups, and then discuss recent joint work with Alan Cain on the extent to which the class of automaton semigroups is closed under certain semigroup constructions (free products and wreath products).
Fundamental questions in basic and applied ecology alike involve complex adaptive systems, in which localized interactions among individual agents give rise to emergent patterns that feed back to affect individual behavior. In such systems, a central challenge is to scale from the "microscopic" to the "macroscopic", in order to understand the emergence of collective phenomena, the potential for critical transitions, and the ecological and evolutionary conflicts between levels of organization. This lecture will explore some specific examples, from universality in bacterial pattern formation to collective motion and collective decision-making in animal groups. It also will suggest that studies of emergence, scaling and critical transitions in physical systems can inform the analysis of similar phenomena in ecological systems, while raising new challenges for theory.
Professor Levin received his B.A. from Johns Hopkins University and his Ph.D. in mathematics from the University of Maryland. At Cornell University (1965-1992), he was Chair of the Section of Ecology and Systematics, and then Director of the Ecosystems Research Center, the Center for Environmental Research and the Program on Theoretical and Computational Biology, as well as Charles A. Alexander Professor of Biological Sciences (1985-1992). Since 1992, he has been at Princeton University, where he is currently George M. Moffett Professor of Biology and Director of the Center for BioComplexity. He retains an Adjunct Professorship at Cornell.
His research interests are in understanding how macroscopic patterns and processes are maintained at the level of ecosystems and the biosphere, in terms of ecological and evolutionary mechanisms that operate primarily at the level of organisms; in infectious diseases; and in the interface between basic and applied ecology.
Simon Levin visits Australia for the first in the Maths of Planet Earth Simons Public Lecture Series. http://mathsofplanetearth.org.au/events/simons/
Let $s_q(n)$ be the sum of the $q$-ary digits of $n$. For example $s_{10}(1729) = 1 + 7 + 2 + 9 = 19$. It is known what $s_q(n)$ looks like "on average". It can be shown that $s_q(n^h)$ looks $h$ times bigger "on average". This raises the question: is the ratio of these two things $h$ on average? In this talk we will give some history on the sum of digits function, and will give a proof of one of Stolarsky's conjectures concerning the minimal values of the ratio of $s_q(n)$ and $s_q(n^h)$.
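The quantities in question are easy to experiment with; this sketch computes $s_q(n)$ and looks at the ratio for $h=2$ empirically (the naive arithmetic mean below is chosen only for illustration, not the precise averaging of the talk).

```python
def s(n, q=10):
    """Sum of the base-q digits of n, e.g. s(1729) = 1+7+2+9 = 19."""
    total = 0
    while n:
        total += n % q
        n //= q
    return total

print(s(1729))   # -> 19

# Empirical look at the ratio s_q(n^h)/s_q(n) for h = 2 in base 10.
ratios = [s(n ** 2) / s(n) for n in range(2, 10000)]
avg = sum(ratios) / len(ratios)
print(round(avg, 3))   # hovers around h = 2
```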
Three ideas --- active sets, steepest descent, and smooth approximations of functions --- permeate nonsmooth optimization. I will give a fresh perspective on these concepts, and illustrate how many results in these areas can be strengthened in the semi-algebraic setting. This is joint work with A.D. Ioffe (Technion), A.S. Lewis (Cornell), and M. Larsson (EPFL).
After Gromov's work in the 1980s, the modern approach to studying infinite groups is from the geometric point of view, seeing them as metric spaces and using geometric concepts. One of these is the concept of distortion of a subgroup in a group. Here we will give the definition and some examples of distorted and nondistorted subgroups and some recent results on them. The main tools used to establish these results are quasi-metrics or metric estimates, which are quantities which differ from the distance by a multiplicative constant, but which still capture the concept enough to understand distortion.
The joint spectral radius of a finite set of real $d \times d$ matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the finiteness property if there exists a periodic product which achieves this maximal rate of growth. J. C. Lagarias and Y. Wang conjectured in 1995 that every finite set of real $d \times d$ matrices satisfies the finiteness property. However, T. Bousch and J. Mairesse proved in 2002 that counterexamples to the finiteness conjecture exist, showing in particular that there exists a family of pairs of $2 \times 2$ matrices which contains a counterexample. Similar results were subsequently given by V. D. Blondel, J. Theys and A. A. Vladimirov and by V. S. Kozyakin, but no explicit counterexample to the finiteness conjecture was given. This talk will discuss an explicit counterexample to this conjecture.
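To make the definition concrete, here is a hedged sketch (my own illustration, not from the talk) of the standard lower-bound procedure: every product $P$ of length $L$ gives the bound $\rho(P)^{1/L}$ on the joint spectral radius, and the finiteness property asks that some periodic product attain it. The example pair is a classical one whose joint spectral radius is the golden ratio, attained by the periodic product $AB$.

```python
# Brute-force lower bound for the joint spectral radius of a set of
# 2x2 matrices: the maximum of rho(P)**(1/L) over products P of
# length L <= max_len.  Any such value is a valid lower bound.
from itertools import product as words
from math import sqrt

def matmul(X, Y):
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def spectral_radius(M):
    (a, b), (c, d) = M
    disc = (a - d)**2 + 4*b*c
    if disc >= 0:                      # real eigenvalues
        return max(abs((a + d + sqrt(disc)) / 2),
                   abs((a + d - sqrt(disc)) / 2))
    return sqrt(abs(a*d - b*c))        # complex pair: modulus = sqrt(|det|)

def jsr_lower_bound(mats, max_len=8):
    best = 0.0
    for L in range(1, max_len + 1):
        for w in words(mats, repeat=L):
            P = w[0]
            for M in w[1:]:
                P = matmul(P, M)
            best = max(best, spectral_radius(P) ** (1.0 / L))
    return best

# Classical example: for this pair the joint spectral radius is the
# golden ratio, attained by the periodic product AB.
A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))
print(jsr_lower_bound([A, B]))   # ~1.6180
```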
In 1997, Kaneko introduced the poly-Bernoulli numbers. Poly-Euler numbers are introduced as a generalization of the Euler numbers in a manner similar to the introduction of the poly-Bernoulli numbers. In my talk, some properties of poly-Euler numbers will be shown, for example, explicit formulas, sign change, a Clausen-von Staudt type formula, combinatorial interpretations and so on.
This is joint work with Yasuo Ohno.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
We prove that if $q\ne0,\pm1$ and $\ell\ge1$ are fixed integers, then the numbers $$ 1, \quad \sum_{n=1}^\infty\frac{1}{q^n-1}, \quad \sum_{n=1}^\infty\frac{1}{q^{n^2}-1}, \quad \dots, \quad \sum_{n=1}^\infty\frac{1}{q^{n^\ell}-1} $$ are linearly independent over $\mathbb{Q}$. This generalizes a result of Erdős who treated the case $\ell=1$. The method is based on the original approaches of Chowla and Erdős, together with results of Alford, Granville and Pomerance on primes in arithmetic progressions with large moduli.
This is joint work with Yohei Tachiya.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
The desire to understand $\pi$, the challenge, and originally the need, to calculate ever more accurate values of $\pi$, the ratio of the circumference of a circle to its diameter, has captured mathematicians - great and less great - for many, many centuries. And, especially recently, $\pi$ has provided compelling examples of computational mathematics. $\pi$, uniquely in mathematics, is pervasive in popular culture and the popular imagination. In this lecture I shall intersperse a largely chronological account of $\pi$'s mathematical and numerical status with examples of its ubiquity. It is truly a number for Planet Earth.
I am grateful to have been appointed in a role with a particular focus on First Year Teaching as well as a research mandate. The prospect of trying to do both well is daunting but exciting. I have begun talking with some of my colleagues in somewhat similar roles at other universities in Australia and overseas about what they do. I would like to share what I've learnt, as well as some of my thoughts so far about how this new role might evolve. I am also very interested in input from the Maths discipline or indeed any of my colleagues as to what you think is important and how this role can benefit the maths discipline and our school.
Reaction-diffusion processes occur in many materials with microstructure such as biological cells, steel or concrete. The main difficulty in modelling and simulating accurately such processes is to account for the fine microstructure of the material. One method of upscaling multi-scale problems, which has proven reliable for obtaining feasible macroscopic models, is the method of periodic homogenisation.
The talk will give an introduction to multi-scale modelling of chemical mechanisms in domains with microstructure as well as to the method of periodic homogenisation. Moreover, a few aspects of solving the resulting systems of equations numerically will also be discussed.
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
We discuss some recently discovered relations between L-values of modular forms and integrals involving the complete elliptic integral K. Gentle and illustrative examples will be given. Such relations also lead to closed forms of previously intractable integrals and (chemical) lattice sums.
I will survey what is known and some of the open questions.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html
I will survey what is known and some of the open questions.
Modern mathematics suffers from subtle but serious logical problems connected with the widespread use of infinite sets and the non-computational aspects of real numbers. The result is an ever-widening gap between the theories of pure mathematics and the computations available to computer scientists.
In this talk we discuss a new approach to mathematics that aims to remove many of the logical difficulties by returning our focus to the all-important aspect of the rational numbers and polynomial arithmetic. The key is rational trigonometry, which shows how to rethink the fundamentals of trigonometry and metrical geometry in a purely algebraic way, opens the door to more general non-Euclidean geometries, and has numerous concrete applications for computer scientists interested in graphics and robotics.
The classical prolate spheroidal wavefunctions (prolates) arise when solving the Helmholtz equation by separation of variables in prolate spheroidal coordinates. They interpolate between Legendre polynomials and Hermite functions. In a beautiful series of papers published in the Bell Labs Technical Journal in the 1960s, they were rediscovered by Landau, Slepian and Pollak in connection with the spectral concentration problem. After years spent out of the limelight while wavelets drew the focus of mathematicians, physicists and electrical engineers, the popularity of the prolates has recently surged through their appearance in certain communication technologies. In this talk we outline some developments in the sampling theory of bandlimited signals that employ the prolates, and the construction of bandpass prolate functions.
This is joint work with Joe Lakey (New Mexico State University).
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
We introduce and study a new dual condition which characterizes zero duality gap in nonsmooth convex optimization. We prove that our condition is weaker than all existing constraint qualifications, including the closed epigraph condition. Our dual condition was inspired by, and is weaker than, the so-called Bertsekas’ condition for monotropic programming problems. We give several corollaries of our result and special cases as applications. We pay special attention to the polyhedral and sublinear cases, and their implications in convex optimization.
This research is a joint work with Jonathan M. Borwein and Liangjin Yao.
Vulnerability measures the resistance of a network to disruptions of its links or nodes. Since any network can be modelled by a graph, many vulnerability measures have been defined to observe the resistance of networks. For this purpose, vulnerability measures such as connectivity, integrity, toughness, etc., have been studied widely over all vertices of a graph. Recently, many researchers have begun to study vulnerability measures defined over only those vertices or edges of a graph which have a special property, rather than over all vertices of the graph.
Independent domination, connected domination and total domination are examples of such measures. The total accessibility number of a graph is defined as a new measure by choosing accessible sets $S \subset V$, that is, sets with the special property of accessibility. The total accessibility number of a graph $G$ is based on the accessibility number of the graph, where the subsets $S$ are the accessible sets. The accessibility number of a connected graph $G$ is a concept based on the neighbourhood relation between any two vertices, via another vertex connected to both of them.
Graph automatic groups are an extension of the notion of an automatic group, introduced by Kharlampovich, Khoussainov and Miasnikov in 2011 with the intention of capturing a wider class of groups while preserving computational properties such as having a quadratic time word problem. We extend the notion further by replacing regular languages with more general language classes. We prove that nonsolvable Baumslag-Solitar groups are (context free)-graph automatic, that (context sensitive)-graph automatic implies a context-sensitive word problem, and conversely that groups with context-sensitive word problem are (context sensitive)-graph automatic. Finally, an obstruction to (context sensitive)-graph automatic implying a polynomial time word problem is given.
This is joint work with Jennifer Taback, Bowdoin College.
Spatial patterns of events that occur on a network of lines, such as traffic accidents recorded on a street network, present many challenges to a statistician. How do we know whether a particular stretch of road is a "black spot", with a higher-than-average risk of accidents? How do we know which aspects of road design affect accident risk? These important questions cannot be answered satisfactorily using current techniques for spatial analysis. The core problem is that we need to take account of the geometry of the road network. Standard methods for spatial analysis assume that "space" is homogeneous; they are inappropriate for point patterns on a linear network, and give fallacious results. To make progress, we must abandon some of the most cherished assumptions of spatial statistics, with far-reaching implications for statistical methodology.
The talk will describe the first few steps towards a new methodology for analysing point patterns on a linear network. Ingredients include stochastic processes, discrete graph theory and classical partial differential equations as well as statistical methodology. Examples come from ecology, criminology and neuroscience.
In this talk we introduce a Douglas-Rachford inspired projection algorithm, the cyclic Douglas-Rachford iteration scheme. We show, unlike the classical Douglas-Rachford scheme, that the method can be applied directly to convex feasibility problems in Hilbert space without recourse to a product space formulation. Initial results, from numerical experiments comparing our methods to the classical Douglas-Rachford scheme, are promising.
This is joint work with Prof. Jonathan Borwein.
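As a hedged illustration (my own sketch, not the authors' implementation), the two-set Douglas-Rachford operator $T_{A,B} = \frac{1}{2}(I + R_B R_A)$, with reflections $R_C = 2P_C - I$, can be composed cyclically for a small feasibility problem with balls in the plane. The sets, composition order and iteration count below are toy assumptions for demonstration only.

```python
# Cyclic Douglas-Rachford sketch for three overlapping balls in R^2:
# one cycle applies the two-set operators T_{C1,C2}, T_{C2,C3}, T_{C3,C1}.
from math import hypot

def proj_ball(center, radius, x):
    dx, dy = x[0] - center[0], x[1] - center[1]
    d = hypot(dx, dy)
    if d <= radius:
        return x
    return (center[0] + radius * dx / d, center[1] + radius * dy / d)

def reflect(P, x):
    px, py = P(x)
    return (2 * px - x[0], 2 * py - x[1])

def dr_step(PA, PB, x):
    # two-set Douglas-Rachford operator T_{A,B} = (I + R_B R_A)/2
    y = reflect(PB, reflect(PA, x))
    return ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)

def cyclic_dr(projs, x, cycles=500):
    n = len(projs)
    for _ in range(cycles):
        for i in range(n):
            x = dr_step(projs[i], projs[(i + 1) % n], x)
    return x

balls = [((0.0, 0.0), 1.0), ((1.0, 0.0), 1.0), ((0.5, 0.5), 1.0)]
projs = [lambda x, c=c, r=r: proj_ball(c, r, x) for c, r in balls]
x = cyclic_dr(projs, (3.0, -2.0))
print(proj_ball(*balls[0], x))  # near a point of the intersection
```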
We will discuss the substantial mathematical, computational, historical and philosophical aspects of this celebrated and controversial theorem. Much of this talk should be accessible to undergraduates, but we will also discuss some of the crucial details of the actual revision by Robertson, Sanders, Seymour and Thomas of the original Appel-Haken computer proof. We will additionally cover recent new computer proofs by Gonthier, and by Steinberger, and also the generalisations of the theorem by Hajós and Hadwiger which are currently still open. New software developed by the speaker will be used to visually illustrate many of the subtle points involved, and we will examine the air of controversy that still surrounds existing computer proofs. Finally, the prospect of a human proof will be canvassed.
ABOUT THE SPEAKER: Mr Michael Reynolds has a Masters degree in Maths and extensive experience in the software industry. He is currently doing his PhD in Graph Theory at the University of Newcastle.
In response to a recent report from Australia's Chief Scientist (Prof Ian Chubb), the Australian government recently sought applications from consortia of universities (and other interested parties) interested in developing pre-service programs that will improve the quality of mathematics and science school teachers. In particular, the programs should:
At UoN, a group of us from Education and MAPS produced the outline of a vision for our own BTeach/BMath program which builds on local strengths. In the context of very tight timelines, this became a part of an application together with five other universities. In this seminar we will outline the vision that we produced, and invite further contributions and participation, with a view to improving the BMath/BTeach program regardless of the outcome of the application of which we are a part.
We continue on the Probabilistic Method, looking at Chapter 4 of Alon and Spencer. We will consider the second moment method, Chebyshev's inequality, Markov's inequality and Chernoff's inequality.
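As a quick numerical companion (my own example, not from the lectures), the Markov and Chebyshev bounds can be checked exactly against a small binomial distribution:

```python
# Exact tail probabilities of Bin(20, 1/2) versus the Markov and
# Chebyshev upper bounds.
from math import comb

n, p = 20, 0.5
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
mu = sum(k * pmf[k] for k in range(n + 1))            # mean, = 10
var = sum((k - mu)**2 * pmf[k] for k in range(n + 1)) # variance, = 5

# Markov: P(X >= a) <= mu/a for a > 0
a = 15
tail = sum(pmf[k] for k in range(a, n + 1))
print(tail, "<=", mu / a)

# Chebyshev: P(|X - mu| >= t) <= var/t^2
t = 4
dev = sum(pmf[k] for k in range(n + 1) if abs(k - mu) >= t)
print(dev, "<=", var / t**2)
```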
Our most recent computations tell us that any counterexample to Giuga’s 1950 primality conjecture must have at least 19,907 digits. Equivalently, any number which is both a Giuga and a Carmichael number must have at least 19,907 digits. This bound has not been achieved through exhaustive testing of all numbers with up to 19,907 digits, but rather through exploitation of the properties of Giuga and Carmichael numbers. We introduce the conjecture and an algorithm for finding lower bounds to a counterexample, then present our recent results and discuss challenges to further computation.
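For readers who want to see the conjecture in action, the following sketch (my own, not the authors' search code) checks Giuga's criterion for small $n$: the conjecture asserts that $n>1$ is prime if and only if $\sum_{k=1}^{n-1} k^{n-1} \equiv -1 \pmod n$.

```python
# Giuga's 1950 criterion, verified against trial division for n < 200.
# The open conjecture is that no composite n satisfies the congruence.
def giuga_sum(n):
    return sum(pow(k, n - 1, n) for k in range(1, n)) % n

for n in range(2, 200):
    satisfies = giuga_sum(n) == n - 1          # congruence == -1 (mod n)
    is_prime = all(n % d for d in range(2, int(n**0.5) + 1))
    assert satisfies == is_prime
print("Giuga's criterion verified for n < 200")
```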
Network infrastructures are a common phenomenon. Network upgrades and expansions typically occur over time due to budget constraints. We introduce a class of incremental network design problems that allow investigation of many of the key issues related to the choice and timing of infrastructure expansions and their impact on the costs of the activities performed on that infrastructure. We examine three variants: incremental network design with shortest paths, incremental network design with maximum flows, and incremental network design with minimum spanning trees. We investigate their computational complexity, we analyse the performance of natural heuristics, we derive approximation algorithms and we study integer programming formulations.
The degree/diameter problem in graph theory is a theoretical problem with applications in network design. The problem is to find the maximum possible number of nodes in a network, subject to a limit on the number of links attached to any node and a limit on the largest number of links that must be traversed when a message is sent from one node to another inside the network. An upper bound, known as the Moore bound, is available for this problem. The graphs that attain the bound are called Moore graphs.
In this talk we give an overview of the existing Moore graphs and we discuss the existence of a Moore graph of degree 57 with diameter 2 which has been an open problem for more than 50 years.
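For reference, the Moore bound for maximum degree $d \ge 3$ and diameter $k$ is $1 + d\sum_{i=0}^{k-1}(d-1)^i$; the tiny illustration below (mine, not from the talk) evaluates it for the Petersen graph case and for the open degree-57 case.

```python
# Moore bound: count vertices reachable within distance k from a root
# in a tree where every vertex has degree d.
def moore_bound(d, k):
    return 1 + d * sum((d - 1)**i for i in range(k))

print(moore_bound(3, 2))   # 10: attained by the Petersen graph
print(moore_bound(57, 2))  # 3250: the open case of degree 57, diameter 2
```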
In this talk, we study the rate of convergence of the cyclic projection algorithm applied to finitely many semi-algebraic convex sets. We establish an explicit convergence rate estimate which relies on the maximum degree of the polynomials that generate the semi-algebraic convex sets and the dimension of the underlying space. We achieve our results by exploiting the algebraic structure of the semi-algebraic convex sets.
This is joint work with Jon Borwein and Guoyin Li.
W. T. Tutte published a paper in 1963 entitled "How to Draw a Graph". Tutte's motivation was mathematical, and his paper can be seen as a contribution to the long tradition of geometric representations of combinatorial objects.
Over the following 40-odd years, the motivation for creating visual representations of graphs has changed from mathematical curiosity to visual analytics. Current demand for graph drawing methods is now high, because of the potential for more human-comprehensible visual forms in industries as diverse as biotechnology, homeland security and sensor networks. Many new methods have been proposed, tested, implemented, and found their way into commercial tools. This paper describes two strands of this history: the force directed approach, and the planarity approach. Both approaches originate in Tutte's paper.
Further, we demonstrate a number of methods for graph visualization that can be derived from the weighted version of Tutte's method. These include results on clustered planar graphs, edge-disjoint paths, an animation method, interactions such as adding/deleting vertices/edges, and a focus-plus-context view method.
This will be a short course of lectures. See http://maths-people.anu.edu.au/~brent/probabilistic.html.
Presenters: Judy-anne Osborn, Ben Brawn, Mick Gladys.
Eric Mazur is a Harvard physicist who has become known for the strategies he introduced for teaching large first year service (physics) classes in a way that seems to improve the students' conceptual understanding of the material whilst not hurting their exam performance. Implementations of these ideas include the use of clicker-like technology (Mick Gladys will talk about his own implementation using mobile phones) as well as lower-tech card-based analogues. We will screen a Youtube video in which Professor Mazur explains his ideas, and then describe how we have adapted some of them in maths and physics.
In trajectory optimization, the optimal path of a flight system or a group of flight systems is searched for, often in an interplanetary setting: we are in search of trajectories for one or more spacecraft. On the one hand, this is a well-developed field of research, in which commercial software packages are already available for various scenarios. On the other hand, the computation of such trajectories can be rather demanding, especially when low-thrust missions with long travel times (e.g., years) are considered. Such missions invariably involve gravitational slingshot maneuvers at various celestial bodies in order to save propellant or time. Such maneuvers involve vastly different time scales: years of coasting can be followed by course corrections on a daily basis. In this talk, we give an overview of trajectory optimization for space vehicles and highlight some recent algorithmic developments.
You are invited to a celebration of the 21st anniversary of the Factoring Lemma. This lemma was the key to solving some long-standing open problems, and was the starting point of an investigation of totally disconnected, locally compact groups that has ensued over the last 20 years. In this talk, the life of the lemma will be described from its conception through to a very recent strengthening of it. It will be described at a technical level, as well as viewed through its relationships with topology, geometry, combinatorics, algebra, linear algebra and research grants.
A birthday cake will be served afterwards.
Please make donations to the Mathematics Prize Fund in lieu of gifts.
Given a set T of the Euclidean space, whose elements are called sites, and a particular site s, the Voronoi cell of s is the set formed by all points closer to s than to any other site. The Voronoi diagram of T is the family of Voronoi cells of all the elements of T. In this talk we show some applications of the Voronoi diagrams of finite and infinite sets and analyze direct and inverse problems concerning the cells. We also discuss the stability of the cells under different types of perturbations and the effect of assigning weights to the sites.
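As a minimal illustration (my own, not from the talk), a point lies in the Voronoi cell of a site exactly when that site is among its nearest sites:

```python
# Voronoi cell membership for a finite set of sites in the plane:
# y is in the cell of s iff no other site is strictly closer to y.
from math import dist  # Python 3.8+

sites = [(0, 0), (4, 0), (0, 3)]

def in_cell(s, y):
    return dist(s, y) <= min(dist(t, y) for t in sites)

print(in_cell((0, 0), (1, 1)))   # True: (0,0) is the nearest site
print(in_cell((4, 0), (1, 1)))   # False
```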
Geodesic metric spaces provide a setting in which we can develop much of nonlinear, and in particular convex, analysis in the absence of any natural linear structure. For instance, in a state space it often makes sense to speak of the distance between two states, or even a chain of connecting intermediate states, whereas the addition of two states makes no sense at all.
We will survey the basic theory of geodesic metric spaces, and in particular Gromov's so-called CAT($\kappa$) spaces. And if there is time (otherwise in a later talk), we will examine some recent results concerning alternating projection type methods, principally the Douglas--Rachford algorithm, for solving the two set feasibility problem in such spaces.
In a recent referee report, the referee said he/she could not understand the proofs of either of the two main results. Come and judge for yourself! This is joint work with Darryn Bryant and Don Kreher.
Complex (and Dynamical) Systems
A Data-Based View of Our World
Population censuses and the human face of Australia
Scientific Data Mining
Earth System Modeling
Mitigating Natural Disaster Risk
Sustainability – Environmental modelling
BioInvasion and BioSecurity
Realising Our Subsurface Potential
Abstract submission closes 31st May, 2013.
For more information, visit the conference website.
Roughly speaking, an automorphism $a$ of a graph $G$ is geometric if there is a drawing $D$ of $G$ such that $a$ induces a symmetry of $D$; if $D$ is planar then $a$ is planar. In this talk we discuss geometric and planar automorphisms. In particular we sketch a linear time algorithm for finding a planar drawing of a planar graph with maximum symmetry.
We show that a combination of two simple preprocessing steps would generally improve the conditioning of a homogeneous system of linear inequalities. Our approach is based on a comparison among three different notions of condition numbers for linear inequalities.
The talk is based on joint work with Javier Peña and Negar Soheili (Carnegie-Mellon University).
Overview of Course Content
The classical regularity theory is centred around the implicit and Lyusternik-Graves theorems, on the one hand, and the Sard theorem and transversality theory, on the other. The theory (and a number of its applications to various problems of variational analysis) to be discussed in the course deals with similar problems for non-differentiable and set-valued mappings. This theory grew out of demands that came from needs of (mainly) optimization theory and subsequent understanding that some key ideas of the classical theory can be naturally expressed in purely metric terms without mention of any linear and/or differentiable structures.
Topics to be covered
The "theory" part of the course consists of five sections:
Formally, a basic knowledge of functional analysis, plus some acquaintance with convex analysis and nonlinear analysis in Banach spaces (e.g. Fréchet and Gâteaux derivatives, the implicit function theorem), will be sufficient for understanding the course. An understanding of the interplay between analytic and geometric concepts would be very helpful.
I will explain how the probabilistic method can be used to obtain lower bounds for the Hadamard maximal determinant problem, and outline how the Lovász local lemma (Alon and Spencer, Corollary 5.1.2) can be used to improve the lower bounds.
This is a continuation of last semester's lectures on the probabilistic method, but is intended to be self-contained.
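As a toy illustration of the probabilistic flavour of such bounds (my own sketch, not the method of the talk), one can sample random $\pm1$ matrices and compare the best determinant found with Hadamard's bound $n^{n/2}$:

```python
# Sample random {-1,1} matrices of order n and record the largest
# |determinant| seen; Hadamard's inequality says it cannot exceed n^(n/2).
import random

def det(M):  # Laplace expansion along the first row; fine for tiny n
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1)**j * M[0][j] * det(minor)
    return total

random.seed(1)
n = 5
best = max(abs(det([[random.choice((-1, 1)) for _ in range(n)]
                    for _ in range(n)])) for _ in range(2000))
print(best, "<=", n**(n / 2))  # Hadamard bound ~ 55.9 for n = 5
```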
The finite element method has become the most powerful approach in approximating solutions of partial differential equations arising in modern engineering and physical applications. We present some efficient finite element methods for Reissner-Mindlin, biharmonic and thin plate equations.
In the first part of the talk I present some applied partial differential equations, and introduce the finite element method using the biharmonic equation. In the second part of the talk I will discuss the finite element method for Reissner-Mindlin, biharmonic and thin plate spline equations in a unified framework.
Yes! Finally there is some discrete maths in the high school curriculum! Well, perhaps.
In this talk I will go over the inclusion of discrete mathematics content in the new national curriculum, the existing plans for its implementation, what this will mean for high school teachers, and brainstorm ideas for helping out, if they need our help. I will also talk about "This is Megamathematics" and perhaps, if we have time, we can play a little bit with "Electracity".
This talk deals with problems that are asymptotically related to best-packing and best-covering. In particular, we discuss how to efficiently generate N points on a d-dimensional manifold that have the desirable qualities of well-separation and optimal order covering radius, while asymptotically having a prescribed distribution. Even for certain small numbers of points like N=5, optimal arrangements with regard to energy and polarization can be a challenging problem.
I will report on work I performed with Jim Zhu over the past three years on how to exploit different forms of symmetry in variational analysis. Various open problems will be flagged.
This talk is available at http://carma.newcastle.edu.au/jon/symva-talk.pdf and the related paper is at http://carma.newcastle.edu.au/jon/symmetry.pdf. It has recently appeared in Advances in Nonlinear Analysis.
I will discuss Symmetric criticality and the Mountain pass lemma. I will provide the needed background for anyone who did not come to Part 1.
This talk is available at http://carma.newcastle.edu.au/jon/symva-talk.pdf and the related paper is at http://carma.newcastle.edu.au/jon/symmetry.pdf. It has recently appeared in Advances in Nonlinear Analysis.
Do you ever wonder what goes on behind the closed doors of some of your professors? Or colleagues? What kind of stuff can I do for my Honours degree? Or my RHD studies? Well, let these wonders cease!
This sequence of talks will expose the greatest (mathematical) desires of mathematicians at Newcastle, highlighting several areas of current research from the purest of the pure to the most applicable of the applied. Talks will aim to be accessible to undergraduates (mostly), or anyone with a desire to learn more mathematics.
Program: The feasibility problem associated with nonempty closed convex sets $A$ and $B$ is to find some $x\in A \cap B$.
Projection algorithms in general aim to compute such a point.
These algorithms play key roles in optimization and have many applications outside mathematics - for example in medical imaging.
Until recently convergence results were only available in the setting of linear spaces (more particularly, Hilbert spaces) and where the two sets are closed and convex.
The extension into geodesic metric spaces allows their use in spaces where there is no natural linear structure, which is the case for instance in tree spaces, state spaces, phylogenomics and configuration spaces for robotic movements.
After reviewing the pertinent aspects of CAT(0) spaces introduced in Part I, including results for von Neumann's alternating projection method, we will focus on the Douglas-Rachford algorithm in CAT(0) spaces. Two situations arise: spaces with constant curvature and those with non-constant curvature. A prototypical space of the latter kind will be introduced and the behaviour of the Douglas-Rachford algorithm within it examined.
Do you ever wonder what goes on behind the closed doors of some of your professors? Or colleagues? What kind of stuff can I do for my Honours degree? Or my RHD studies? Well, let these wonders cease!
This sequence of talks will expose the greatest (mathematical) desires of mathematicians at Newcastle, highlighting several areas of current research from the purest of the pure to the most applicable of the applied. Talks will aim to be accessible to undergraduates (mostly), or anyone with a desire to learn more mathematics.
Programme: The talk will be about new results on modular forms obtained by the speaker in collaboration with Shaun Cooper.
Universities are facing a tumultuous time with external regulation through TEQSA and the rise of MOOCs (Massive Open Online Courses). Disciplines within universities face the challenge of doing research, as well as producing a range of graduates capable of undertaking diverse careers. These are not new challenges. The emergence of MOOCs has raised the question, 'Why go to a University?' These tumultuous times provide a threat as well as an opportunity. How do we balance our activities? Do teaching and learning need to be re-conceptualised? Is it time to seriously consider the role of education and the 'value-add' university education provides? This talk will provide snapshots of work that demonstrate the value-add universities do provide. Evidence is used to challenge current understandings and to chart a way forward.
The aim of this Douglas-Rachford brainstorming session is to discuss:
-New applications and large scale experiments
-Diagnosing and profiling successful non-convex applications
-New conjectures
-Anything else you may think is relevant
Let spt(n) denote the number of smallest parts in the partitions of n. In 2008, Andrews found surprising congruences for the spt-function mod 5, 7 and 13. We discuss new congruences for spt(n) mod powers of 2. We give new generating function identities for the spt-function and Dyson's rank function. Recently with Andrews and Liang we found a spt-crank function that explains Andrews' spt-congruences mod 5 and 7. We extend these results by finding spt-cranks for various overpartition-spt-functions of Ahlgren, Bringmann, Lovejoy and Osburn. This most recent work is joint with Chris Jennings-Shaffer.
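To fix ideas, here is a brute-force sketch (mine, not from the talk) computing spt(n) directly from the partitions of n, together with the first of Andrews' congruences, spt(5n+4) ≡ 0 (mod 5):

```python
# spt(n): the total number of smallest parts in all partitions of n.
def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def spt(n):
    return sum(p.count(min(p)) for p in partitions(n))

print([spt(n) for n in range(1, 7)])   # [1, 3, 5, 10, 14, 26]
print(spt(4) % 5, spt(9) % 5)          # both 0, illustrating spt(5n+4) ≡ 0 (mod 5)
```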
Image processing research is dominated, to a considerable degree, by linear-additive models of images. For example, wavelet decompositions are very popular both with experimentalists and theoreticians primarily because of their neatly convergent properties. Fourier and orthogonal series decompositions are also popular in applications, as well as playing an important part in the analysis of wavelet methods.
Multiplicative decomposition, on the other hand, has had very little use in image processing. In 1-D signal processing and communication theory it has played a vital part (amplitude, phase, and frequency modulations of communications theory especially).
In many cases 2-D multiplicative decompositions have just been too hard to formulate or expand. Insurmountable problems (divergences) often occur as the subtle consequences of unconscious errors in the choice of mathematical structure. In my work over the last 17 years I've seen how to overcome some of the problems in 2-D, and the concept of phase is a central, recurring theme. But there is still so much more to be done in 2-D and higher dimensions.
This talk will be a whirlwind tour of some main ideas and applications of phase in imaging.
(Joint work with Konrad Engel and Martin Savelsbergh)
In an incremental network design problem we want to expand an existing network over several time periods, and we are interested in some quality measure for all the intermediate stages of the expansion process. In this talk, we look at the following simple variant: In each time period, we are allowed to add a single edge, the cost of a network is the weight of a minimum spanning tree, and the objective is to minimize the sum of the costs over all time periods. We describe a greedy algorithm for this problem and sketch a proof of the fact that it provides an optimal solution. We also indicate that incremental versions of other basic network optimization problems (shortest path and maximum flow) are NP-hard.
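As a hedged illustration of the model (the graph, weights and greedy rule below are my own toy assumptions, not necessarily the algorithm analysed in the talk): starting from a connected but expensive network, in each period we add the absent edge that minimises the resulting minimum spanning tree weight, and we sum the MST costs over all periods.

```python
# Incremental MST toy model: one edge added per period; the cost of a
# period is the weight of a minimum spanning tree of the current network.
def mst_weight(n, edges):
    # Kruskal's algorithm with union-find (path halving)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    w = 0
    for c, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            w += c
    return w

n = 4
built = [(10, 0, 1), (10, 1, 2), (10, 2, 3)]   # expensive initial tree
candidates = [(1, 0, 2), (2, 1, 3), (3, 0, 3)] # cheap upgrade edges
total = mst_weight(n, built)                   # cost of the initial period
while candidates:
    best = min(candidates, key=lambda e: mst_weight(n, built + [e]))
    candidates.remove(best)
    built.append(best)
    total += mst_weight(n, built)
print(total)  # 30 + 21 + 13 + 6 = 70 for this toy instance
```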
In his deathbed letter to G.H. Hardy, Ramanujan gave a vague definition of a mock modular function: at each root of unity its asymptotics match those of a modular form, though the choice of modular form depends on the root of unity. Recently Folsom, Ono and Rhoades have proved an elegant result about the match for a general family related to Dyson's rank (mock theta) function and the Andrews-Garvan crank (modular) function. In my talk I will outline some heuristics and elementary ingredients of the proof.
Joint work with David Wood (Monash University, Australia) and Eran Nevo (Ben-Gurion University of the Negev, Israel).
The maximum number of vertices of a graph of maximum degree $\Delta\ge 3$ and diameter $k\ge 2$ is upper bounded by $\Delta^{k}$. If we restrict our graphs to certain classes, better upper bounds are known. For instance, for the class of trees there is an upper bound of $2\Delta^{\lfloor k/2\rfloor}$. The main result of this paper is that, for large $\Delta$, graphs embedded in surfaces of bounded Euler genus $g$ behave like trees. Specifically, we show that, for large $\Delta$, such graphs have orders bounded from above by
\[
\begin{cases} (c_0g+c_1)\Delta^{\lfloor k/2\rfloor} & \text{if $k$ is even}\\
(c_0g^2+c_1)\Delta^{\lfloor k/2\rfloor} & \text{if $k$ is odd}
\end{cases}
\]
where $c_0,c_1$ are absolute constants.
With respect to lower bounds, we construct graphs of Euler genus $g$, odd diameter and orders $(c_0\sqrt{g}+c_1)\Delta^{\lfloor k/2\rfloor}$, for absolute constants $c_0,c_1$.
Our results answer in the negative a conjecture by Miller and Širáň (2005). Before this paper, there were constructions of graphs of Euler genus $g$ and orders $c_0\Delta^{\lfloor k/2\rfloor}$ for an absolute constant $c_0$. Also, Šiagiová and Simanjuntak (2004) provided an upper bound of $(c_0g+c_1)k\Delta^{\lfloor k/2\rfloor}$ with absolute constants $c_0,c_1$.
I will talk about the metrical theory of Diophantine approximation associated with linear forms that are simultaneously small in terms of absolute value rather than the classical nearest integer norm. In other words, we consider linear forms which are simultaneously close to the origin. A complete Khintchine-Groshev type theorem for monotonic approximating functions is established within the absolute value setup. Furthermore, the Hausdorff measure generalization of the Khintchine-Groshev type theorem is obtained. As a consequence we obtain the complete Hausdorff dimension theory. Staying within the absolute value setup, we prove that the corresponding set of badly approximable vectors is of full dimension.
The degree/diameter problem is to find the largest possible order of a graph (or digraph) with given maximum degree (or maximum out-degree) and given diameter. This is one of the unsolved problems in extremal graph theory. Since the general problem is difficult, many variations of the problem have been considered, including bipartite, vertex-transitive, mixed, and planar versions.
This talk is part of a series started in May. The provisional schedule is
Random matrix theory has undergone significant theoretical progress in the last two decades, including proofs on universal behaviour of eigenvalues as the matrix dimension becomes large, and a deep connection between algebraic manipulations of random matrices and free probability theory. Underlying many of the analytical advances are tools from complex analysis. By developing numerical versions of these tools, it is now possible to calculate random matrix statistics to high accuracy, leading to new conjectures on the behaviour of random matrices. We overview recent advances in this direction.
An exact bucket indexed (BI) mixed integer linear programming formulation for nonpreemptive single machine scheduling problems is presented that is a result of an ongoing investigation into strategies to model time in planning applications with greater efficacy. The BI model is a generalisation of the classical time indexed (TI) model to one in which at most two jobs can be processing in each time period. The planning horizon is divided into periods of equal length, but unlike the TI model, the length of a period is a parameter of the model and can be chosen to be as long as the processing time of the shortest job. The two models are equivalent if the problem data are integer and a period is of unit length, but when longer periods are used in the BI model, it can have significantly fewer variables and nonzeros than the TI model at the expense of a greater number of constraints. A computational study using weighted tardiness instances reveals that the BI model significantly outperforms the TI model on instances where the mean processing time of the jobs is large and the range of processing times is small, that is, where the processing times are clustered rather than dispersed.
Joint work with Natashia Boland and Riley Clement.
TBA
20 minute presentation followed by 10 minutes of questions and discussion.
Joint work with M. Mueller, B. O'Donoghue, and Y. Wang
We consider dynamic trading of a portfolio of assets in discrete periods over a finite time horizon, with arbitrary time-varying distribution of asset returns. The goal is to maximize the total expected revenue from the portfolio, while respecting constraints on the portfolio such as a required terminal portfolio and leverage and risk limits. The revenue takes into account the gross cash generated in trades, transaction costs, and costs associated with the positions, such as fees for holding short positions. Our model has the form of a stochastic control problem with linear dynamics and convex cost function and constraints. While this problem can be tractably solved in several special cases, such as when all costs are convex quadratic, or when there are no transaction costs, our focus is on the more general case, with nonquadratic cost terms and transaction costs.
We show how to use linear matrix inequality techniques and semidefinite programming to produce a quadratic bound on the value function, which in turn gives a bound on the optimal performance. This performance bound can be used to judge the performance obtained by any suboptimal policy. As a by-product of the performance bound computation, we obtain an approximate dynamic programming policy that requires the solution of a convex optimization problem, often a quadratic program, to determine the trades to carry out in each step. While we have no theoretical guarantee that the performance of our suboptimal policy is always near the performance bound (which would imply that it is nearly optimal) we observe that in numerical examples the two values are typically close.
In many problems in control, optimal and robust control, one has to solve global
optimization problems of the form $\mathbf{P}:f^\ast=\min_{\mathbf x}\{f(\mathbf x):\mathbf x\in\mathbf K\}$, or, equivalently, $f^\ast=\max\{\lambda:f-\lambda\geq0\text{ on }\mathbf K\}$, where $f$ is a polynomial (or even a semi-algebraic function) and $\mathbf K$ is a basic semi-algebraic set. One may even need to solve the "robust" version $\min\{f(\mathbf x):\mathbf x\in\mathbf K;h(\mathbf x,\mathbf u)\geq0,\forall \mathbf u\in\mathbf U\}$, where $\mathbf U$ is a set of parameters. For
instance, some static output feedback problems can be cast as polynomial optimization
problems whose feasible set $\mathbf K$ is defined by a polynomial matrix inequality (PMI), and
robust stability regions of linear systems can be modeled as parametrized polynomial
matrix inequalities (PMIs), where the parameters $\mathbf u$ account for uncertainties and the (decision)
variables $\mathbf x$ are the controller coefficients.
Therefore, to solve such problems one needs tractable characterizations of polynomials
(and even semi-algebraic functions) which are nonnegative on a set, a topic of independent
interest and of primary importance because it also has implications in many other areas.
We will review two kinds of tractable characterizations of polynomials which are nonnegative on a basic closed semi-algebraic set $\mathbf K\subset\mathbb R^n$. The first type of characterization applies
when knowledge of $\mathbf K$ comes through its defining polynomials, i.e., $\mathbf K=\{\mathbf x:g_j(\mathbf x)\geq 0, j =1,\dots, m\}$, in which case some powerful certificates of positivity can be stated in terms of sums-of-squares (SOS)-weighted representations. For instance, this allows one to define a hierarchy of semidefinite relaxations which yields a monotone sequence of lower bounds
converging to $f^\ast$ (and in fact, finite convergence is generic). There is also another way
of looking at nonnegativity, where now knowledge of $\mathbf K$ comes through the moments of a measure
whose support is $\mathbf K$. In this case, checking whether a polynomial is nonnegative on $\mathbf K$
reduces to solving a sequence of generalized eigenvalue problems associated with a countable (nested) family of real symmetric matrices of increasing size. When applied to $\mathbf P$, this
results in a monotone sequence of upper bounds converging to the global minimum, which
complements the previous sequence of lower bounds. These two (dual) characterizations
provide convex inner (resp. outer) approximations (by spectrahedra) of the convex cone
of polynomials nonnegative on $\mathbf K$.
UPDATE: Abstract submission is now open.
The main thrust of this workshop will be exploring the interface between important methodological areas of infectious disease modelling. In particular, two main themes will be explored: the interface between model-based data analysis and model-based scenario analysis, and the relationship between agent-based/micro-simulation and modelling.
I will discuss some models of what a "random abelian group" is, and some conjectures (the Cohen-Lenstra heuristics of the title) about how they show up in number theory. I'll then discuss the function field setting and a proof of these heuristics, with Ellenberg and Westerland. The proof is an example of a link between analytic number theory and certain classes of results in algebraic topology ("homological stability").
It is well known that the Moore digraph, namely a diregular digraph of degree $d$, diameter $k$ and order $1+d+d^2+\cdots+d^k$, only exists if $d=1$ or $k=1$. A $(d,k)$-digraph is a diregular digraph of degree $d\ge 2$, diameter $k\ge 2$ and order $d+d^2+\cdots+d^k$, one less than the Moore bound. Such a $(d,k)$-digraph is also called an almost Moore digraph.
The study of the existence of an almost Moore digraph of degree $d$ and diameter $k$ has received much attention. Fiol, Alegre and Yebra (1983) showed the existence of $(d,2)$-digraphs for all $d\ge 2$. In particular, for $d=2$ and $k=2$, Miller and Fris (1988) enumerated all non-isomorphic $(2,2)$-digraphs. Furthermore, Gimbert (2001) showed that there is only one $(d,2)$-digraph for $d\ge 3$. However, for degree 2 and diameter $k\ge 3$, it is known that there is no $(2,k)$-digraph (Miller and Fris, 1992). Furthermore, it was proved that there is no $(3,k)$-digraph with $k\ge 3$ (Baskoro, Miller, Širáň and Sutton, 2005). Recently, Conde, Gimbert, González, Miret, and Moreno (2008 & 2013) showed that no $(d,k)$-digraphs exist for $k=3,4$ and for any $d\ge 2$. Thus, the remaining open case is the existence of $(d,k)$-digraphs with $d\ge 4$ and $k\ge 5$.
Several necessary conditions for the existence of $(d,k)$-digraphs, for $d\ge 4$ and $k\ge 5$, have been obtained. In this talk, we shall discuss some necessary conditions for these $(d,k)$-digraphs. Open problems related to this study are also presented.
Joint work with N. Parikh, E. Chu, B. Peleato, and J. Eckstein
Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features, training examples, or both. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. We argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for $\ell_1$ problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, and support vector machines.
The related paper, code and talk slides are available at http://www.stanford.edu/~boyd/papers/admm_distr_stats.html.
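As a concrete instance of the splitting described above, here is a minimal ADMM sketch for the lasso, minimize $\tfrac12\|Ax-b\|_2^2+\lambda\|x\|_1$ (function names and parameter choices such as `rho` and the iteration count are illustrative, not the paper's reference implementation):

```python
import numpy as np

def soft_threshold(v, k):
    # Proximal operator of k*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for minimize (1/2)||Ax - b||^2 + lam*||x||_1."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Cache the Cholesky factor used in every x-update
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z
```

For $A=I$ the lasso solution is simply the soft-thresholded data, which the iteration recovers, illustrating the x-minimization / z-minimization / dual-update structure of the method.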
It was understood by Minkowski that one could prove interesting results in number theory by considering the geometry of lattices in R(n). (A lattice is simply a grid of points.) This technique is called the "geometry of numbers". We now understand much more about analysis and dynamics on the space of all lattices, and this has led to a deeper understanding of classical questions. I will review some of these ideas, with emphasis on the dynamical aspects.
There exist a variety of mechanisms to share indivisible goods between agents. One of the simplest is to let the agents take turns to pick an item. This mechanism is parameterized by a policy, the order in which agents take turns. A simple model of this mechanism was proposed by Bouveret and Lang in 2011. We show that in their setting the natural policy of letting the agents alternate in picking items is optimal. We also present a number of potential generalizations and extensions.
This is joint work with Nina Narodytska and Toby Walsh.
TBA
Within a nonzero real Banach space we study the problem of characterising maximal extensions of a monotone operator in terms of minimality properties of representative functions that are bounded by the Penot and Fitzpatrick functions. We single out a property of the space of representative functions that enables a very compact treatment of maximality and pre-maximality issues. As this treatment does not assume reflexivity, and we characterise this property, the existence of a counterexample has a number of consequences for the search for a suitable certificate of maximality in non-reflexive spaces. In particular, one is led to conjecture that some extra side condition to the usual CQ is inevitable. We go on to look at the simplest such condition, namely boundedness of the domain of the monotone operator, and obtain some positive results.
Many successful non-convex applications of the Douglas-Rachford method can be viewed as the reconstruction of a matrix, with known properties, from a subset of its entries. In this talk we discuss recent successful applications of the method to a variety of (real) matrix reconstruction problems, both convex and non-convex.
This is joint work with Fran Aragón and Matthew Tam.
I will report on recent joint work (with J.Y. Bello Cruz, H.M. Phan, and X. Wang) on the Douglas–Rachford algorithm for finding a point in the intersection of two subspaces. We prove that the method converges strongly to the projection of the starting point onto the intersection. Moreover, if the sum of the two subspaces is closed, then the convergence is linear with the rate being the cosine of the Friedrichs angle between the subspaces. Our results improve upon existing results in three ways: First, we identify the location of the limit and thus reveal the method as a best approximation algorithm; second, we quantify the rate of convergence, and third, we carry out our analysis in general (possibly infinite-dimensional) Hilbert space. We also provide various examples as well as a comparison with the classical method of alternating projections.
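A minimal numerical illustration of this result (my own toy example: two lines through the origin in the plane at 45°, so the intersection is $\{0\}$ and the Friedrichs angle gives linear rate $\cos(\pi/4)\approx 0.707$):

```python
import numpy as np

def proj_line(d):
    """Orthogonal projector onto the line spanned by direction d."""
    d = np.asarray(d, float) / np.linalg.norm(d)
    return lambda x: d * (d @ x)

def douglas_rachford(PA, PB, x0, n_iter=60):
    """DR iteration x+ = x + PB(2*PA(x) - x) - PA(x); returns the shadow
    PA(x_n), which converges to the projection of x0 onto the intersection."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        pa = PA(x)
        x = x + PB(2 * pa - x) - pa
    return PA(x)
```

Since the two lines meet only at the origin, the shadow sequence from any starting point converges (linearly, at the cosine of the angle between the lines) to $0$, the projection of the starting point onto the intersection.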
Extremal graph theory includes problems of determining the maximum number of edges in a graph on $n$ vertices that contains no forbidden subgraphs. We consider only simple graphs, with no loops or multiple edges, and the forbidden subgraphs under consideration are cycles of length 3 and 4 (the triangle and the square). This problem was proposed by Erdős in 1975. Let $n$ denote the number of vertices in a graph $G$. By $ex(n;\{C_3,C_4\})$, or simply $ex(n;4)$, we mean the maximum number of edges in a graph of order $n$ and girth at least 5. There are only 33 exact values of $ex(n;4)$ currently known. In this talk, I give an overview of the current state of research on this problem, covering the known exact values as well as lower and upper bounds on the extremal numbers when the exact value is not known.
Popular accounts of evolution typically create an expectation that populations become ever better adapted over time, and some formal treatments of evolutionary processes suggest this too. However, such analyses do not highlight the fact that competition with conspecifics has negative population-level consequences, particularly when individuals invest in success in zero-sum games. My own work is at the interface of theoretical biology and empirical data, and I will discuss several examples where an adaptive evolutionary process leads to something that appears silly from the population point of view, including a heightened risk of extinction in the Gouldian finch, reduced productivity of species in which males do not participate in parental care, and deterministic extinction of local populations in systems that feature sexual parasitism.
Recently a great deal of attention from biologists has been directed to understanding the role of knots in perhaps the most famous of long polymers - DNA. In order for our cells to replicate, they must somehow untangle the approximately two metres of DNA that is packed into each nucleus. Biologists have shown that DNA of various organisms is non-trivially knotted with certain topologies preferred over others. The aim of our work is to determine the "natural" distribution of different knot-types in random closed curves and compare that to the distributions observed in DNA.
Our tool to understand this distribution is a canonical model of long chain polymers - self-avoiding polygons (SAPs). These are embeddings of simple closed curves into a regular lattice. The exact computation of the number of polygons of length n and fixed knot type K is extremely difficult - indeed the current best algorithms can barely touch the first knotted polygons. Instead of exact methods, in this talk I will describe an approximate enumeration method - which we call the GAS algorithm. This is a generalisation of the famous Rosenbluth method for simulating linear polymers. Using this algorithm we have uncovered strong evidence that the limiting distribution of different knot-types is universal. Our data shows that a long closed curve is about 28 times more likely to be a trefoil than a figure-eight, and that the natural distribution of knots is quite different from those found in DNA.
Let $G$ be a connected graph with vertex set $V$ and edge set $E$. The distance $d(u,v)$ between two vertices $u$ and $v$ in $G$ is the length of a shortest $u$-$v$ path in $G$. For an ordered set $W = \{w_1, w_2, \dots, w_k\}$ of vertices and a vertex $v$ in a connected graph $G$, the code of $v$ with respect to $W$ is the $k$-vector \begin{equation} C_W(v)=(d(v,w_1),d(v,w_2), \dots, d(v,w_k)). \end{equation} The set $W$ is a resolving set for $G$ if distinct vertices of $G$ have distinct codes with respect to $W$. A resolving set for $G$ containing a minimum number of vertices is called a minimum resolving set, or a basis, for $G$. The metric dimension, denoted $\dim(G)$, is the number of vertices in a basis for $G$. The problem of finding the metric dimension of an arbitrary graph is NP-complete.
The problem of finding minimum metric dimension is NP-complete for general graphs. Manuel et al. have proved that this problem remains NP-complete for bipartite graphs. The minimum metric dimension problem has been studied for trees, multi-dimensional grids, Petersen graphs, torus networks, Benes and butterfly networks, honeycomb networks, X-trees and enhanced hypercubes.
These concepts have been extended in various ways and studied for different subjects in graph theory, including such diverse aspects as the partition of the vertex set, decomposition, orientation, domination, and coloring in graphs. Many invariants arising from the study of resolving sets in graph theory offer subjects for applicable research.
The theory of conditional resolvability has evolved by imposing conditions on the resolving set. In this talk I will recall these concepts and survey the work done so far as well as directions for future work.
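The definitions above translate directly into a brute-force computation for small graphs (an illustrative sketch only; since the problem is NP-complete, this enumeration scales poorly):

```python
from itertools import combinations
from collections import deque

def distances(adj, src):
    # BFS distances from src in an unweighted connected graph
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def metric_dimension(adj):
    """Smallest |W| such that the distance codes C_W(v) separate all vertices."""
    V = list(adj)
    dist = {v: distances(adj, v) for v in V}
    for k in range(1, len(V)):
        for W in combinations(V, k):
            codes = {tuple(dist[w][v] for w in W) for v in V}
            if len(codes) == len(V):   # all codes distinct: W is resolving
                return k
    return 0  # single-vertex graph
```

For example, any path has metric dimension 1 (one endpoint resolves everything), while a cycle needs two landmarks.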
The rough Cayley graph is the analogue, in the context of topological groups, of the standard Cayley graph, which is defined for finitely generated groups. It will be shown how it is possible to associate such a graph to a compactly generated totally disconnected and locally compact (t.d.l.c.) group and how the rough Cayley graph represents an important tool to study the structure of this kind of group.
We analyse local combinatorial structure in product sets of two subsets of a countable group which are "large" with respect to certain classes of (not necessarily invariant) means on the group. As an example of such a phenomenon, we mention a result of Bergelson, Furstenberg and Weiss which says that the sumset of two sets of positive density in the integers locally contains an almost-periodic set. In this theorem, the large sets are the sets of positive density, and the combinatorial structure is an almost periodic set.
How do a student’s attitude, learning behaviour and achievement in mathematics or statistics relate to each other and how do these change during the course of their undergraduate degree program? These are some of the questions I have been addressing in a longitudinal study that I have undertaken as part of my PhD research. The questions were addressed by soliciting comments from students several times during their undergraduate degree programs; through an initial attitude survey, course-specific surveys for up to two courses each semester and interviews with students near the end of their degrees. In this talk I will introduce you to the attitudes and learning behaviours of the mathematics students I followed through the three years of my research, and discuss their responses to the completed surveys (attitude and course-specific). To illuminate the general responses obtained from the surveys (1074 students completed the initial attitude survey and 645 course-specific surveys were completed), I will also introduce you to Tom, Paul, Kate and Ben, four students of varying degrees of achievement, who I interviewed near the end of their mathematics degrees.
The split feasibility problem (SFP) consists in finding a point in a closed convex subset of a Hilbert space such that its image under a bounded linear operator belongs to a closed convex subset of another Hilbert space. Since its inception in 1994 by Censor and Elfving, it has received much attention thanks mainly to its applications to signal processing and image reconstruction. Iterative methods can be employed to solve the SFP. One of the most popular iterative methods is Byrne's CQ algorithm. However, this algorithm requires prior knowledge (or at least an estimate) of the norm of the bounded linear operator. We introduce a stepsize selection method so that the implementation of the CQ algorithm does not need any prior information regarding the operator norm. Furthermore, a relaxed CQ algorithm, where the two closed convex sets are both level sets of convex functions, and a Halpern-type algorithm are studied under the same stepsize rule, yielding both weak and strong convergence. A more general problem, the multiple-sets split feasibility problem, will also be presented. Numerical experiments are included to illustrate the applications to signal processing and, in particular, to compressed sensing and wavelet-based signal restoration.
Based on joint works with G. López and H-K Xu.
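For reference, the classical CQ iteration (with the standard norm-dependent stepsize that the talk's selection rule is designed to avoid) can be sketched as follows; the sets and function names here are illustrative:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    """Byrne's CQ iteration:
        x+ = P_C( x - g * A.T @ (A@x - P_Q(A@x)) ),
    with stepsize g in (0, 2/||A||^2). Here g uses the spectral norm
    directly; the talk's stepsize rule removes the need for this norm."""
    g = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - g * A.T @ (Ax - proj_Q(Ax)))
    return x
```

A toy instance: find $x$ in the box $C=[0,1]^2$ whose image under $A=(1\ 1)$ lies in $Q=[2,3]$; the iteration lands on $x=(1,1)$ with $Ax=2$.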
Our goal is to estimate the rate of growth of a population governed by a simple stochastic model. We may choose $n$ sampling times at which to count the number of individuals present, but due to detection difficulties, or constraints on resources, we are able only to observe each individual with fixed probability $p$. We discuss the optimal sampling times at which to make our observations in order to approximately maximize the accuracy of our estimation. To achieve this, we maximize the expected volume of information obtained from such binomial observations, that is, the Fisher information. For a single sample, we derive an explicit form of the Fisher information. However, finding the Fisher information for higher values of $n$ appears intractable. Nonetheless, we find a very good approximation function for the Fisher information by exploiting the probabilistic properties of the underlying stochastic process and developing a new class of delayed distributions. Both numerical and theoretical results strongly support this approximation and confirm its high level of accuracy.
A numerical method is proposed for constructing an approximation of the Pareto front of nonconvex multi-objective optimal control problems. First, a suitable scalarization technique is employed for the multi-objective optimal control problem. Then by using a grid of scalarization parameter values, i.e., a grid of weights, a sequence of single-objective optimal control problems are solved to obtain points which are spread over the Pareto front. The technique is illustrated on problems involving tumor anti-angiogenesis and a fed-batch bioreactor, which exhibit bang–bang, singular and boundary types of optimal control. We illustrate that the Bolza form, the traditional scalarization in optimal control, fails to represent all the compromise, i.e., Pareto optimal, solutions.
Joint work with Helmut Maurer.
C. Y. Kaya and H. Maurer, A numerical method for nonconvex multi-objective optimal control problems, Computational Optimization and Applications, (appeared online: September 2013, DOI 10.1007/s10589-013-9603-2)
In this talk I will discuss a method of finding simple groups acting on trees. I will discuss the theory behind this process and outline some proofs (time permitting).
The scale function plays a key role in the structure theory of totally disconnected locally compact (t.d.l.c.) groups. Whereas it is known that the scale function on a t.d.l.c. group is continuous, analysis of the continuity of the scale in a wider context requires a topologization of the group of continuous automorphisms. Existing topologies for Aut(G) are outlined and shown to be insufficient for guaranteeing the continuity of the scale function. Possible methods of generalising these topologies are explored.
In this talk I will describe an algorithm to do a random walk in the space of all words equal to the identity in a finitely presented group. We prove that the algorithm samples from a well defined distribution, and using the distribution we can find the expected value of the mean length of a trivial word. We then use this information to estimate the cogrowth of the group. We ran the algorithm on several examples; where the cogrowth series is known exactly, our results are in agreement with the exact results. Running the algorithm on Thompson's group $F$, we see behaviour consistent with the hypothesis that $F$ is not amenable.
We propose and study a new method, called the Interior Epigraph Directions (IED)
method, for solving constrained nonsmooth and nonconvex optimization. The IED
method considers the dual problem induced by a generalized augmented Lagrangian
duality scheme, and obtains the primal solution by generating a sequence of
iterates in the interior of the dual epigraph. First, a deflected subgradient
(DSG) direction is used to generate a linear approximation to the dual
problem. Second, this linear approximation is solved using a Newton-like step.
This Newton-like step is inspired by the Nonsmooth Feasible Directions Algorithm
(NFDA), recently proposed by Freire and co-workers for solving unconstrained,
nonsmooth convex problems. We have modified the NFDA so that it takes advantage
of the special structure of the epigraph of the dual function. We prove that all
the accumulation points of the primal sequence generated by the IED method are
solutions of the original problem. We carry out numerical experiments by using
test problems from the literature. In particular, we study several instances of
the Kissing Number Problem, previously solved by various approaches such as an
augmented penalty method, the DSG method, as well as the popular differentiable
solvers ALBOX (a predecessor of ALGENCAN), Ipopt and LANCELOT. Our experiments
show that the quality of the solutions obtained by the IED method is comparable
with (and sometimes better than) that obtained by the other solvers mentioned.
Joint work with Wilhelm P. Freire and C. Yalcin Kaya.
This colloquium will explain some of the background and significance of the concept of amenability. Arguments with finite groups frequently, without remark, count the number of elements in a subset or average a function over the group. It is usually important in these arguments that the result of the calculation is invariant under translation. Such calculations cannot be so readily made in infinite groups but the concepts of amenability and translation invariant measure on a group in some ways take their place. The talk will explain this and also say how random walks relate to these same ideas.
The link to the animation of the paradoxical decomposition is here.
Times and Dates:
Mon 2 Dec 2013: 10-12, 2-4
Tue 3 Dec 2013: 10-12, 2-4
Wed 4 Dec 2013: 10-12, 2-4
Thu 5 Dec 2013: 10-12, 2-4
Abstract: This will be a short and fast introduction to the field of geometric group theory. Assumed knowledge is abstract algebra (groups and rings) and metric spaces. Topics to be covered include: free groups, presentations, quasi-isometry, hyperbolic groups, Dehn functions, growth, amenable groups, cogrowth, percolation, automatic groups, CAT(0) groups, examples: Thompson's group F, self-similar groups (Grigorchuk group), Baumslag-Solitar groups.
In this talk, we provide some characterizations of ultramaximally monotone operators. We establish the Brezis--Haraux condition in the setting of a general Banach space. We also present some characterizations of reflexivity of a Banach space by a linear continuous ultramaximally monotone operator.
Joint work with Jon Borwein.
We develop an integer programming based decision support tool that quickly assesses the throughput of a coal export supply chain for a given level of demand. The tool can be used to rapidly evaluate a number of infrastructures for several future demand scenarios in order to identify a few that should be investigated more thoroughly using a detailed simulation model. To make the natural integer programming model computationally tractable, we exploit problem structure to reduce the number of variables and employ aggregation as well as disaggregation to strengthen the linear programming relaxation. Afterward, we implicitly reformulate the problem to exclude inherent symmetry in the original formulation and use Hall's marriage theorem to ensure its feasibility. Studying the polyhedral structure of a subproblem, we enhance the formulation by generating strong valid inequalities. The integer programming tool is used in a computational study in which we analyze system performance for different levels of demand to identify potential bottlenecks.
Psychologists and other experiment designers are often faced with the task of creating sets of items to be used in factorial experiments. These sets need to be as similar as possible to each other in terms of the items' given attributes. We name this problem Picking Items for Experimental Sets (PIES). In this talk I will discuss how similarity can be defined, mixed integer programs to solve PIES and heuristic methods.
I will also examine the popular integer programming heuristic, the feasibility pump, which aims to find an integer feasible solution for a MIP. I will show how using different projection algorithms (including Douglas-Rachford), adding randomness, and reformulating the projection spaces change the effectiveness of the heuristic.
A classical nonlinear PDE used for modelling heat transfer between concentric cylinders by fluid convection, and also for modelling porous flow, can be solved by hand using a low-order perturbation method. Extending this solution to higher order using computer algebra is surprisingly hard owing to exponential growth in the size of the series terms, naively computed. In the mid-1990s, so-called "Large Expression Management" tools were invented to allow construction and use of "computation sequences" or "straight-line programs" to extend the solution to 11th order. The cost of the method was O(N^8) in memory, high but not exponential.
Twenty years of doubling computer power allows this method to reach 15 terms. A new method, which reduces the memory cost to O(N^4), allows us to compute to N=30. At this order, singularities can reliably be detected using the quotient-difference algorithm. This allows confident investigation of the solutions, for different values of the Prandtl number.
This work is joint with Yiming Zhang (PhD Oct 2013).
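The quotient-difference (qd) algorithm mentioned above locates singularities directly from series coefficients. A minimal sketch follows (the example series is chosen for illustration and is not from the talk):

```python
def qd_poles(c, num_poles=2):
    """Quotient-difference (qd) scheme: from power-series coefficients c[n],
    estimate the reciprocals 1/z_k of the poles nearest the origin
    (valid for simple poles with distinct moduli)."""
    n = len(c)
    q = [[c[i + 1] / c[i] for i in range(n - 1)]]   # q^(1) column
    e = [[0.0] * n]                                 # e^(0) column is zero
    for k in range(1, num_poles):
        qk, ek_prev = q[-1], e[-1]
        # rhombus rules: e^(k)_i = e^(k-1)_{i+1} + q^(k)_{i+1} - q^(k)_i
        ek = [ek_prev[i + 1] + qk[i + 1] - qk[i] for i in range(len(qk) - 1)]
        #               q^(k+1)_i = q^(k)_{i+1} * e^(k)_{i+1} / e^(k)_i
        q.append([qk[i + 1] * ek[i + 1] / ek[i] for i in range(len(ek) - 1)])
        e.append(ek)
    return [col[-1] for col in q]   # last (best-converged) entry per column

# Example: f(z) = 1/((1-2z)(1-3z)) has coefficients c_n = 3^(n+1) - 2^(n+1)
# and poles at z = 1/3 and z = 1/2, so the q-columns tend to 3 and 2.
coeffs = [float(3 ** (n + 1) - 2 ** (n + 1)) for n in range(30)]
print(qd_poles(coeffs))  # ≈ [3.0, 2.0]
```

The first q-column alone is just the ratio test for the dominant singularity; the deeper columns recover the subdominant poles.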
We consider a problem of minimising $f_1(x)+f_2(y)$ over $x \in X \subseteq R^n$ and $y \in Y \subseteq R^m$ subject to a number of extra coupling constraints of the form $g_1(x) g_2(y) \geq 0$. Due to these constraints, the problem may have a large number of local minima. For any feasible combination of signs of $g_1(x)$ and $g_2(y)$, the coupled problem is decomposable, and the resulting two problems are assumed to be easily solved. An approach to solving the coupled problem is presented. We apply it to solving coupled monotonic regression problems arising in experimental psychology.
I will review the creation and development of the concept of number and the role of visualisation in that development. The relationship between innate human capabilities on the one hand and mathematical research and education on the other will be discussed.
In this seminar I will review my recent work on Hankel determinants and their number-theoretic uses. I will briefly touch on the p-adic evaluation of a particular determinant and comment on how Hankel determinants together with Padé approximants can be used in some irrationality proofs. A fundamental determinant property will be demonstrated, and I will show what implications this holds for positive Hankel determinants and where we might go from here.
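To fix notation: the n x n Hankel matrix of a sequence (a_k) has entries a_{i+j}. As a small, classical illustration (not from the talk), every Hankel determinant of the Catalan numbers equals 1:

```python
from fractions import Fraction
from math import comb

def hankel_det(seq, n):
    """Exact determinant of the n x n Hankel matrix (seq[i+j])_{0<=i,j<n},
    via Gaussian elimination over the rationals."""
    m = [[Fraction(seq[i + j]) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if m[r][k] != 0), None)
        if piv is None:
            return Fraction(0)          # singular matrix
        if piv != k:
            m[k], m[piv] = m[piv], m[k]  # row swap flips the sign
            det = -det
        det *= m[k][k]
        for r in range(k + 1, n):
            f = m[r][k] / m[k][k]
            for c in range(k, n):
                m[r][c] -= f * m[k][c]
    return det

catalan = [comb(2 * n, n) // (n + 1) for n in range(12)]
print([int(hankel_det(catalan, n)) for n in range(1, 6)])  # [1, 1, 1, 1, 1]
```

Positivity of all Hankel determinants is exactly the Stieltjes moment condition, which is one reason these determinants carry arithmetic information.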
The previous assessment method for MCHA2000 - Mechatronic Systems (which is common to many other courses) allowed students to collect marks from assessments and quizzes during the semester and pass the course without reaching a satisfactory level of competency in some topics. In 2013, we obtained permission from the President of Academic Senate to test a different assessment scheme that aimed at preventing students from passing without attaining a minimum level of competency in all topics of the course. This presentation discusses the assessment scheme tested and the results we obtained, which suggest that the proposed scheme makes a difference.
MCHA2000 is a course about modelling, simulation, and analysis of physical system dynamics. We believe the proposed assessment scheme is applicable to other courses.
Bio: A/Prof Tristan Perez, Lecturer of MCHA2000. http://www.newcastle.edu.au/profile/tristan-perez
I'll discuss the analytic solution to the limit shape problem for random domino tilings and "lozenge" tilings, and in particular try to explain how these limiting surfaces develop facets.
It is now known for a number of models of statistical physics in two dimensions (such as percolation or the Ising model) that at their critical point, they do behave in a conformally invariant way in the large-scale limit, and do give rise in this limit to random fractals that can be mathematically described via Schramm's Stochastic Loewner Evolutions.
The goal of the present talk will be to discuss some aspects of what remains valid or should remain valid about such models and their conformal invariance, when one looks at them within a fractal-type planar domain. We shall in particular describe (and characterize) a continuous percolation interface within certain particular random fractal carpets. Part of this talk will be based on joint work with Jason Miller and Scott Sheffield.
Liz will talk about how the UoN could make more use of the flipped classroom. The flipped classroom is an approach where content is provided to students in advance and, instead of the traditional lecture, class time is spent interacting with students through worked examples etc.
Liz will examine impacts on student learning, but also consider how to make this approach manageable within staff workloads and how lecture theatre design can be altered to facilitate this new way of learning.
Polyhedral links, interlinked and interlocked architectures, have been proposed for the description and characterization of DNA and protein polyhedra. Chirality is a very important feature for biomacromolecules. In this talk, we discuss the topological chirality of a type of DNA polyhedral links constructed by the strategy of "n-point stars" and a type of protein polyhedral links constructed by "three-cross curves" covering. We shall ignore DNA sequence and use the orientation of the two backbone strands of the dsDNA to orient DNA polyhedral links, and thus consider DNA polyhedral links as oriented links with antiparallel orientations. We shall ignore protein sequence and view protein polyhedral links as unoriented ones. It is well known that there is a correspondence between alternating links and plane graphs. We prove that links corresponding to bipartite plane graphs have antiparallel orientations, and under these orientations, their writhes are not zero. As a result, the type of right-handed double crossover 4-turn DNA polyhedral links are topologically chiral. We also prove that the unoriented link corresponding to a connected, even, bipartite plane graph has self-writhe 0, and, using the Jones polynomial, we present a criterion for chirality of unoriented alternating links with self-writhe 0. By applying this criterion we obtain that 3-regular protein polyhedral links are also topologically chiral. Topological chirality always implies chemical chirality, hence the corresponding DNA and protein polyhedra are all chemically chiral. Our chiral criteria may be used to detect the topological chirality of more complicated DNA and protein polyhedral links to be synthesized by chemists and biologists in the future.
Jonathan Kress of UNSW will be talking about the UNSW experience of using MapleTA for online assignments in Mathematics over an extended period of time.
Ben Carter will be talking about some of the rationale for online assignments, how we're using MapleTA here, and our hopes for the future, including how we might use it as a basis for a flipped classroom approach to some of our teaching.
This talk will give an introduction to the Kepler-Coulomb and harmonic oscillator systems, fundamental in both the classical and quantum worlds. These systems are related by "coupling constant metamorphosis", a remarkable trick that exchanges the energy of one system with the coupling constant of the other. The trick can be seen to be a type of conformal transformation, that is, a scaling of the underlying metric, that maps "conformal symmetries" to "true symmetries" of a Hamiltonian system.
In this talk I will explain the statements above and discuss some applications of coupling constant metamorphosis to superintegrable systems and differential geometry.
A lattice rule with a randomly-shifted lattice estimates a mathematical expectation, written as an integral over the s-dimensional unit hypercube, by the average of n evaluations of the integrand, at the n points of the shifted lattice that lie inside the unit hypercube. This average provides an unbiased estimator of the integral and, under appropriate smoothness conditions on the integrand, it has been shown to converge faster as a function of n than the average at n independent random points (the standard Monte Carlo estimator). In this talk, we study the behavior of the estimation error as a function of the random shift, as well as its distribution for a random shift, under various settings. While it is well known that the Monte Carlo estimator obeys a central limit theorem when $n \rightarrow \infty$, the randomized lattice rule does not, due to the strong dependence between the function evaluations. We show that for the simple case of one-dimensional integrands, the limiting error distribution is uniform over a bounded interval if the integrand is non-periodic, and has a square root form over a bounded interval if the integrand is periodic. We find that in higher dimensions, there is little hope to precisely characterize the limiting distribution in a useful way for computing confidence intervals in the general case. We nevertheless examine how this error behaves as a function of the random shift from different perspectives and on various examples. We also point out a situation where a classical central-limit theorem holds when the dimension goes to infinity, we provide guidelines on when the error distribution should not be too far from normal, and we examine how far from normal the error distribution is in examples inspired by real-life applications.
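A minimal sketch of a randomly-shifted rank-1 lattice rule (the generating vector and integrand below are illustrative choices, not from the talk):

```python
import math
import random

def shifted_lattice_estimate(f, z, n, s, shift=None):
    """Randomly-shifted rank-1 lattice rule: average f over the n points
    ((i * z / n + shift) mod 1), i = 0..n-1, in the s-dimensional unit cube."""
    if shift is None:
        shift = [random.random() for _ in range(s)]  # uniform random shift
    total = 0.0
    for i in range(n):
        x = [(i * z[j] / n + shift[j]) % 1.0 for j in range(s)]
        total += f(x)
    return total / n

# Illustrative smooth periodic integrand on [0,1]^2 with exact integral 1;
# z = [1, 21] with n = 55 is a (Fibonacci) generating vector.
f = lambda x: math.prod(1 + 0.5 * math.sin(2 * math.pi * t) for t in x)
est = shifted_lattice_estimate(f, z=[1, 21], n=55, s=2)
print(abs(est - 1.0) < 1e-9)  # True: the rule is exact for this integrand
```

This particular integrand is a trigonometric polynomial whose nonzero Fourier modes miss the dual lattice, so the shifted rule integrates it exactly for every shift; for general smooth periodic integrands the error is small but shift-dependent, which is the object of study in the talk.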
Without convexity the convergence of a descent algorithm can normally only be certified in the weak sense that every accumulation point of the sequence of iterates is critical. This does not at all correspond to what we observe in practice, where these optimization methods always converge to a single limit point, even though convergence may sometimes be slow.
Around 2006 it was observed that convergence to a single limit can be proved for objective functions having certain analytic features. The property which is instrumental here is called the Lojasiewicz inequality, imported from analytic function theory. While this has been successfully applied to smooth functions, the case of non-smooth functions turns out to be more difficult. In this talk we obtain some progress for upper-C1 functions. Then we proceed to show that this is not just a theoretical sandpit, but has consequences for applications in several fields. We sketch an application in destructive testing of laminate materials.
As is well known, semidefinite relaxations of discrete optimization problems can yield excellent bounds on their solutions. We present three examples from our collaborative research. The first addresses the quadratic assignment problem; a formulation is developed which yields the strongest lower bounds known for larger dimensions. Utilizing the latest iterative SDP solvers and ideas from verified computing, a realistic problem from communications is solved for dimensions up to 512.
A strategy based on the Lovasz theta function is generalized to compute upper bounds on the spherical kissing number utilizing SDP relaxations. Multiple-precision SDP solvers are needed, and improvements on known results for all kissing numbers in dimensions up to 23 are obtained. Finally, generalizing ideas of Lex Schrijver, improved upper bounds for general binary codes are obtained in many cases.
Brad Pitt's zombie-attack movie "World War Z" may not seem like a natural jumping-off point for a discussion of mathematics or science, but in fact it was a request I received to review that movie in "The Conversation" and the review I wrote that led me to be invited to give a public lecture on zombies and maths at the Academy of Science next week. This week's colloquium will largely be a preview of that talk, so should be generally accessible.
My premise is that movies and maths have something in common. Both enable a trait which seems to be more highly developed in humans than in any other species, with profound consequences: the desire and capacity to explore possibility-space.
The same mathematical models can let us playfully explore how an outbreak of zombie-ism might play out, or how an outbreak of an infectious disease like measles would spread, depending, in part, on what choices we make. Where a movie gives us deep insight into one possibility, mathematics enables us to explore, all at once, millions of scenarios, and see where the critical differences lie.
I will try to use mathematical models of zombie outbreak to discuss how mathematical modelling and mathematical ideas such as functions and phase transitions might enter the public consciousness in a positive way.
The Erdos-Ko-Rado (EKR) Theorem is a classical result in combinatorial set theory and is absolutely fundamental to the development of extremal set theory. It answers the following question: What is the maximum size of a family F of k-element subsets of the set {1,2,...,n} such that any two sets in F have nonempty intersection?
In the 1980's Manickam, Miklos and Singhi (MMS) asked the following question: Given that a set A of n real numbers has sum zero, what is the smallest possible number of k-element subsets of A with nonnegative sum? They conjectured that the optimal solutions for this problem look precisely like the extremal families in the EKR theorem. This problem has been open for almost 30 years and many partial results have been proved. There was a burst of activity in 2012, culminating in a proof of the conjecture in October 2013.
This series of talks will explore the basic EKR theorem and discuss some of the recent results on the MMS conjecture.
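For tiny cases, the EKR bound can be verified by exhaustive search. A sketch (illustrative only; the backtracking is exponential in the number of k-subsets):

```python
from itertools import combinations
from math import comb

def max_intersecting_family(n, k):
    """Largest pairwise-intersecting family of k-subsets of {1,...,n},
    found by exhaustive backtracking (feasible only for tiny n and k)."""
    sets = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    best = 0
    def extend(chosen, start):
        nonlocal best
        best = max(best, len(chosen))
        for i in range(start, len(sets)):
            s = sets[i]
            if all(s & t for t in chosen):   # s intersects everything so far
                chosen.append(s)
                extend(chosen, i + 1)
                chosen.pop()
    extend([], 0)
    return best

# EKR: for n >= 2k the maximum is C(n-1, k-1), attained by the "star"
# of all k-sets containing one fixed element.
print(max_intersecting_family(6, 2), comb(5, 1))  # 5 5
```

Already for n = 6, k = 2 the search confirms that nothing beats the star, which is the content of the EKR theorem in this case.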
Nowadays huge amounts of personal data are regularly collected in all spheres of life, creating interesting research opportunities but also a risk to individual privacy. We consider the problem of protecting confidentiality of records used for statistical analysis, while preserving as much of the data utility as possible. Since OLAP cubes are often used to store data, we formulate a combinatorial problem that models a procedure to anonymize 2-dimensional OLAP cubes. In this talk we present a parameterised approach to this problem.
New questions regarding the reliability and verifiability of scientific findings are emerging as computational methods are being increasingly used in research. In this talk I will present a framework for incorporating computational research into the scientific method, namely standards for carrying out and disseminating research to facilitate reproducibility. I will present some recent empirical results on data and code publication; the pilot project http://ResearchCompendia.org for linking data and codes to published results and validating findings; and the "Reproducible Research Standard" for ensuring the distribution of legally usable data and code. If time permits, I will present preliminary work on assessing the reproducibility of published computational findings based on the 2012 ICERM workshop on Reproducibility in Computational and Experimental Mathematics report [1]. Some of this research is described in my forthcoming co-edited books "Implementing Reproducible Research" and "Privacy, Big Data, and the Public Good."
[1] D. H. Bailey, J. M. Borwein, Victoria Stodden "Set the Default to 'Open'," Notices of the AMS, June/July 2013.
This PhD so far has focussed on two distinct optimisation problems pertaining to public transport, as detailed below:
Within public transit systems, so-called flexible transport systems have great potential to offer increases in mobility and convenience and decreases in travel times and operating costs. One such service is the Demand Responsive Connector, which transports commuters from residential addresses to transit hubs via a shuttle service, from where they continue their journey via a traditional timetabled service. We investigate various options for implementing a demand responsive connector and the associated vehicle scheduling problems. Previous work has only considered regional systems, where vehicles drop passengers off at a predetermined station -- we relax that condition and investigate the benefits of allowing alternative transit stations. An extensive computational study shows that the more flexible system offers cost advantages over regional systems, especially when transit services are frequent, or transit hubs are close together, with little impact on customer convenience.
A complement to public transport systems is ad hoc ride sharing, where participants (either offering or requesting rides) are paired with participants wanting the reverse, by some central service provider. Although such schemes are currently in operation, the lack of certainty offered to riders (i.e. the risk of not finding a match, or a driver not turning up) discourages potential users. Critically, this can prevent the system from reaching a "critical mass" and becoming self-sustaining. We are investigating the situation where the provider has access to a fleet of dedicated drivers, and may use these to service riders, especially when such a system is in its infancy. We investigate some of the critical pricing issues surrounding this problem, present some optimisation models and provide some computational results.
We show that ESO universal Horn logic (existential second-order logic where the first-order part is a universal Horn formula) is insufficient to capture P, the class of problems decidable in polynomial time. This is true in the presence of a successor relation in the input vocabulary. We provide two proofs -- one based on reduced products of two structures, and another based on approximability theory (the second proof is under the assumption that P is not the same as NP). The difference between the results here and those in (Graedel 1991) is due to the fact that the expressions this talk deals with are at the "structure level", whereas the expressions in (Graedel 1991) are at the "machine level" since they encode machine computations -- a case of "Easier DONE than SAID".
I will describe the research I have been doing with Fran Aragon and others, using graphical methods to study the properties of real numbers. There will be very few formulas and more pictures and movies.
In this final talk of the sequence we will sketch Blinovsky's recent proof of the conjecture: whenever n is at least 4k, and A is a set of n numbers with sum 0, then there are at least (n-1) choose (k-1) subsets of size k which have non-negative sum. The nice aspect of the proof is the combination of hypergraph concepts with convex geometry arguments and a Berry-Esseen inequality for approximating the hypergeometric distribution. The not so nice aspect (which will be omitted in the talk) is the amount of very tedious algebraic manipulation that is necessary to verify the required estimates. There are slides for all four MMS talks here.
The TELL ME agent based model will simulate personal protective decisions such as vaccination or hand hygiene during an influenza epidemic. Such behaviour may be adopted in response to communication from health authorities, taking into account perceived influenza risk. The behaviour decisions are to be modelled with a combination of personal attitude, average local attitude, the local number of influenza cases and the case fatality rate. The model is intended to be used to understand the effects of choices about how to communicate with citizens about protecting themselves from epidemics. I will discuss the TELL ME model design, the cognitive theory supporting the design and some of the expected problems in building the simulation.
This year is the fiftieth anniversary of Ringel's posing of the well-known graph decomposition problem called the Oberwolfach problem. In this series of talks, I shall examine what has been accomplished so far, take a look at current work, and suggest a possible new avenue of approach. The material to be presented essentially will be self-contained.
In this talk I will present a general method of finding simple groups acting on trees. This process, beginning with any group $G$ acting on a tree, produces more groups known as the $k$-closures of $G$. I will use several examples to highlight the versatility of this method, and I will discuss the properties of the $k$-closures that allow us to find abstractly simple groups.
This is joint work with Geoffrey Lee.
The set of permutations generated by passing an ordered sequence through a stack of depth 2 followed by an infinite stack in series was shown to be finitely based by Elder in 2005. In this new work we obtain an algebraic generating function for this class, by showing it is in bijection with an unambiguous context-free grammar.
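The general method behind the result is the Chomsky-Schützenberger correspondence: an unambiguous context-free grammar translates directly into a system of algebraic equations for the generating function. A sketch with a simpler language than the permutation class of the talk (Dyck words, whose unambiguous grammar S -> eps | "(" S ")" S gives S(x) = 1 + x^2 S(x)^2):

```python
from functools import lru_cache
from math import comb

# Because the grammar is unambiguous, counting derivations counts words
# exactly once, so the recursion below mirrors S(x) = 1 + x^2 S(x)^2.
@lru_cache(maxsize=None)
def count(n):
    """Number of Dyck words of length 2n, via the grammar recursion."""
    if n == 0:
        return 1          # the empty word, from S -> eps
    # "(" S ")" S : split the remaining n-1 pairs between the two S's
    return sum(count(k) * count(n - 1 - k) for k in range(n))

print([count(n) for n in range(6)])                   # [1, 1, 2, 5, 14, 42]
print([comb(2 * n, n) // (n + 1) for n in range(6)])  # Catalan numbers, same
```

Solving S(x) = 1 + x^2 S(x)^2 gives the familiar algebraic (Catalan) generating function; the talk's class yields a more elaborate but still algebraic solution by the same route.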
In these two talks I want to talk, both generally and personally, about the use of tools in the practice of modern research mathematics. To focus my attention I have decided to discuss the way I and my research group members have used primarily computational tools (visual, numeric and symbolic) during the past five years. When the tools are relatively accessible I shall exhibit details; when they are less accessible I settle for illustrations and discussion of process.
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood, 1885-1977, wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself).
Over the past five years, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities - and the growing ease of programming - of modern multi-core computing environments. But, at least as much, it has been driven by paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
The idea of an almost automorphism of a tree will be introduced, as well as what we are calling quasi-regular trees. I will then outline what I have been doing in regard to the almost automorphisms of almost quasi-regular trees with two valencies and the challenges that come with using more valencies.
The American mathematical research community experienced remarkable changes over the course of the three decades from 1920 to 1950. The first ten years witnessed the "corporatization" and "capitalization" of the American Mathematical Society, as mathematicians like Oswald Veblen and George Birkhoff worked to raise private, governmental, and foundation monies in support of research-level mathematics. The next decade, marked by the stock market crash and Depression, almost paradoxically witnessed the formation and building up of a number of strongly research-oriented departments across the nation at the same time that noted mathematical refugees were fleeing the ever-worsening political situation in Europe. Finally, the 1940s saw the mobilization of American research mathematicians in the war effort and their subsequent efforts to ensure that pure mathematical research was supported as the Federal government began to open its coffers in the immediate postwar period. Ultimately, the story to be told here is a success story, but one of success in the face of many obstacles. At numerous points along the way, things could have turned out dramatically differently. This talk will explore those historical contingencies.
About the speaker:
Karen Parshall is Professor of History and Mathematics at the University of Virginia, where she has served on the faculty since 1988. Her research focuses primarily on the history of science and mathematics in America and on the history of 19th- and 20th-century algebra. In addition to exploring technical developments of algebra—the theory of algebras, group theory, algebraic invariant theory—she has worked on more thematic issues such as the development of national mathematical research communities (specifically in the United States and Great Britain) and the internationalization of mathematics in the nineteenth and twentieth centuries. Her most recent book (co-authored with Victor Katz), Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century, will be published by Princeton University Press in June 2014.
This talk is a practice talk for an invited talk I will soon give in Indonesia, in which I was asked to present on Education at a conference on Graph Theory.
In 1929 Alfred North Whitehead wrote: "The university imparts information, but it imparts it imaginatively. At least, this is the function it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration, transforms knowledge. A fact is no longer a bare fact: it is invested with all its possibilities."
In the light and inspiration of Whitehead's quote, I will discuss some aspects of the problem and challenge of mathematical education as we meet it in Universities today, with reference to some of the ways that combinatorics may be an ideal vehicle for sharing authentic mathematical experiences with diverse students.
We begin the talk with the story of Dido and the Brachistochrone problem. We show how these two problems lead to the two most fundamental problems of the calculus of variations. The Brachistochrone problem leads to the basic problem of the calculus of variations, and that leads to the Euler-Lagrange equation. We show the link between the Euler-Lagrange equations and the laws of classical mechanics.
We also discuss the Legendre conditions and Jacobi conjugate points, which lead to sufficient conditions for weak local minimum points. Dido's problem leads to the problem of Lagrange, in which Lagrange introduces his multiplier rule. We also speak briefly about the problem of Bolza, and further discuss how the class of extremals can be enlarged, the issue of existence of solutions in the calculus of variations, Tonelli's direct method, and some more facts on the quest for multiplier rules.
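For reference, the basic problem of the calculus of variations and the Euler-Lagrange equation discussed above, in their standard forms (stated here for completeness):

```latex
% Basic problem of the calculus of variations:
\min_{y(\cdot)} \; J[y] = \int_a^b L\bigl(x,\, y(x),\, y'(x)\bigr)\,dx,
\qquad y(a) = A, \quad y(b) = B.
% Euler--Lagrange (first-order necessary) condition for an extremal y:
\frac{\partial L}{\partial y} \;-\; \frac{d}{dx}\,\frac{\partial L}{\partial y'} \;=\; 0.
```

Applying this with L equal to kinetic minus potential energy recovers Newton's equations, which is the link to classical mechanics mentioned above.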
Using gap functions to devise error bounds for some special classes of monotone variational inequalities is a fruitful venture, since it allows us to obtain error bounds for certain classes of convex optimization problems. Note that a Hoffman-type approach to obtaining error bounds for the solution set of a convex programming problem does not turn out to be fruitful, so using the vehicle of variational inequalities seems fundamental in this case. We begin the discussion by introducing several popular gap functions for variational inequalities, such as the Auslender gap function and Fukushima's regularized gap function, and show how error bounds can be created out of them. We also spend a brief time with gap functions for variational inequalities with set-valued maps, which correspond to non-smooth convex optimization problems. We then shift our focus to creating error bounds using the dual gap function, which is, to the best of our knowledge, the only convex gap function in the literature. In fact this gap function had never been used for creating error bounds. Error bounds can be used as stopping criteria, and thus the dual gap function can be used both to solve the variational inequality and to develop a stopping criterion. We present several recent results on error bounds using the dual gap function and also provide an application to quasiconvex optimization.
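For reference, the two gap functions named above, for the variational inequality VI(F, K) of finding $x \in K$ with $\langle F(x), y - x\rangle \ge 0$ for all $y \in K$ (standard definitions, stated here for completeness):

```latex
% Auslender gap function:
g(x) \;=\; \sup_{y \in K}\, \langle F(x),\, x - y \rangle .
% Fukushima's regularized gap function (parameter \alpha > 0):
g_\alpha(x) \;=\; \max_{y \in K}\,
   \Bigl\{ \langle F(x),\, x - y \rangle - \tfrac{\alpha}{2}\,\|x - y\|^2 \Bigr\}.
% Both are nonnegative on K and vanish exactly at the solutions of VI(F,K),
% which is what makes them usable as residuals in error bounds.
```

The regularization term makes the inner maximization strongly concave, so $g_\alpha$ is finite and smooth under mild assumptions, unlike the Auslender function on unbounded K.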
I will solve a variety of mathematical problems in Maple. These will come from geometry, number theory, analysis and discrete mathematics.
A companion book chapter is http://carma.newcastle.edu.au/jon/hhm.pdf.
The need for well-trained secondary mathematics teachers is well documented. In this talk we will discuss strategies we have developed at JCU to address the quality of graduating mathematics teachers. These strategies are broadly grouped as (i) having students develop a sense of how they learn mathematics and the skills they can work on to improve their learning of mathematics, and (ii) the need for specific mathematics content subjects for pre-service secondary mathematics teachers.
Our aim in this talk is to show that the D-gap function can play a pivotal role in developing inexact descent methods for solving monotone variational inequality problems in which the feasible set is a general closed convex set rather than just the non-negative orthant. We also focus on the issue of regularization of variational inequalities. Friedlander and Tseng showed in 2007 that regularizing the convex objective function by another convex function, chosen appropriately in practice, can make solving the problem simpler, and they provided a criterion for exact regularization of convex optimization problems. In this talk we ask to what extent the idea of exact regularization can be extended to the context of variational inequalities, and we show the central role played by the dual gap function in this analysis.
In this talk we discuss the importance of M-stationarity conditions for a special class of one-stage stochastic mathematical programming problems with complementarity constraints (SMPCC, for short). The M-stationarity concept is well known for deterministic MPCC problems, and using results for deterministic MPCCs we can derive M-stationarity for SMPCC problems under some well-known constraint qualifications. It is well observed that under the MPCC-linear independence constraint qualification we obtain strong stationarity conditions at a local minimum, a stronger notion than M-stationarity; the same result can be derived for SMPCC problems under SMPCC-LICQ. The question that then arises is: why study M-stationarity under the assumption of SMPCC-LICQ? To answer this question we discuss the sample average approximation (SAA) method, a common technique for solving stochastic optimization problems, in which one discretizes the underlying probability space and approximates the expectation functionals using the strong Law of Large Numbers. The main result of this discussion is as follows: if we consider a sequence of M-type Fritz John points of the SAA problems, then any accumulation point of this sequence is an M-stationary point under SMPCC-LICQ. Such a result, in general, does not hold for strong stationarity conditions.
It is axiomatic in mathematics research that all steps of an argument or proof are open to scrutiny. However, a proof based even in part on commercial software is hard to assess, because the source code---and sometimes even the algorithm used---may not be made available. There is the further problem that a reader of the proof may not be able to verify the author's work unless the reader has access to the same software.
For this reason open-source software systems have always enjoyed some use by mathematicians, but only recently have systems of sufficient power and depth become available which can compete with---and in some cases even surpass---commercial systems.
Most mathematicians and mathematics educators seem to gravitate to commercial systems partly because such systems are better marketed, but also in the view that they may enjoy some level of support. But this comes at the cost of initial purchase, plus annual licensing fees. The current state of tertiary funding in Australia means that for all but the very top tier of universities, the expense of such systems is harder to justify.
For educators, a problem is making the system available to students: it is known that students get the most use from a system when they have unrestricted access to it, at home as well as at their institution. Here again, the use of an open-source system makes it trivial to provide such access.
This talk aims to introduce several very powerful and mature systems: the computer algebra systems Sage, Maxima and Axiom; the numerical systems Octave and Scilab; and the assessment system WeBWorK (or as many of those as time permits). We will briefly describe these systems: their history, current status, usage, and comparison with commercial systems. We will also indicate ways in which anybody can be involved in their development. The presenter will describe his own experiences in using these software systems, and his students' attitudes to them.
Depending on audience interests and expertise, the talk might include looking at a couple of applications: geometry and Gröbner bases, derivation of Runge-Kutta explicit formulas, cryptography involving elliptic curves and finite fields, or digital image processing.
The talk will not assume any particular mathematics beyond undergraduate material or material with which the audience is comfortable, and will be as polemical as the audience allows.
The additive or linearized polynomials were introduced by Ore in 1933 as an analogy over finite fields to his theory of difference and difference equations over function fields. The additive polynomials over a finite field $F=GF(q)$, where $q=p^e$ for some prime $p$, are those of the form
$f = f_0 x + f_1 x^p + f_2 x^{p^2} + \cdots + f_m x^{p^m}$ in $F[x]$.
They form a non-commutative left-euclidean principal ideal domain under the usual addition and functional composition, and possess a rich structure in both their decomposition structures and root geometries. Additive polynomials have been employed in number theory and algebraic geometry, and applied to constructing error-correcting codes and cryptographic protocols. In this talk we will present fast algorithms for decomposing and factoring additive polynomials, and also for counting the number of decompositions with particular degree sequences.
Algebraically, we show how to reduce the problem of decomposing additive polynomials to decomposing a related associative algebra, the eigenring. We give computationally efficient versions of the Jordan-Hölder and Krull-Schmidt theorems in this context to describe all possible factorizations. Geometrically, we show how to compute a representation of the Frobenius operator on the space of roots, and show how its Jordan form can be used to count the number of decompositions. We also describe an inverse theory, from which we can construct and count the number of additive polynomials with specified factorization patterns.
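As a quick illustration of the defining property (a self-contained sketch, not one of the algorithms above): over a field of characteristic $p$, the Frobenius map $x \mapsto x^p$ is additive, and composing additive polynomials gives additive polynomials. The code below builds GF(4) by hand (elements $c_0 + c_1 a$ with $a^2 = a + 1$, characteristic 2) and checks additivity of $f(x) = x^2$ exhaustively:

```python
# GF(4) as pairs (c0, c1) representing c0 + c1*a, with a^2 = a + 1 (char 2).
def add(u, v):
    # addition is coordinatewise XOR in characteristic 2
    return (u[0] ^ v[0], u[1] ^ v[1])

def mul(u, v):
    # (u0 + u1 a)(v0 + v1 a) = u0 v0 + (u0 v1 + u1 v0) a + u1 v1 a^2,
    # then substitute a^2 = a + 1
    c0 = (u[0] & v[0]) ^ (u[1] & v[1])
    c1 = (u[0] & v[1]) ^ (u[1] & v[0]) ^ (u[1] & v[1])
    return (c0, c1)

def frob(u):
    # the additive polynomial f(x) = x^2 (the Frobenius map)
    return mul(u, u)

F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]
# additivity: f(u + v) = f(u) + f(v) for every pair of field elements
assert all(frob(add(u, v)) == add(frob(u), frob(v)) for u in F4 for v in F4)
# composing f with itself gives x^4, which is the identity map on GF(4)
assert all(frob(frob(u)) == u for u in F4)
```

The composition check is a toy instance of the decomposition structure the talk studies: $x^4 = x^2 \circ x^2$ is a decomposition of an additive polynomial into additive factors.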
Some of this is joint work with Joachim von zur Gathen (Bonn) and Konstantin Ziegler (Bonn).
I am refereeing a manuscript in which a new construction for producing graphs from a group is given. There are some surprising aspects of this new method and that is what I shall discuss.
The Australian Mathematical Sciences Student Conference is held annually for Australian postgraduate and honours students of any mathematical science. The conference brings students together, gives an opportunity for presentation of work, facilitates dialogue, and encourages collaboration, within a friendly and informal atmosphere.
Visit the conference website for more details.
I will survey my career both mathematically and personally offering advice and opinions, which should probably be taken with so many grains of salt that it makes you nauseous. (Note: Please bring with you a sense of humour and all of your preconceived notions of how your life will turn out. It will be more fun for everyone that way.)
What the three elements of the title have in common is the utility of graph searching as a model. In this talk I shall discuss the relatively brief history of graph searching, several models currently being employed, several significant results, unsolved conjectures, and the vast expanse of unexplored territory.
I will talk about the geometric properties of conic problems and their interplay with ill-posedness and the performance of numerical methods. This includes some new results on the facial structure of general convex cones, preconditioning of feasibility problems and characterisations of ill-posed systems.
Many biological environments, both intracellular and extracellular, are crowded by large molecules or inert objects which can impede the motion of cells and molecules. It is therefore essential to develop appropriate mathematical tools which can reliably predict and quantify collective motion through crowded environments.
Transport through crowded environments is often classified as anomalous, rather than classical, Fickian diffusion. Over the last 30 years many studies have sought to describe such transport processes using either a continuous time random walk or a fractional order differential equation. For both these models the transport is characterized by a parameter $\alpha$, where $\alpha=1$ is associated with Fickian diffusion and $\alpha<1$ is associated with anomalous subdiffusion. In this presentation we consider the motion of a single agent migrating through a crowded environment populated by impenetrable, immobile obstacles, and we estimate $\alpha$ using mean squared displacement data. These results are compared with computer simulations mimicking the transport of a population of such agents through a similar crowded environment, where we match averaged agent density profiles to the solution of a related fractional order differential equation to obtain an alternative estimate of $\alpha$. I will examine the relationship between our estimates of $\alpha$ and the properties of the obstacle field for both a single agent and a population of agents: in both cases $\alpha$ decreases as the obstacle density increases, and the rate of decrease is greater for smaller obstacles. These very simple computer simulations suggest that it may be inappropriate to model transport through a crowded environment using widely reported approaches such as power laws for the mean squared displacement and fractional order differential equations for the averaged agent density profiles.
More details can be found in Ellery, Simpson, McCue and Baker (2014) The Journal of Chemical Physics, 140, 054108.
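The kind of simulation described above can be sketched in a few lines. The following is an illustration only — the lattice size, agent numbers and regression window are arbitrary choices, not the parameters of the paper: unbiased walkers move on a periodic lattice, steps into immobile obstacle sites are aborted, and $\alpha$ is estimated as the slope of $\log(\mathrm{MSD})$ against $\log t$.

```python
import math
import random

def simulate_msd(obstacle_density, n_agents=200, n_steps=400, size=64, seed=1):
    """Average mean squared displacement of unbiased lattice random
    walkers; attempted steps onto immobile obstacle sites are aborted."""
    rng = random.Random(seed)
    obstacles = {(x, y) for x in range(size) for y in range(size)
                 if rng.random() < obstacle_density}
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_agents):
        while True:  # start each walker on an empty site
            pos = (rng.randrange(size), rng.randrange(size))
            if pos not in obstacles:
                break
        ux = uy = 0  # unwrapped displacement from the start site
        for t in range(1, n_steps + 1):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nxt = ((pos[0] + dx) % size, (pos[1] + dy) % size)
            if nxt not in obstacles:
                pos, ux, uy = nxt, ux + dx, uy + dy
            msd[t] += ux * ux + uy * uy
    return [m / n_agents for m in msd]

def estimate_alpha(msd, t0=10):
    """Least-squares slope of log(MSD) vs log(t): 1 for Fickian
    diffusion, below 1 for (transient) subdiffusion."""
    pts = [(math.log(t), math.log(msd[t])) for t in range(t0, len(msd))]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den
```

With no obstacles the estimate comes out near 1, as Fickian diffusion predicts; adding obstacles lowers the measured slope over finite time windows, which is exactly the effect the abstract warns against over-interpreting.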
The talk will provide a brief overview of the findings of two completed research projects and one ongoing project related to the knowledge and beliefs of teachers of school mathematics. It will consider some existing frameworks for types of teacher knowledge, and the place of teachers’ beliefs and confidence in relation to these, as well as touching on how a broad construct of teacher knowledge might develop.
We shall finish our look at two-sided group graphs.
The relentless advance of computer technology, a gift of Moore's Law, and the data deluge available via the Internet and other sources, have been a boon to both scientific research and business/industry. Researchers in many fields are hard at work exploiting this data. The discipline of "machine learning," for instance, attempts to automatically classify, interpret and find patterns in big data. It has applications as diverse as supernova astronomy, protein molecule analysis, cybersecurity, medicine and finance. However, with this opportunity comes the danger of "statistical overfitting," namely attempting to find patterns in data beyond prudent limits, thus producing results that are statistically meaningless.
The problem of statistical overfitting has recently been highlighted in mathematical finance. A just-published paper by the present author, Jonathan Borwein, Marcos Lopez de Prado and Jim Zhu, entitled "Pseudo-Mathematics and Financial Charlatanism," calls into question the present practice of using historical stock market data to "backtest" a new proposed investment strategy or exchange-traded fund. We demonstrate that it is in fact very easy to overfit stock market data, given the powerful computer technology available, and, further, that without disclosure of how many variations were tried in the design of a proposed investment strategy, it is impossible for potential investors to know if the strategy has been overfit. Hence, many published backtests are probably invalid, and this may explain why so many proposed investment strategies, which look great on paper, later fall flat when actually deployed.
In general, we argue that not only do those who directly deal with "big data" need to be better aware of the methodological and statistical pitfalls of analyzing this data, but those who observe problems of this sort arising in their profession need to be more vocal about them. Otherwise, to quote our "Pseudo-Mathematics" paper, "Our silence is consent, making us accomplices in these abuses."
(see PDF)
The Lagrange multiplier method is fundamental in dealing with constrained optimization problems and is also related to many other important results.
In these two talks we first survey several different ideas used in proving the Lagrange multiplier rule and then concentrate on the variational approach.
We will first present a variational proof of the Lagrange multiplier rule in the convex case, and then consider the general case and its relationship with other results.
These talks are a continuation of e-mail discussions with Professor Jon Borwein and are very informal.
Reproducibility is emerging as a major issue for highly parallel computing, in much the same way (and for many of the same reasons) that it is emerging as an issue in other fields of science, technology and medicine, namely the growing numbers of cases where other researchers cannot reproduce published results. This talk will summarize a number of these issues, including the need to carefully document computational experiments, the growing concern over numerical reproducibility and, once again, the need for responsible reporting of performance. Have we learned the lessons of history?
My talk will be on the projection/reflection methods and the application of tools from convex and variational analysis to optimisation problems, and I will talk about my thesis problem which focuses on the following:
We consider convexity conditions ensuring the monotonicity of right and left Riemann sums of a function $f:[0,1]\rightarrow \mathbb{R}$, applying our results in particular to functions such as $f(x) = 1/\left(1+x^2\right)$.
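The monotonicity in question is easy to observe numerically. The sketch below (an illustration only, not the talk's proof technique) computes the left and right Riemann sums of $f(x)=1/(1+x^2)$ on $[0,1]$: for this function the left sums decrease and the right sums increase, squeezing the integral $\arctan(1)=\pi/4$ from above and below.

```python
def left_riemann(f, n):
    # left endpoint sum: sample at k/n for k = 0, ..., n-1
    return sum(f(k / n) for k in range(n)) / n

def right_riemann(f, n):
    # right endpoint sum: sample at k/n for k = 1, ..., n
    return sum(f(k / n) for k in range(1, n + 1)) / n

f = lambda x: 1 / (1 + x * x)
L = [left_riemann(f, n) for n in range(1, 60)]
R = [right_riemann(f, n) for n in range(1, 60)]
```

For a decreasing $f$ the left sums always overestimate and the right sums underestimate the integral; the monotonicity of each sequence in $n$ is the subtler property the talk's convexity conditions address.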
Usually, when we want to study permutation groups, we look first at the primitive permutation groups (transitive groups in which point stabilizers are maximal); in the finite case these groups are the basic building blocks from which all finite permutation groups are composed. Thanks to the seminal O'Nan-Scott Theorem and the Classification of the Finite Simple Groups, the structure of finite primitive permutation groups is broadly known.
In this talk I'll describe a new theorem of mine which extends the O'Nan-Scott Theorem to a classification of all primitive permutation groups with finite point stabilizers. This theorem describes the structure of these groups in terms of finitely generated simple groups.
The eighth edition of the conference series GAGTA (Geometric and Asymptotic Group Theory with Applications) will be held in Newcastle, Australia July 21-25 (Mon-Fri) 2014.
GAGTA conferences are devoted to the study of a variety of areas in geometric and combinatorial group theory, including asymptotic and probabilistic methods, as well as algorithmic and computational topics involving groups. In particular, areas of interest include group actions, isoperimetric functions, growth, asymptotic invariants, random walks, algebraic geometry over groups, algorithmic problems and their complexity, generic properties and generic complexity, and applications to non-commutative cryptography.
Visit the conference web site for more information.
A vast number of natural processes can be modelled by partial differential equations involving diffusion operators. The Navier-Stokes equations of fluid dynamics are one of the most popular such models, but many other equations describing flows involve diffusion processes. These equations are often non-linear and coupled, and theoretical analysis can provide only limited information on the qualitative behaviour of their solutions. Numerical analysis is then used to obtain a prediction of the fluid's behaviour.
In many circumstances, the numerical methods used to approximate the models must satisfy engineering or computational constraints. For example, in underground flows in porous media (involved in oil recovery, carbon storage or hydrogeology), the diffusion properties of the medium vary greatly between geological layers, and can be strongly skewed in one direction. Moreover, the available meshes used to discretise the equations may be quite irregular. The sheer size of the domain of study (a few kilometres wide) also calls for methods that can be easily parallelised and give good and stable results on relatively large grids. These constraints make the construction and study of numerical methods for diffusion models very challenging.
In the first part of this talk, I will present some numerical schemes, developed in the last 10 years and designed to discretise diffusion equations as encountered in reservoir engineering, with all the associated constraints. In the second part, I will focus on mathematical tools and techniques constructed to analyse the convergence of numerical schemes under realistic hypotheses (i.e. without assuming non-physical smoothness on the data or the solutions). These techniques are based on the adaptation to the discrete setting of functional analysis results used to study the continuous equations.
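For illustration only (this is not one of the schemes from the talk), the conservative structure shared by finite-volume discretisations of diffusion equations can be seen in a minimal explicit scheme for the 1-D heat equation $u_t = (\kappa u_x)_x$ with zero-flux boundaries: each cell is updated by the net flux through its two faces, so total mass is conserved by construction.

```python
def heat_fv_step(u, dx, dt, kappa=1.0):
    """One explicit finite-volume step: compute the diffusive flux at
    each interior cell face, then update each cell by the net flux
    through its faces. Zero flux at the end faces makes the total
    mass sum(u)*dx exactly conserved."""
    n = len(u)
    flux = [0.0] * (n + 1)                       # fluxes at cell faces
    for i in range(1, n):                        # interior faces only
        flux[i] = -kappa * (u[i] - u[i - 1]) / dx
    # explicit update; stable for dt <= dx**2 / (2 * kappa)
    return [u[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

u = [0.0, 0.0, 1.0, 0.0, 0.0]                    # initial spike of mass
for _ in range(50):
    u = heat_fv_step(u, dx=1.0, dt=0.2)
```

After many steps the spike relaxes towards the uniform state while the total mass stays exactly 1 — the discrete analogue of the conservation property that the schemes in the talk are designed to preserve on irregular meshes and anisotropic media.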
Colin Reid will present some thoughts on limits of contraction groups.
I shall be describing a largely unexplored concept in graph theory which is, I believe, an ideal thesis topic. I shall be presenting this at the CIMPA workshop in Laos in December.
Mathematics can often seem almost too good to be true. This sense that mathematics is marvellous enlivens learning and stimulates research, but we tend to let remarkable things pass without remark after we become familiar with them. The miracles of Pythagorean triples and eigenvalues will be highlights of this talk.
The talk will include some ideas of what could be blended into our teaching program.
We give some background to the metric basis problem (or resolving set) of a graph. We discuss various resolving sets with different conditions forced on them. We mainly stress the ideas of strong metric basis and partition dimension of graphs. We give the necessary literature background on these concepts and some preliminary results. We present our new results obtained so far as part of the research during my candidature. We also list the research problems I propose to study during the remainder of my PhD candidature and we present a tentative timeline of my research activities.
This week I shall start a series of talks on basic pursuit-evasion in graphs (frequently called cops and robber in the literature). We shall do some topological graph theory leading to an intriguing conjecture, and we'll look at a characterization problem.
The Diophantine Problem in group theory can be stated as: is it algorithmically decidable whether an equation whose coefficients are elements of a given group has at least one solution in that group?
The talk will be a survey on this topic, with emphasis on what is known about solving equations in free groups. I will also present some of the algebraic geometry over groups developed in the last 20 years, and the connections to logic and geometry. I will conclude with results concerning the asymptotic behavior of satisfiable homogeneous equations in surface groups.
Jon Borwein will discuss CARMA's new "Risk and finance study group". Please come and learn about the opportunities. See also http://www.financial-math.org/ and http://www.financial-math.org/blog/.
This week I shall continue the discussion of searching graphs.
We present a PSPACE algorithm to compute a finite graph of exponential size that describes the set of all solutions of equations in free groups with rational constraints. This result became possible due to the recently invented recompression technique of Artur Jez. We also show that it is decidable in PSPACE whether the set of all solutions is finite; if it is, then the length of a longest solution is at most doubly exponential.
This talk is based on a joint paper with Artur Jez and Wojciech Plandowski (arXiv:1405.5133 and LNCS 2014, Proceedings CSR 2014, Moscow, June 7 -- 11, 2014).
Ben will attempt to articulate what he has been meaning to work on. That is, choosing representatives with smallest 1-norm in an effort to find a nice bound on the number of vertices on level 1 of the corresponding rooted almost quasi-regular tree with 1 defect, and other ideas on choosing good representatives.
Brian Alspach will continue discussing searching graphs embedded on the torus.
The restricted product over $X$ of copies of the $p$-adic numbers $\mathbb{Q}_p$, denoted $\mathbb{Q}_p(X)$, is self-dual and is the natural $p$-adic analogue of Hilbert space. The additive group of this space is locally compact and the continuous endomorphisms of the group are precisely the continuous linear operators on $\mathbb{Q}_p(X)$.
Attempts to develop a spectral theory for continuous linear operators on $\mathbb{Q}_p(X)$ will be described at an elementary level. The Berkovich spectral theory over non-Archimedean fields will be summarised and the spectrum of the linear operator $T$ compared with the scale of $T$ as an endomorphism of $(\mathbb{Q}_p(X),+)$.
The original motivation for this work, which is joint with Andreas Thom (Leipzig), will also be briefly discussed. A certain result that holds for representations of any group on a Hilbert space, proved by operator theoretic methods, can only be proved for representations of sofic groups on $\mathbb{Q}_p(X)$ and it is thought that the difficulty might lie with the lack of understanding of linear operators on $\mathbb{Q}_p(X)$ rather than with non-sofic groups.
This forum is a follow-on from the seminar that Professor Willis gave three weeks prior, on maths that seems too good to be true, and his ideas for incorporating the surprising and enlivening into what and how we teach; he gave as exemplars the miracles of Pythagorean triples and eigenvalues. A question raised in the discussion at that seminar was whether and how we might use assessment to encourage the kinds of learning we would like. This forum will be an opportunity to further that conversation.
Jeff, Andrew and Massoud have each kindly agreed to give us 5 minute presentations relating to the latter year maths courses that they have recently been teaching, to get our forum started. Jeff may speak on his developments in his new course on Fourier methods, Andrew will talk about some of the innovations that were introduced into Topology in the last few offerings which he has been using and further developing, and Massoud has a range of OR courses he might speak about.
Everyone is encouraged to share examples of their own practice or ideas that they have that may be of interest to others.
A locating-total dominating set (LTDS) in a connected graph $G$ is a total dominating set $S$ of $G$ such that for every two vertices $u$ and $v$ in $V(G)-S$, $N(u) \cap S \neq N(v) \cap S$. Determining the minimum cardinality of a locating-total dominating set, denoted $\gamma_t^l(G)$, is called the locating-total domination problem. We have improved the lower bound obtained by M. A. Henning and N. J. Rad [1], and have also proved that the bound obtained is sharp for some special families of regular graphs.
[1] M. A. Henning and N. J. Rad, Locating-total dominations in graphs, Discrete Applied Mathematics, 160(2012), 1986-1993.
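For small graphs the definition above can be checked directly by brute force. The sketch below (illustrative only, not the method of the paper) computes $\gamma_t^l$ for the 6-cycle by testing every vertex subset in increasing size:

```python
from itertools import combinations

def is_ltds(adj, S):
    """Total domination: every vertex (including those in S) has a
    neighbour in S.  Location: the traces N(v) ∩ S are pairwise
    distinct over the vertices v outside S."""
    if any(not (adj[v] & S) for v in range(len(adj))):
        return False
    codes = [frozenset(adj[v] & S) for v in range(len(adj)) if v not in S]
    return len(codes) == len(set(codes))

def ltds_number(adj):
    """Minimum size of a locating-total dominating set, by exhaustive
    search over subsets in increasing cardinality."""
    n = len(adj)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if is_ltds(adj, set(S)):
                return k

# the cycle C_6, with adjacency given as neighbour sets
c6 = [{(i - 1) % 6, (i + 1) % 6} for i in range(6)]
```

For $C_6$ the search returns 4: for instance $S=\{0,1,3,4\}$ is an LTDS, while no 3-element set even totally dominates the cycle.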
8:30 am | Registration, coffee and light breakfast
9:00 am | Director's Welcome
9:30 am | Session: "Research at CARMA"
10:30 am | Morning tea
11:00 am | Session: "Academic Liaising"
11:30 am | Session: "Education/Outreach Activities"
12:30 pm | Lunch
2:00 pm | Session: "Future of Research at the University"
2:30 pm | Session: "Future Planning for CARMA"
3:30 pm | Afternoon tea
4:00 pm | Session: Talks by members (to 5:20 pm)
6:00 pm | Dinner
In this talk we consider economic Model Predictive Control (MPC) schemes. "Economic" means that the MPC stage cost models economic considerations (like maximal yield, minimal energy consumption...) rather than merely penalizing the distance to a pre-computed steady state or reference trajectory. In order to keep implementation and design simple, we consider schemes without terminal constraints and costs.
In the first (longer) part of the talk, we summarize recent results on the performance and stability properties of such schemes for nonlinear discrete time systems. Particularly, we present conditions under which one can guarantee practical asymptotic stability of the optimal steady state as well as approximately optimal averaged and transient performance. Here, dissipativity of the underlying optimal control problems and the turnpike property are shown to play an important role (this part is based on joint work with Tobias Damm, Marleen Stieler and Karl Worthmann).
In the second (shorter) part of the talk we present an application of an economic MPC scheme to a Smart Grid control problem (based on joint work with Philipp Braun, Christopher Kellett, Steven Weller and Karl Worthmann). While economic MPC shows good results for this control problem in numerical simulations, several aspects of this application are not covered by the available theory. This is explained in the last part of the talk, along with some suggestions on how to overcome this gap.
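The receding-horizon mechanism described above can be sketched very compactly. The following toy implementation (a generic illustration under arbitrary choices of system, cost and control grid — not the authors' scheme) minimises the summed stage cost over a finite horizon without terminal constraints or costs, applies only the first control, and repeats:

```python
import itertools

def economic_mpc(f, stage_cost, x0, controls, horizon, n_steps):
    """Receding-horizon MPC without terminal constraints or costs:
    at each step, minimise the summed stage cost over the horizon by
    exhaustive search over a finite control grid, then apply only the
    first control of the minimising sequence."""
    x, trajectory = x0, [x0]
    for _ in range(n_steps):
        best_cost, best_u = float("inf"), None
        for seq in itertools.product(controls, repeat=horizon):
            xi, cost = x, 0.0
            for u in seq:                 # simulate the candidate sequence
                cost += stage_cost(xi, u)
                xi = f(xi, u)
            if cost < best_cost:
                best_cost, best_u = cost, seq[0]
        x = f(x, best_u)                  # apply first control only
        trajectory.append(x)
    return trajectory

# Toy example: scalar system x+ = x + u with a quadratic stage cost,
# whose optimal steady state is the origin.  An "economic" cost would
# simply replace stage_cost with a yield or consumption model.
traj = economic_mpc(
    f=lambda x, u: x + u,
    stage_cost=lambda x, u: x**2 + 0.1 * u**2,
    x0=2.0,
    controls=[-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0],
    horizon=4,
    n_steps=12,
)
```

In this toy case the closed loop converges to the steady state even though no terminal condition enforces it — the behaviour that the dissipativity and turnpike conditions in the talk guarantee in general.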
Classical umbral calculus was introduced by Blissard in the 1860s and later studied by E. T. Bell and Rota. It is a symbolic computation method that is particularly efficient for proving identities involving elementary special functions such as Bernoulli or Hermite polynomials. I will show the link between this technique and moment representations, and provide examples of its application.
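As a small, standard example of the technique (a generic umbral computation, not necessarily one from the talk): writing $B^k$ umbrally for $B_k$, the identity $(B+1)^n = B^n$ for $n \ge 2$ expands to $\sum_{k=0}^{n-1} \binom{n}{k} B_k = 0$, which is a recurrence for the Bernoulli numbers.

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    """Bernoulli numbers B_0 .. B_n_max via the umbral identity
    (B + 1)^n = B^n (n >= 2), i.e. sum_{k=0}^{n-1} C(n,k) B_k = 0,
    solved for B_{n-1} at each step."""
    B = [Fraction(1)]
    for n in range(2, n_max + 2):
        s = sum(comb(n, k) * B[k] for k in range(n - 1))
        B.append(-s / n)          # exact rational arithmetic
    return B
```

The recurrence reproduces the familiar values $B_1 = -1/2$, $B_2 = 1/6$, $B_4 = -1/30$, and the vanishing of the odd Bernoulli numbers beyond $B_1$.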
This is the first in a series of lectures on this fascinating group.
If you’re enrolled in a BMath or Combined Maths degree or have Maths or Stats as a co-major, you’re invited to the B Math Party.
Come along for free food and soft drinks, meet fellow students and talk to staff about courses. Discover opportunities for summer research, Honours, Higher Degrees and scholarships.
The topological and measure structures carried by locally compact groups make them precisely the class of groups to which the methods of harmonic analysis extend. These methods involve study of spaces of real- or complex-valued functions on the group and general theorems from topology guarantee that these spaces are sufficiently large. When analysing particular groups however, particular functions deriving from the structure of the group are at hand. The identity function in the cases of $(\mathbb{R},+)$ and $(\mathbb{Z},+)$ are the most obvious examples, and coordinate functions on matrix groups and growth functions on finitely generated discrete groups are only slightly less obvious.
In the case of totally disconnected groups, compact open subgroups are essential structural features that give rise to positive integer-valued functions on the group. The set of values of $p$ for which the reciprocals of these functions belong to $L^p$ is related to the structure of the group and, when they do, the $L^p$-norm is a type of $\zeta$-function of $p$. This is joint work with Thomas Weigel of Milan.
This Thursday sees a return to graph searching in the discrete mathematics instructional seminar. I’ll be looking at characterization results.
More than 120 years after their introduction, Lyapunov's so-called First and Second Methods remain the most widely used tools for stability analysis of nonlinear systems. Loosely speaking, the Second Method states that if one can find an appropriate Lyapunov function then the system has some stability property. A particular strength of this approach is that one need not know solutions of the system in order to make definitive statements about stability properties. The main drawback of the Second Method is the need to find a Lyapunov function, which is frequently a difficult task.
Converse Lyapunov Theorems answer the question: given a particular stability property, can one always (in principle) find an appropriate Lyapunov function? In the first instalment of this two-part talk, we will survey the history of the field and describe several such Converse Lyapunov Theorems for both continuous and discrete-time systems. In the second instalment we will discuss constructive techniques for numerically computing Lyapunov functions.
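To make the Second Method concrete (a generic linear illustration, with an arbitrarily chosen system matrix, not material from the talk): for a Schur-stable discrete-time system $x^+ = Ax$, the convergent series $P = \sum_{k\ge 0} (A^\top)^k A^k$ solves the discrete Lyapunov equation $A^\top P A - P = -I$, and $V(x) = x^\top P x$ then decreases by exactly $\lVert x\rVert^2$ along every trajectory.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_T(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def lyapunov_P(A, n_terms=200):
    """For a Schur-stable A (spectral radius < 1), the truncated series
    P = sum_k (A^T)^k A^k solves A^T P A - P = -I, so V(x) = x^T P x
    certifies asymptotic stability without computing trajectories."""
    P = [[0.0, 0.0], [0.0, 0.0]]
    Ak = [[1.0, 0.0], [0.0, 1.0]]            # A^k, starting at k = 0
    for _ in range(n_terms):
        term = mat_mul(mat_T(Ak), Ak)
        P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]
        Ak = mat_mul(Ak, A)
    return P

def V(P, x):
    return sum(x[i] * P[i][j] * x[j] for i in range(2) for j in range(2))

A = [[0.5, 0.2], [0.0, 0.8]]                 # Schur-stable example
P = lyapunov_P(A)
x = [1.0, -1.0]
Ax = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
# Lyapunov decrease: V(Ax) - V(x) = -(x1^2 + x2^2)
```

The series construction is itself a baby converse result: stability of $A$ guarantees the series converges, producing the required Lyapunov function.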
In 1976, Ribe showed that if two Banach spaces are uniformly homeomorphic, then their finite dimensional subspaces are similar in some sense. This suggests that properties of Banach spaces which depend only on finitely many vectors should have a purely metric characterization. We will shortly discuss the history of the Ribe program, as well as some recent developments.
In particular:
It is known that the function s defined on an ordering of the 4^m monomial basis matrices of the real representation of the Clifford algebra Cl(m, m), where s(A) = 0 if A is symmetric and s(A) = 1 if A is skew, is a bent function. It is perhaps less well known that the function t, where t(A) = 0 if A is diagonal or skew and t(A) = 1 otherwise, is also a bent function, with the same parameters as s. The talk will describe these functions and their relation to Hadamard difference sets and strongly regular graphs.
The talk was originally presented at ADTHM 2014 in Lethbridge this year.
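Bentness itself is easy to test computationally: a Boolean function on $n$ variables is bent exactly when every Walsh-Hadamard coefficient has absolute value $2^{n/2}$. The sketch below checks this for the standard small bent function $x_1x_2 \oplus x_3x_4$ (a generic example, not the Clifford-algebra construction of the talk):

```python
def walsh_spectrum(f, n):
    """Walsh-Hadamard spectrum W(a) = sum_x (-1)^{f(x) + a.x} over
    GF(2)^n; f is bent iff |W(a)| = 2^(n/2) for every a."""
    N = 1 << n
    return [sum((-1) ** (f(x) ^ bin(a & x).count("1") % 2) for x in range(N))
            for a in range(N)]

# The quadratic form x1*x2 + x3*x4 on GF(2)^4, encoded on integer bits.
f = lambda x: ((x >> 0) & (x >> 1) & 1) ^ ((x >> 2) & (x >> 3) & 1)
spec = walsh_spectrum(f, 4)   # every entry is +4 or -4
```

A linear function, by contrast, has one Walsh coefficient of magnitude $2^n$ and the rest zero — the opposite extreme of the flat spectrum that characterises bent functions.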
I will survey some recent and not-so-recent results surrounding the areas of Diophantine approximation and Mahler's method related to variations of the Chomsky-Schützenberger hierarchy.
Third lecture: metric properties.
More than 120 years after their introduction, Lyapunov's so-called First and Second Methods remain the most widely used tools for stability analysis of nonlinear systems. Loosely speaking, the Second Method states that if one can find an appropriate Lyapunov function then the system has some stability property. A particular strength of this approach is that one need not know solutions of the system in order to make definitive statements about stability properties. The main drawback of the Second Method is the need to find a Lyapunov function, which is frequently a difficult task.
Converse Lyapunov Theorems answer the question: given a particular stability property, can one always (in principle) find an appropriate Lyapunov function? In the first instalment of this two-part talk, we will survey the history of the field and describe several such Converse Lyapunov Theorems for both continuous and discrete-time systems. In the second instalment we will discuss constructive techniques for numerically computing Lyapunov functions.
This week I shall finish my discussion of searching graphs by looking at the recent paper by Clarke and MacGillivray that characterizes graphs that are k-searchable.
Optimization problems involving polynomial functions are of great importance in applied mathematics and engineering, and they are intrinsically hard problems. They arise in important engineering applications such as the sensor network localization problem, and provide a rich and fruitful interaction between algebraic-geometric concepts and modern convex programming (semi-definite programming). In this talk, we will discuss some recent progress in polynomial (semi-algebraic) optimization with a focus on the intrinsic link between the polynomial structure and the hidden convexity structure. The talk will be divided into two parts. In the first part, we will describe the key results in this new area, highlighting the geometric and conceptual aspects as well as recent work on global optimality theory, algorithms and applications. In the second part, we will explain how the semi-algebraic structure helps us to analyze some important and classical algorithms in optimization such as the alternating projection algorithm, the proximal point algorithm and the Douglas-Rachford algorithm (if time permits).
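The alternating projection algorithm mentioned above is simple to state: starting from any point, project alternately onto two closed convex sets; when the sets intersect, the iterates converge to a point of the intersection. A minimal sketch (with an arbitrarily chosen pair of sets, purely for illustration):

```python
import math

def alternating_projections(proj_a, proj_b, x0, n_iter=100):
    """von Neumann alternating projections: iterate x -> P_A(P_B(x)).
    For closed convex sets with nonempty intersection the iterates
    converge to a point of A ∩ B."""
    x = x0
    for _ in range(n_iter):
        x = proj_a(proj_b(x))
    return x

# A: the closed unit disc;  B: the vertical line x = 0.5 (they intersect).
def proj_disc(p):
    n = math.hypot(*p)
    return p if n <= 1 else (p[0] / n, p[1] / n)

def proj_line(p):
    return (0.5, p[1])

sol = alternating_projections(proj_disc, proj_line, (3.0, 2.0))
```

Starting well outside both sets, the iterates home in on the boundary point $(0.5, \sqrt{3}/2)$ of the intersection; for semi-algebraic sets, convergence-rate analyses of exactly this scheme are part of the story the talk describes.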
One of the key components of the earth's climate system is the formation and melting of sea ice, a process we currently struggle to model correctly. One possible explanation for this shortcoming is that ocean waves play a key role and that their effect needs to be included in climate models. I will describe a series of recent experiments which seem to validate this hypothesis and discuss attempts by myself and others to model wave-ice interaction.
We introduce a subfamily of enlargements of a maximally monotone operator $T$. Our definition is inspired by a 1988 publication of Fitzpatrick. These enlargements are elements of the family of enlargements $\mathbb{E}(T)$ introduced by Svaiter in 2000. These new enlargements share with the $\epsilon$-subdifferential a special additivity property, and hence they can be seen as structurally closer to the $\epsilon$-subdifferential. For the case $T=\nabla f$, we prove that some members of the subfamily are smaller than the $\epsilon$-subdifferential enlargement. In this case, we construct a specific enlargement which coincides with the $\epsilon$-subdifferential.
Joint work with Juan Enrique Martínez Legaz, Mahboubeh Rezaei, and Michel Théra.
We discuss the genesis of symbolic computation, its deployment into computer algebra systems, and the applications of these systems in the modern era.
We will pay special attention to polynomial system solvers and highlight the problems that arise when considering non-linear problems. For instance, forgetting about actually solving, how does one even represent infinite solution sets?
The completion with respect to the degree valuation of the field of rational functions over a finite field is often a fruitful analogue to consider when one would like to test ideas, methods and conjectures in Diophantine approximation for the real numbers. In many respects, this setting behaves very similarly to the real numbers; in particular, the metric theory of Diophantine approximation is well-developed here, and in some respects more is known to be true in this setting than for the real numbers. However, natural analogues of other classical theorems in Diophantine approximation fail spectacularly in positive characteristic. In this talk, I will introduce the topic and give old and new results underpinning the similarities and differences of the theories of Diophantine approximation in positive characteristic and in characteristic zero.
Self-avoiding walks are a widely studied model of polymers, which are defined as walks on a lattice where each successive step visits a neighbouring site, provided the site has not already been visited. Despite the apparent simplicity of the model, it has been of much interest to statistical mechanicians and probabilists for over 60 years, and many important questions about it remain open.
One of the most powerful methods to study self-avoiding walks is Monte Carlo simulation. I'll give an overview of the historical developments in this field, and will explain what ingredients are needed for a good Monte Carlo algorithm. I'll then describe how recent progress has allowed for the efficient simulation of truly long walks with many millions of steps. Finally, I'll discuss whether lessons we've learned from simulating self-avoiding walks may be applicable to a wide range of Markov chain Monte Carlo simulations.
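As a concrete illustration, the sketch below uses simple sampling, the crudest Monte Carlo method (and hopeless for long walks, which is precisely why better algorithms are needed), to estimate the number c_n of n-step self-avoiding walks on the square lattice. All function names and parameters are my own, not from the talk.

```python
# Simple-sampling Monte Carlo estimate of c_n, the number of n-step
# self-avoiding walks on Z^2: generate uniformly random walks and count
# the fraction that never revisit a site; then c_n = 4**n * P(self-avoiding).
import random

def random_walk(n):
    """A uniformly random n-step walk on Z^2 (not necessarily self-avoiding)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = 0, 0
    path = [(0, 0)]
    for _ in range(n):
        dx, dy = random.choice(steps)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def estimate_saw_count(n, trials=200_000, seed=1):
    """Estimate c_n = 4**n * P(a random n-step walk is self-avoiding)."""
    random.seed(seed)
    hits = sum(1 for _ in range(trials)
               if len(set(random_walk(n))) == n + 1)
    return 4 ** n * hits / trials

print(estimate_saw_count(5))  # the exact value is c_5 = 284
```

Because the self-avoiding fraction decays exponentially in n, this naive estimator breaks down quickly, motivating the sophisticated algorithms the talk surveys.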
We first introduce the notion of pattern sequences, defined by the number of (possibly overlapping) occurrences of a given word in the $\langle q,r\rangle$-numeration system. After surveying several properties of pattern sequences, we will give necessary and sufficient criteria for the algebraic independence of their generating functions. As applications, we deduce linear relations between pattern sequences.
The proofs of the theorem and the corollaries are based on Mahler's method.
The mixed Littlewood conjecture, proposed by de Mathan and Teulié in 2004, states that for every real number $x$ one has $\liminf_{q\to\infty} q \cdot |q|_D \cdot \|qx\| = 0$, where $|q|_D$ is a so-called pseudo-norm which generalises the standard $p$-adic norm. In the talk we'll consider the set Mad of potential counterexamples to this conjecture. Thanks to the results of Einsiedler and Kleinbock we already know that the Hausdorff dimension of Mad is zero, so this set is very tiny. During the talk we'll see that the continued fraction expansion of every element of Mad must satisfy some quite restrictive conditions. Among them, we'll see that for these expansions, considered as infinite words, the complexity function can neither grow too fast nor too slow.
Tensor trains are a new class of representations which are thought to have some potential for dealing with high-dimensional problems. While connected with algebraic geometry, the main concepts used are rank-$k$ matrix factorisations. In this talk I will review some basic properties of tensor trains. In particular I will consider algorithms for the solution of linear systems $Ax=0$. This talk is related to research in progress with Jochen Garcke (Uni Bonn and Fraunhofer Institute) on the solution of the chemical master equation. The talk assumes a basic background in matrix algebra; no background in algebraic geometry is required.
Supervisor: Thomas Kalinowski
Supervisor: Thomas Kalinowski
Supervisor: Brailey Sims
Supervisor: Brian Alspach
Multi-objective optimisation is one of the earliest fields of study in operations research. In fact, Francis Edgeworth (1845--1926) and Vilfredo Pareto (1848--1923) laid the foundations of this field of study over one hundred years ago. Many real-world problems involve multiple objectives. Due to conflict between objectives, finding a feasible solution that simultaneously optimises all objectives is usually impossible. Consequently, in practice, decision makers want to understand the trade-off between objectives before choosing a suitable solution. Thus, generating many or all efficient solutions, i.e., solutions in which it is impossible to improve the value of one objective without a deterioration in the value of at least one other objective, is the primary goal in multi-objective optimisation. In this talk, I will focus on Multi-objective Integer Programs (MOIPs) and explain briefly some new efficient algorithms that I have developed since starting my PhD to solve MOIPs. I also explain some links between the ideas of multi-objective integer programming and other fields of study such as game theory.
The Mathematical Sciences Institute will host a three day workshop on more effective use of visualization in mathematics, physics, and statistics, from the perspectives of education, research and outreach. This is the second EViMS meeting, following the highly successful one held in Newcastle in November 2012. Our aim for the workshop is to help mathematical scientists understand the opportunities, risks and benefits of visualization, in research and education, in a world where visual content and new methods are becoming ubiquitous.
Visit the conference website for more information.
(Groups & Dynamics Special Session)
(Maths Education Special Session)
(Operator Algebra/ Functional Analysis Special Session)
(Computational Mathematics Special Session)
In this seminar I will talk about decomposing sequences into maximal palindromic factors and its applications to hairpin analysis of pathogens such as HIV or TB.
We apply the piecewise constant, discontinuous Galerkin method to discretize a fractional diffusion equation with respect to time. Using Laplace transform techniques, we show that the method is first order accurate at the $n$th time level~$t_n$, but the error bound includes a factor~$t_n^{-1}$ if we assume no smoothness of the initial data. We also show that for smoother initial data the growth in the error bound for decreasing time is milder, and in some cases absent altogether. Our error bounds generalize known results for the classical heat equation and are illustrated using a model 1D problem.
The AMSI Summer School is an exciting opportunity for mathematical sciences students from around Australia to come together over the summer break to develop their skills and networks. Details are available from the 2015 AMSI Summer School website.
Also see the CARMA events page for details of some Summer School seminars, open to all!
Long before current graphic, visualisation and geometric tools were available, John E. Littlewood (1885-1977) wrote in his delightful Miscellany:
A heavy warning used to be given [by lecturers] that pictures are not rigorous; this has never had its bluff called and has permanently frightened its victims into playing for safety. Some pictures, of course, are not rigorous, but I should say most are (and I use them whenever possible myself). [[L], p. 53]
Over the past decade, the role of visual computing in my own research has expanded dramatically. In part this was made possible by the increasing speed and storage capabilities and the growing ease of programming of modern multi-core computing environments [BSC]. But, at least as much, it has been driven by my group's paying more active attention to the possibilities for graphing, animating or simulating most mathematical research activities.
I shall describe diverse work from my group in transcendental number theory (normality of real numbers [AB3]), in dynamic geometry (iterative reflection methods [AB]), probability (behaviour of short random walks [BS, BSWZ]), and matrix completion problems (especially as applied to protein conformation [ABT]). While all of this involved significant numerical-symbolic computation, I shall focus on the visual and experimental components.
AB F. Aragon and J.M. Borwein, "Global convergence of a non-convex Douglas-Rachford iteration." J. Global Optimization 57(3) (2013), 753-769. DOI 10.1007/s10898-012-9958-4.
AB3 F. Aragon, D.H. Bailey, J.M. Borwein and P.B. Borwein, "Walking on real numbers." Mathematical Intelligencer 35(1) (2013), 42-60. See also http://walks.carma.newcastle.edu.au/.
ABT F. Aragon, J.M. Borwein and M. Tam, "Douglas-Rachford feasibility methods for matrix completion problems." ANZIAM Journal. Galleys June 2014. See also http://carma.newcastle.edu.au/DRmethods/.
BSC J.M. Borwein, M. Skerritt and C. Maitland, "Computation of a lower bound to Giuga's primality conjecture." Integers 13 (2013). Online Sept 2013 at #A67, http://www.westga.edu/~integers/cgi-bin/get.cgi.
BS J.M. Borwein and A. Straub, "Mahler measures, short walks and logsine integrals." Theoretical Computer Science, special issue on Symbolic and Numeric Computation, 479(1) (2013), 4-21. DOI: http://link.springer.com/article/10.1016/j.tcs.2012.10.025.
BSWZ J.M. Borwein, A. Straub, J. Wan and W. Zudilin (with an appendix by Don Zagier), "Densities of short uniform random walks." Can. J. Math. 64(5) (2012), 961-990. http://dx.doi.org/10.4153/CJM-2011-079-2.
L J.E. Littlewood, A Mathematician's Miscellany, London: Methuen (1953); J.E. Littlewood and Bela Bollobas (ed.), Littlewood's Miscellany, Cambridge University Press, 1986.
This talk will highlight links between topics studied in undergraduate mathematics on one hand and frontiers of current research in analysis and symmetry on the other. The approach will be semi-historical and will aim to give an impression of what the research is about.
Fundamental ideas in calculus, such as continuity, differentiation and integration, are first encountered in the setting of functions on the real line. In addition to topological properties of the line, the algebraic properties of being a group and a field, that the set of real numbers possesses, are also important. These properties express symmetries of the set of real numbers, and it turns out that this combination of calculus, algebra and symmetry extends to the setting of functions on locally compact groups, of which the group of rotations of a sphere and the group of automorphisms of a locally finite graph are examples. Not only do these groups frequently occur in applications, but theorems established prior to 1955 show that they are exactly the groups that support integration and differentiation.
Integration and continuity of functions on the circle and the group of rotations of the circle are the basic ingredients for Fourier analysis, which deals with convolution function algebras supported on the circle. Since these basic ingredients extend to locally compact groups, so do the methods of Fourier analysis, and the study of convolution algebras on these groups is known as harmonic analysis. Indeed, there is such a close connection between harmonic analysis and locally compact groups that any locally compact group may be recovered from the convolution algebras that it carries. This fact has recently been exploited with the creation of a theory of `locally compact quantum groups' that axiomatises properties of the algebras appearing in harmonic analysis and does away with the underlying group.
Locally compact groups have a rich structure theory in which significant advances are also currently being made. This theory divides into two cases: when the group is a connected topological space and when it is totally disconnected. The connected case has been well understood since the solution of Hilbert's Fifth Problem in the 1950s, which showed that such groups are essentially Lie groups. (Lie groups form the symmetries of smooth structures occurring in physics and underpinned, for example, the prediction of the existence of the Higgs boson.) For a long time it was thought that little could be said about totally disconnected groups in general, although important classes of such groups arising in number theory and as automorphism groups of graphs could be understood using techniques special to those classes. However, a complete general theory of these groups is now beginning to take shape following several breakthroughs in recent years. There is the exciting prospect that an understanding of totally disconnected groups matching that of the connected groups will be achieved in the next decade.
In this talk I will discuss a class of systems evolving over two independent variables, which we refer to as "2D". For the class considered, extensions of ODE Lyapunov stability analysis can be made to ensure different forms of stability of the system. In particular, we can describe sufficient conditions for stability in terms of the divergence of a vector Lyapunov function.
People who study geometry like to ask the question: "What is the shape of that?" In this case, the word "that" can refer to a variety of things, from triangles and circles to knots and surfaces to the universe we inhabit and beyond. In this talk, we will examine some of my favourite gems from the world of geometry and see the interplay between geometry, algebra, and theoretical physics. And the only prerequisite you will need is your imagination!
Norman Do is, first and foremost, a self-confessed maths geek! As a high school student, he represented Australia at the International Mathematical Olympiad. He completed a PhD at The University of Melbourne, before working at McGill University in Canada. He is currently a Lecturer and a DECRA Research Fellow in the School of Mathematical Sciences at Monash University.
His research lies at the interface of geometry and mathematical physics, although he is excited by almost any flavour of mathematics. Norman is heavily involved in enrichment for school students, regularly lecturing at the National Mathematics Summer School and currently chairing the Australian Mathematical Olympiad Senior Problems Committee.
This event is run in conjunction with the University of Newcastle's 50th year anniversary celebrations. Space! For Star Trek fans it's the final frontier - with all of the vastly, hugely, mind-bogglingly big room it contains, it allows scientists and researchers of all persuasions to go where no one has gone before and explore worlds not yet explored. Like Star Trek fans, many mathematicians and statisticians are also interested in exploring the dynamics of space. From a statistician's point of view, our often data-driven perspective means we are concerned with exploring data that exists in multi-dimensional space and trying to visualise it using as few dimensions as possible.
This presentation will outline the links between the analysis of categorical data, multi-dimensional space, and the reduction of this space. The technique we explore is correspondence analysis and we shall see how eigen- and singular value decomposition fit into this data visualisation technique. We shall therefore look at some of the fundamental aspects of correspondence analysis and the various ways in which categorical data can be visualised.
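As a sketch of the decomposition step mentioned above, the following pure-Python fragment builds the matrix of standardized residuals for a small hypothetical contingency table and recovers the total inertia (the chi-squared statistic divided by the grand total); a singular value decomposition of this residual matrix would then yield the low-dimensional coordinates used for visualisation. The table and all names are my own illustration, not from the talk.

```python
# First step of correspondence analysis on a toy contingency table:
# correspondence matrix, masses, standardized residuals, total inertia.

table = [[20, 10, 5],
         [10, 15, 10],
         [5, 10, 15]]

n = sum(sum(row) for row in table)                       # grand total
P = [[x / n for x in row] for row in table]              # correspondence matrix
r = [sum(row) for row in P]                              # row masses
c = [sum(P[i][j] for i in range(len(P))) for j in range(len(P[0]))]  # column masses

# standardized residuals S_ij = (P_ij - r_i c_j) / sqrt(r_i c_j)
S = [[(P[i][j] - r[i] * c[j]) / (r[i] * c[j]) ** 0.5
      for j in range(len(c))] for i in range(len(r))]

# total inertia = sum of squared residuals = chi^2 / n
inertia = sum(S[i][j] ** 2 for i in range(len(r)) for j in range(len(c)))
print(round(inertia, 4))
```

The SVD of S (here one would reach for numpy) decomposes this inertia into principal axes, which is exactly the eigen/singular value step described above.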
I will summarize the main ingredients and results on classical conjugate duality for optimization problems, as given by Rockafellar in 1973.
I shall review convergence results for non-convex Douglas-Rachford iterations.
A look into an extension of the proof of a class of normal numbers by Davenport and Erdos, as well as a leap into the world of experimental mathematics relating to the property of strong normality, in particular the strong normality of some very famous numbers.
Inspired by the Hadamard Maximal Determinant Problem, we investigate the possible Gram matrices from rectangular {+1, -1} matrices. We can fully classify and count the Gram matrices from rectangular {+1, -1} matrices with just two rows and have conjectured a counting formula for the Gram matrices when there are more than two rows in the original matrix.
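The two-row case can be checked by brute force. The following sketch (my own, assuming nothing beyond the definition of a Gram matrix) enumerates all 2 x n {+1, -1} matrices and collects the distinct Gram matrices M M^T; since the off-diagonal entry is the dot product of the two rows, which ranges over n+1 values of fixed parity, exactly n+1 distinct Gram matrices arise.

```python
# Brute-force enumeration of the distinct Gram matrices M M^T arising
# from 2 x n {+1,-1} matrices M.
from itertools import product

def gram_matrices(n):
    """Distinct Gram matrices M M^T over all 2 x n {+1,-1} matrices M."""
    grams = set()
    for r1 in product([1, -1], repeat=n):
        for r2 in product([1, -1], repeat=n):
            dot = sum(a * b for a, b in zip(r1, r2))
            grams.add((n, dot, dot, n))   # the 2x2 Gram matrix, flattened
    return grams

for n in range(1, 7):
    print(n, len(gram_matrices(n)))  # n + 1 distinct Gram matrices
```

The same enumeration idea extends to more rows, though the count grows quickly, which is where the conjectured counting formula mentioned above comes in.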
We build upon the ideas of short random walks in 2 dimensions in an attempt to understand the behaviours of these objects in higher dimensions. We explore the density and moment functions to find combinatorial and analytical results that generalise nicely.
A history of Pi in the American Mathematical Monthly and the variety of approaches to understanding this stubborn constant. I will focus on the common threads of discussion over the last century, especially the changing methods for computing pi to high precision, to illustrate how we have progressed to our current state.
In this talk I will be exploring certain aspects of permutations of length n that avoid the pattern 1324. It is an interesting pattern in that it is simple yet defies simple analysis. It can be shown that there is a growth rate, yet it cannot be shown what that growth rate is; nor has an explicit formula been found to give the number of permutations of length n which avoid the pattern (whereas this has been found for every other non-Wilf-equivalent length-4 pattern). Specifically, this talk will look at how an encoding technique (developed by Bona) of the 1324-avoiding permutations was cleverly used to obtain an upper bound for the growth rate of this class.
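To make the pattern-avoidance definition concrete, here is a brute-force count of 1324-avoiders for small n (an illustration of the definition only; the talk concerns asymptotic growth rates, which brute force cannot reach). A permutation contains 1324 if it has positions i<j<k<l whose values appear in the relative order 1, 3, 2, 4.

```python
# Count permutations of length n avoiding the pattern 1324 by brute force.
from itertools import permutations, combinations

def contains_1324(p):
    """True if p has positions i<j<k<l with p[i] < p[k] < p[j] < p[l]."""
    for i, j, k, l in combinations(range(len(p)), 4):
        if p[i] < p[k] < p[j] < p[l]:
            return True
    return False

counts = [sum(1 for p in permutations(range(n)) if not contains_1324(p))
          for n in range(1, 8)]
print(counts)  # [1, 2, 6, 23, 103, 513, 2762]
```

Note the factorial blow-up: already at n around 15 this approach is hopeless, which is why encoding techniques such as Bona's are needed to bound the growth rate.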
The fairness of voting systems has been a topic of interest to mathematicians since 1770 when Marquis de Condorcet proposed the Condorcet criterion, and particularly so after 1951 when Kenneth Arrow proposed the Arrow impossibility theorem, which proved that no rank-order voting system can satisfy all properties one would desire.
The system I have been studying is known as runoff voting. It is a method of voting used around the world, often for presidential elections such as in France. Each voter selects their favourite candidate, and if any candidate receives above 50% of the vote, then they are elected. If no one reaches this, then another election will be held, but this time with only the top 2 candidates from the previous election. Whoever receives more votes in this second round is elected. The runoff voting system satisfies a number of desired properties, though the running of the second round can have significant drawbacks: it can be very costly, it can result in periods of time without government, and it has been known to cause unrest in some politically unstable countries.
In my research I have introduced the parameter alpha, which varies the original threshold of 50% for a candidate winning the election in the first round. I am using both analytical methods and simulation to observe how the properties change with alpha.
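A minimal simulation of the parameterised system might look like the sketch below. The ballot model (uniformly random rankings, a toy "impartial culture" electorate) and all names are hypothetical illustrations of the setup, not the actual study.

```python
# Runoff voting with a variable first-round threshold alpha
# (alpha = 0.5 recovers the classical two-round system).
import random

def runoff_winner(ballots, alpha=0.5):
    """Winner under runoff voting; ballots are ranked lists of candidates."""
    first = [b[0] for b in ballots]
    counts = {c: first.count(c) for c in set(first)}
    leader = max(counts, key=counts.get)
    if counts[leader] > alpha * len(ballots):
        return leader                      # elected outright in round one
    # otherwise hold a runoff between the top two first-round candidates
    a, b = sorted(counts, key=counts.get, reverse=True)[:2]
    a_votes = sum(1 for bal in ballots if bal.index(a) < bal.index(b))
    return a if 2 * a_votes > len(ballots) else b

# toy electorate: uniformly random rankings of three candidates
random.seed(0)
candidates = [0, 1, 2]
ballots = [random.sample(candidates, 3) for _ in range(1001)]
print(runoff_winner(ballots, alpha=0.5))
```

Sweeping alpha over [0, 1] and re-running such simulations is one way to observe how properties such as the frequency of second rounds change with the threshold.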
As an extension of Copeland and Erdos' original paper of the same title, we present a clearer and more complete version of the proof that the number of integers up to $N$ ($N$ sufficiently large) which are not $(\epsilon,k)$-normal is less than $N^{\delta}$ where $\delta<1$. We also conjecture that the numbers formed from the concatenation of the increasing sequence $a_{1},a_{2},a_{3},\dots$ (provided the sequence is dense enough) are not strongly normal.
We consider the problem of scattering of waves by a string with attached masses, focussing on the problem in the time-domain. We propose this as a simple model for more complicated wave scattering problems which arise in the study of elastic metamaterials. We present the governing system of equations and show how we have solved them. Some numerical simulations are also presented.
The pooling problem is a nonlinear program (NLP) with applications in the refining and petrochemical industries, but also in mining. While it has been shown that the pooling problem is strongly NP-hard, it is one of the most promising NLPs to be solved to global optimality. In this talk I will discuss strengths and weaknesses of problem formulations and solution techniques. In particular, I will discuss convex and linear relaxations of the pooling problem, and show how they are related to graph theory, polyhedral theory and combinatorial optimization.
The Fourier Transform is a central and powerful tool in signal processing as well as being essential to Complex Analysis. However, it is limited to acting on complex-valued functions and thus cannot be applied directly to colour images (whose values have three real components). In this talk, I discuss the limitations of current methods and then discuss several ways of extending the Fourier Transform to larger algebras (specifically the Quaternions and Clifford algebras). This informs a research plan involving the study and computer implementation of a particular Clifford Fourier Transform.
In this talk, accessible to a general audience and particularly to students, we will review the most important contributions of Leonhard Euler to mathematics. We will give a brief biography of Leonhard Euler and a broad survey of his great achievements.
Random walks have been used to model stochastic processes in many scientific fields. I will introduce invariant random walks on groups, where the transition probabilities are given by a probability measure. The Poisson boundary will also be discussed. It is a space associated with every group random walk that encapsulates the behaviour of the walks at infinity and gives a description of certain harmonic functions on the group in terms of the essentially bounded functions on the boundary. I will conclude with a discussion of project aims, namely to compute the boundary for certain random walks in new cases and to investigate the order structure of certain ideals in $L^1(G)$ defined for each invariant random walk.
Supervisors: Prof. George Willis, Dr Jeff Hogan.
The power domination problem is a variant of the famous domination problem, with applications in the monitoring of electric power networks. In this talk, we give a literature review of the work done so far and point to possible open areas of research. We also introduce two interesting variants of power domination: the resolving power domination problem and the propagation problem. We present preliminary work and a research plan for the future.
Supervisors: Prof. Mirka Miller, Dr Joe Ryan, Prof. Paul D Manuel.
I will lecture on 32 proofs of a theorem of Euler posed by mistake by Goldbach regarding Zeta(3). See http://www.carma.newcastle.edu.au/jon/goldbach-talk10.pdf.
A look into infinity, a few famous problems, and a little bit of normality.
It is well known that there is a one-to-one correspondence between signed plane graphs and link diagrams via the medial construction. The relationship was used in knot tabulation in the early days of knot theory. Indeed, it provides a method of studying links using graphs. Let $G$ be a plane graph and let $D(G)$ be the alternating link diagram corresponding to the (positive) $G$ obtained from the medial construction. A state $S$ of $D$ is a labeling of each crossing of $D$ by either $A$ or $B$. Making the corresponding split for each crossing gives a number of disjoint embedded closed circles, called state circles. We call a state which possesses the maximum number of state circles a maximum state. The maximum state is closely related to the genus of the corresponding link, and thus has been studied. In this talk, we will discuss some of the recent progress we have made on this topic.
When attacking various difficult problems in the field of Diophantine approximation the application of certain topological games has proven extremely fruitful in recent times due to the amenable properties of the associated 'winning' sets. Other problems in Diophantine approximation have recently been solved via the method of constructing certain tree-like structures inside the Diophantine set of interest. In this talk I will discuss how one broad method of tree-like construction, namely the class of 'generalised Cantor sets', can be formalized for use in a wide variety of problems. By introducing a further class of so-called 'Cantor-winning' sets we may then provide a criterion for arbitrary sets in a metric space to satisfy the desirable properties usually attributed to winning sets, and so in some sense unify the two above approaches. Applications of this new framework include new answers to questions relating to the mixed Littlewood conjecture and the $\times2, \times3$ problem. The talk will be aimed at a broad audience.
This is joint work with our former honours student Alex Muir. We look at the variety of lengths of cycles in Cayley graphs on generalized dihedral groups.
Consider a function from the circle to itself such that the derivative is greater than one at every point. Examples are maps of the form f(x) = mx (mod 1) for integers m > 1. In some sense, these are the only possible examples. This fact and the corresponding question for maps on higher dimensional manifolds was a major motivation for Gromov to develop pioneering results in the field of geometric group theory.
In this talk, I'll give an overview of this and other results relating dynamical systems to the geometry of the manifolds on which they act and (time permitting) talk about my own work in the area.
In celebration of both a special "big" pi Day (3/14/15) and the 2015 centennial of the Mathematical Association of America, we review the illustrious history of the constant $\pi$ in the pages of the American Mathematical Monthly.
This talk showcases some large numbers and where they came from.
A mixed formulation for a Tresca frictional contact problem in linear elasticity is considered in the context of boundary integral equations, and is later extended to Coulomb friction. The discrete Lagrange multiplier, an approximation of the surface traction on the contact boundary, is a linear combination of biorthogonal basis functions. The biorthogonality allows us to rewrite the variational inequality constraints as a simple set of complementarity problems, thus enabling an efficient application of a semi-smooth Newton solver for the discrete mixed problems. Typically, the solution of frictional contact problems is of reduced regularity at the interfaces between contact and non-contact and between stick and slip. To identify the a priori unknown locations of these interfaces, a posteriori error estimators of residual and hierarchical type are introduced. For a stabilized version of our mixed formulation (with the Poincaré-Steklov operator) we also present a priori estimates for the solution. Numerical results show the applicability of the error estimators and the superiority of hp-adaptivity compared to low-order uniform and adaptive approaches.
Ernst Stephan is a visitor of Bishnu Lamichhane.
This week I shall conclude my discussion of pancyclicity and Cayley graphs on generalized dihedral groups.
This presentation will explore the specific characteristics of teaching mathematics in engineering studies that transcend the division between technical, scientific and design disciplines, and how students of such studies differ from traditional engineering students. Data comes from a study at the Media Technology Department of Aalborg University in Copenhagen, Denmark. Media Technology is an education that combines technology and creativity and looks at the technology behind areas such as advanced computer graphics, games, electronic music, animations, interactive art and entertainment, to name a few. During the span of the education students are given a strong technical foundation, both in theory and in practice.
The presentation emerges from research by my PhD student Evangelia Triantafyllou and myself. The study which will be presented here used performance tests, attitude questionnaires, interviews with students and observations of mathematics-related courses. The study focused on investigating student performance and retention in mathematics, attitudes towards mathematics, and preferences of teaching and learning methods, including a flipped classroom approach using videos produced by course teachers. The outcome of this study can be used to create a profile of a typical student and to tailor approaches for teaching mathematics to this discipline. Moreover, it can be used as a reference point for investigating ways to improve mathematics education in other creative engineering studies.
About the Speaker: Olga Timcenko joined the Medialogy department of Aalborg University in Copenhagen in fall 2006, as an Associate Professor. Before joining the University, she was a Senior Technology Consultant in LEGO Business Development, LEGO Systems A/S, where she worked for different departments of LEGO on research and development of multimedia materials for children, including LEGO Digital Designer and LEGO Mindstorms NXT. She was active in the FIRST LEGO League project (a world-wide robotics competition among school children) and the Computer Clubhouse project. During 2003-2006, she was LEGO's team leader in the EU-financed Network of Excellence in technology-enhanced learning called Kaleidoscope, and actively participated in several Kaleidoscope JEIRPs and SIGs. She has a Ph.D. in Robotics from Syddansk University in Odense, Denmark, and is author or co-author of 40+ conference and journal papers in the field of robotics, children and technology, and 4 international patents in the field of virtual 3D worlds and 3D user interfaces for children. Her last project for LEGO was the redesign of the Mindstorms iconic programming language for children (the product was launched world-wide in August 2006).
I will discuss a combinatorial problem coming from database design. The problem can be interpreted as maximizing the number of edges in a certain hypergraph subject to a recoverability condition. It was solved recently by the high school student Max Aehle, who came up with a nice argument using the polynomial method.
Dengue is caused by four different serotypes; individuals infected by one of the serotypes obtain lifelong immunity to that serotype but not to the other serotypes. Individuals with secondary infections may contract the more dangerous form of dengue, called dengue hemorrhagic fever (DHF), because of a higher viral load. Because of the unsustainability of traditional measures, the use of the bacterium Wolbachia has been proposed as an alternative strategy against dengue fever. However, little research has been conducted to study the effectiveness of this intervention in the field. Understanding its effectiveness is important before it is widely implemented in the real world. In this talk, I will explain the effectiveness of this intervention and present mathematical models that I have developed to study it, and how these models differ from existing ones. I will also present the effects of the presence of multiple strains of dengue on dengue transmission dynamics.
Supervisors: David Allingham, Roslyn Hickson (IBM), Kathryn Glass (ANU), Irene Hudson
We will talk about the validity of the mean ergodic theorem along left Følner sequences in a countable amenable group G. Although the weak ergodic theorem always holds along any left Følner sequence in G, we will provide examples where the mean ergodic theorem fails in quite dramatic ways. On the other hand, if G does not admit any ICC quotients, e.g. if G is virtually nilpotent, then we will prove that the mean ergodic theorem does indeed hold along any left Følner sequence.
Based on joint work with M. Bjorklund (Chalmers).
We introduce a subfamily of additive enlargements of a maximally monotone operator $T$. Our definition is inspired by the seminal
work of Fitzpatrick presented in 1988. These enlargements are a subfamily of the family of enlargements introduced by Svaiter in 2000. For the case $T = \partial f$, we prove that some members of the subfamily are smaller than the $\varepsilon$-subdifferential enlargement. For this choice of $T$, we can construct a specific enlargement which coincides with the $\varepsilon$-subdifferential. Since these enlargements are all
additive, they can be seen as structurally closer to the $\varepsilon$-subdifferential enlargement.
Joint work with Juan Enrique Martínez-Legaz (Universitat Autonoma de Barcelona), Mahboubeh Rezaei (University of Isfahan, Iran), and Michel Théra (University of Limoges).
I will explain what an equation in a free group is, why they are interesting, and how to solve them. The talk will be accessible to anyone interested in maths or computer science or logic.
I have recently [2] shown that each group $Z_2^{2m}$ gives rise to a pair of bent functions with disjoint support, whose Cayley graphs are a disjoint pair of strongly regular graphs $\Delta_m[-1]$, $\Delta_m[1]$ on $4^m$ vertices. The two strongly regular graphs are twins in the sense that they have the same parameters $(\nu, k, \lambda, \mu)$. For $m < 4$, the two strongly regular graphs are isomorphic. For $m \geq 4$, they are not isomorphic, because the size of the largest clique differs. In particular, the largest clique size of $\Delta_m[-1]$ is $\rho(2^m)$ and the largest clique in $\Delta_m[1]$ has size at least $2^m$, where $\rho(n)$ is the Hurwitz-Radon function. This non-isomorphism result disproves a number of conjectures that I made in a paper on constructions of Hadamard matrices [1].
[1] Paul Leopardi, "Constructions for Hadamard matrices, Clifford algebras, and their relation to amicability - anti-amicability graphs", Australasian Journal of Combinatorics, Volume 58(2) (2014), pp. 214–248.
[2] Paul Leopardi, "Twin bent functions and Clifford algebras", accepted 13 January 2015 by the Springer Proceedings in Mathematics and Statistics (PROMS): Algebraic Design Theory and Hadamard Matrices (ADTHM 2014).
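As a small illustration of the objects at play (not taken from the papers above), bentness of a Boolean function on $Z_2^{2m}$ is equivalent to its Walsh spectrum being flat, with every coefficient equal to $\pm 2^m$. The sketch below checks this for the standard inner-product function on $Z_2^4$ ($m = 2$), a textbook bent function chosen purely for demonstration.

```python
from itertools import product

def walsh(f, n):
    """Walsh spectrum of a Boolean function f on Z_2^n."""
    pts = list(product((0, 1), repeat=n))
    return [sum((-1) ** (f(x) ^ (sum(ai * xi for ai, xi in zip(a, x)) % 2))
                for x in pts)
            for a in pts]

# The inner-product function x1*x2 + x3*x4 is a standard bent function
# on Z_2^4; bentness means every Walsh coefficient has absolute value 2^m.
f = lambda x: x[0] * x[1] ^ x[2] * x[3]
spec = walsh(f, 4)
assert all(abs(w) == 4 for w in spec)  # |W_f(a)| = 2^m with m = 2
```

The Cayley graph of such a function (vertices $Z_2^{2m}$, edges where $f$ of the difference is 1) is the strongly regular graph referred to in the abstract.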
Supervisor: Murray Elder
Supervisor: Mike Meylan
Supervisor: Wadim Zudilin
In this talk I will present the main results of my PhD thesis (by the same name), which focuses on the application of matrix determinants as a means of producing number-theoretic results.
Motivated by an investigation of properties of the Riemann zeta function, we examine the growth rate of certain determinants of zeta values. We begin with a generalisation of determinants based on the Hurwitz zeta function, where we describe the arithmetic properties of its denominator and establish an asymptotic bound. We later employ a determinant identity to bound the growth of positive Hankel determinants. Noting the positivity of determinants of Dirichlet series allows us to prove specific bounds on determinants of zeta values in particular, and of Dirichlet series in general. Our results are shown to be the best that can be obtained from our method of bounding, and we conjecture a slight improvement could be obtained from an adjustment to our specific approach.
Within the course of this investigation we also consider possible geometric properties which are necessary for the positivity of Hankel determinants, and we examine the role of Hankel determinants in irrationality proofs via their connection with Padé approximation.
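As a quick numerical illustration of the positivity phenomenon in the abstract (not a result from the thesis itself), the sequence $\zeta(2), \zeta(3), \zeta(4), \ldots$ is a moment sequence of a positive measure, so its Hankel determinants are positive. The sketch below checks the $3 \times 3$ case using truncated Dirichlet series as stand-ins for the exact zeta values.

```python
import math

def zeta(s, N=100_000):
    # truncated Dirichlet series; adequate here since s >= 2
    return sum(n ** -s for n in range(1, N + 1))

c = [zeta(2 + k) for k in range(5)]                   # zeta(2), ..., zeta(6)
H = [[c[i + j] for j in range(3)] for i in range(3)]  # 3x3 Hankel matrix

def det3(M):
    a, b, cc = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + cc * (d * h - e * g)

assert det3(H) > 0  # Hankel determinants of consecutive zeta values are positive
```

The determinant is small (about 0.003) but strictly positive; bounding how quickly such determinants decay is exactly the growth-rate question studied in the thesis.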
Computers are changing the way we do mathematics, as well as introducing new research agendas. Computational methods in mathematics, including symbolic and numerical computation and simulation, are by now familiar. These lectures will explore the way that "formal methods," based on formal languages and logic, can contribute to mathematics as well.
In the 19th century, George Boole argued that if we take mathematics to be the science of calculation, then symbolic logic should be viewed as a branch of mathematics: just as number theory and analysis provide means to calculate with numbers, logic provides means to calculate with propositions. Computers are, indeed, good at calculating with propositions, and there are at least two ways that this can be mathematically useful: first, in the discovery of new proofs, and, second, in verifying the correctness of existing ones.
The first goal generally falls under the ambit of "automated theorem proving" and the second falls under the ambit of "interactive theorem proving." There is no sharp distinction between these two fields, however, and the line between them is becoming increasingly blurry. In these lectures, I will provide an overview of both fields and the interactions between them, and speculate as to the roles they can play in mainstream mathematics.
I will aim to make the lectures accessible to a broad audience. The first lecture will provide a self-contained overview. The remaining lectures are for the most part independent of one another, and will not rely on the first lecture.
The seminar will provide a brief overview of the potential for category theory (CT) to contribute to quantitative analysis in the Social Sciences. This will be followed by a description of CT as a "Rosetta Stone" linking topology, algebra, computation, and physics together. This carries over to process thinking and circuit analysis. Coecke and Paquette's approach to diagrammatic analysis is examined to emphasize the efficiency of block-shifting techniques over diagram chasing. Baez and Erbele's application of CT to feedback control is the main focus of analysis, and this is followed by a brief excursion into multicategories (cobordisms), before finishing up with some material on coalgebras and transition systems.
Have you ever tried to add up the numbers 1+1/2+1/3+...? If you've never thought about this before, then give it a go (and don't Google the answer!). In this talk we will settle this relatively easy question and consider how things might change if we try to thin out the sum a bit. For instance, what if we only used the prime numbers 1/2+1/3+1/5+...? Or what about the square numbers 1+1/4+1/9+...? There will be some algebra and integration at times, but if you can add fractions (or use a calculator) then you should follow almost everything.
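A quick numerical experiment (spoilers for the talk, so look away if you want to work it out yourself): the full sum diverges like $\log N$, the prime sum diverges much more slowly (like $\log \log N$), and the square sum converges to $\pi^2/6$.

```python
import math

def harmonic(N):
    return sum(1.0 / n for n in range(1, N + 1))

# H_N - ln N approaches Euler's constant 0.5772..., so H_N diverges like ln N
print(harmonic(10**6) - math.log(10**6))

# the squares: partial sums approach pi^2/6 = 1.6449...
squares = sum(1.0 / n**2 for n in range(1, 10**6))
assert abs(squares - math.pi**2 / 6) < 1e-5

# the primes: the sum of 1/p up to N tracks ln ln N (Mertens), diverging very slowly
sieve = [True] * (10**6); sieve[0] = sieve[1] = False
for p in range(2, 1001):
    if sieve[p]:
        sieve[p*p::p] = [False] * len(sieve[p*p::p])
prime_sum = sum(1.0 / p for p, is_p in enumerate(sieve) if is_p)
print(prime_sum, math.log(math.log(10**6)))
```

Even over all primes below a million, the sum of reciprocals is still under 3.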
Starting with a substitution tiling, such as the Penrose tiling, we demonstrate a method for constructing infinitely many new substitution tilings. Each of these new tilings is derived from a graph iterated function system and the tiles typically have fractal boundary. As an application of fractal tilings, we construct an odd spectral triple on a C*-algebra associated with an aperiodic substitution tiling. Even though spectral triples on substitution tilings have been extremely well studied in the last 25 years, our construction produces the first truly noncommutative spectral triple associated with a tiling. My work on fractal substitution tilings is joint with Natalie Frank and Sam Webster, and my work on spectral triples is joint with Michael Mampusti.
In this colloquium-style presentation I will describe these combinatorial objects and how they relate to each other. Time permitting, I will also show how they can be used in other areas of Mathematics. Joint work with Sooran Kang and Samuel Webster.
Arising originally from the analysis of a family of compressed sensing matrices, Ian Wanless and I recently investigated a number of linear algebra problems involving complex Hadamard matrices. I will discuss our main result, which relates rank-one submatrices of Hadamard matrices to the number of non-zero terms in a representation of a fixed vector with respect to two unbiased bases of a finite dimensional vector space. Only a basic knowledge of linear algebra will be assumed.
Advantages of EEG in studying brain signals include excellent temporal localization and, potentially, good spatial localization, given good models for source localization in the brain. Phase synchrony and cross-frequency coupling are two phenomena believed to indicate cooperation of different brain regions in cognition through messaging via different frequency bands. Verifying these hypotheses requires the ability to extract time-frequency localized components from complex multicomponent EEG data. One such method, empirical mode decomposition, has shown increasing promise in engineering applications, and we will review recent progress on this approach. Another potential method uses bases or frames of optimally time-frequency localized signals, the so-called prolate spheroidal wave functions. New properties of these functions developed in joint work with Jeff Hogan will be reviewed and potential applications to EEG will be discussed.
This will be an informal talk from our UoN Engineering colleague Prof Bill McBride who recently visited some "Mid-West" Universities in the USA. Prof McBride will discuss what he saw and learnt, with reference to first year maths teaching for Engineering students.
Managing railways in general, and high-speed rail in particular, is a very complex task involving many interrelated decisions across the strategic, tactical, and operational phases. In this research, two different mixed integer linear programming models are presented, the first of their kind in the literature. In the first model, a single line with two different train types is considered. In the second model, cyclic train timetabling and platform assignment problems are considered and solved to optimality. For this model, methods for obtaining bounds on the first objective function are presented, and some pre-processing techniques to reduce the number of decision variables and constraints are also proposed. The proposed models' objectives are to minimize (1) the cycle length, called the Interval, and (2) the total journey time of all trains dispatched from their origin in each cycle. Here we explicitly consider the minimization of the cycle length using linear constraints and a linear objective function. The proposed models are different from, and faster than, the widely-used Periodic Event Scheduling Problem (PESP).
In recent years, there has been quite a bit of interest in generalized Fourier transforms in Clifford analysis and in particular for the so-called Clifford-Fourier transform.
In the first part of the talk I will provide some motivation for the study of this transform. In the second part we will develop a new technique to find a closed formula for its integral kernel, based on the familiar Laplace transform. As a bonus, this yields a compact and elegant formula for the generating function of all even dimensional kernels.
I'll give an overview of some recent developments in the theory of groups of automorphisms of trees which are discrete in the full automorphism group of the tree and are locally-transitive. I'll also mention some questions which have been provoked by this work.
We generalize the Burger-Mozes universal groups acting on regular trees by prescribing the local action on balls of a given radius, and study the basic properties of this construction. We then apply our results to prove a weak version of the Goldschmidt-Sims conjecture for certain classes of primitive permutation groups.
We study maximal monotone inclusions from the perspective of (convex) gap functions.
We propose a very natural gap function and will demonstrate how this function arises from the Fitzpatrick function — a convex function used effectively to represent maximal monotone operators.
This approach allows us to use the powerful strong Fitzpatrick inequality to analyse solutions of the inclusion.
This is joint work with Joydeep Dutta.
Functions that are piecewise defined are a common sight in mathematics, while convexity is a property especially desired in optimization. Suppose now a piecewise-defined function is convex on each of its defining components – when can we conclude that the entire function is convex? Our main result provides sufficient conditions for a piecewise-defined function f to be convex. We also provide a sufficient condition for checking the convexity of a piecewise linear-quadratic function, a class which plays an important role in computer-aided convex analysis.
Based on joint work with Heinz H. Bauschke (Mathematics, UBC Okanagan) and Hung M. Phan (Mathematics, University of Massachusetts Lowell).
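To make the question in the abstract concrete (a toy example of my own, not one from the paper): a piecewise function whose pieces are each convex need not be convex overall, but it is when the pieces join with matching values and slopes. The randomized midpoint check below confirms convexity for one such linear-quadratic example.

```python
import random

# f is quadratic on (-inf, 1] and linear on (1, inf); each piece is convex
# and they join at x = 1 with matching value (1) and slope (2), so f is convex.
def f(x):
    return x * x if x <= 1 else 2 * x - 1

random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    t = random.random()
    # convexity: f at a convex combination never exceeds the combination of values
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-12
```

Replacing the linear piece by, say, x - 1 (a kink pointing the wrong way) makes the check fail, illustrating why sufficient conditions on the pieces are needed.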
Mathematicians sometimes speak of the beauty of mathematics which to us is reflected in proofs and solutions for the most part. I am going to give a few proofs that I find very nice. This is stuff that post-grad discrete students certainly should know exists.
This completion talk is in two parts. In the first part, I will present a characterisation of the cyclic Douglas-Rachford method's behaviour, generalising a result which was presented in my confirmation seminar. In the second part, I will explore non-convex regularity notions in an application arising in biochemistry.
Amenability is of interest for many reasons, not least of which is its paradoxical decomposition into so many various characterisations, each equal to the whole. Two of these are the characterisation in terms of the cogrowth rate, and the existence of a Følner sequence. In exploring a known method of computing the cogrowth rate using a random walk, and by analyzing which groups seem to be pathological for this algorithm, we discover new connections between these properties.
Partitioning is a fundamental technique in graph theory, and graph partitioning is widely used to solve combinatorial problems. We will discuss the role of edge partitioning techniques in graph embedding. Graph embedding encompasses combinatorial problems such as the bandwidth, wirelength, and forwarding index problems, as well as cheminformatics problems involving the Wiener, Szeged, and PI indices. In this seminar, we study the convex partition and its characterization. In addition, we analyze the relationship between the convex partition and other edge partitions, such as the Szeged and channel edge partitions. The graphs that induce convex partitions are bipartite; we will discuss the difficulties in extending this technique to non-bipartite graphs.
In either the inviscid limit of the Euler equations, or the viscously dominated limit of the Stokes equations, the determination of fluid flows can be reduced to solving singular integral equations on immersed structures and bounding surfaces. Further dimensional reduction is achieved using asymptotics when these structures are sheets or slender fibers. These reductions in dimension, and the convolutional second-kind structure of the integral equations, allow for very efficient and accurate simulations of complex fluid-structure interaction problems using solvers based on the Fast Multipole or related methods. These representations also give a natural setting for developing implicit time-stepping methods for the stiff dynamics of elastic structures moving in fluids. I'll discuss these integral formulations, their numerical treatment, and their application to simulating structures moving in high-speed flows (flapping flags and flyers), and to resolving the complex interactions of many, possibly flexible, bodies moving in microscopic biological flows.
The existence of perfect matchings in regular graphs is a fundamental problem in graph theory, and it closely models many real-world problems such as broadcasting and network management. Recently, we have studied the number of edge-disjoint perfect matchings in regular graphs, and using some well-known results on the existence of perfect matchings and on operations forcing unique perfect matchings in regular graphs, we have been able to make some pleasant progress. In this talk, we will present the new results and briefly discuss the proof.
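For the smallest interesting case (an illustration of the notions, not of the new results): the complete graph $K_4$ is 3-regular, and brute force shows it has exactly three perfect matchings, which are pairwise edge-disjoint — a 1-factorization into degree-many matchings.

```python
from itertools import combinations

V = range(4)
edges = list(combinations(V, 2))  # K4 is 3-regular on 4 vertices

# a perfect matching here is a set of 2 disjoint edges covering all 4 vertices
matchings = [m for m in combinations(edges, 2)
             if len({v for e in m for v in e}) == 4]
assert len(matchings) == 3

# the three matchings are pairwise edge-disjoint: K4 decomposes into
# 3 = (degree) edge-disjoint perfect matchings, a 1-factorization
assert all(set(a).isdisjoint(b) for a, b in combinations(matchings, 2))
```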
Stability analysis plays a central role in nonlinear control and systems theory; stability is, in fact, the fundamental requirement for all practical control systems. In this research, advanced stability analysis techniques are reviewed and developed for discrete-time dynamical systems. In particular, we study the relationships between input-to-state stability related properties and $\ell_2$-type stability properties. These considerations naturally lead to the study of input-output models and, further, to questions of incremental stability and convergent dynamics. Future work will also outline several application scenarios for our theory, including observer analysis and secure communication.
Supervisors: A/Prof. Christopher Kellett and Dr. Björn Rüffer
Noga Alon's Combinatorial Nullstellensatz, published in 1999, is a statement about polynomials in many variables and what happens if one of them vanishes over the set of common zeros of some others. In contrast to Hilbert's Nullstellensatz, it makes strong assumptions about the polynomials in question, and this leads to a tool for producing short and elegant proofs of numerous old and new results in combinatorial number theory and graph theory. I will present the proof of the algebraic result and some of the combinatorial applications in the 1999 paper.
After briefly describing a few more simple applications of Alon's Nullstellensatz, I will present in detail Reiher's amazing proof of the Kemnitz conjecture regarding lattice points in the plane.
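One of the classic applications from Alon's 1999 paper is the Cauchy-Davenport theorem: for a prime $p$ and nonempty $A, B \subseteq \mathbb{Z}_p$, one has $|A+B| \geq \min(p, |A|+|B|-1)$. The brute-force check below is an illustration only — the point of the Nullstellensatz is that it proves this without any enumeration.

```python
from itertools import chain, combinations

p = 7  # any small prime will do for the exhaustive check

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

# exhaustively verify Cauchy-Davenport in Z_7 over all nonempty A, B
for A in subsets(range(p)):
    for B in subsets(range(p)):
        sumset = {(a + b) % p for a in A for b in B}
        assert len(sumset) >= min(p, len(A) + len(B) - 1)
```

Note the bound can fail for composite moduli (e.g. two copies of a proper subgroup), which is why primality enters the polynomial-method proof.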
Supervisors: Mirka Miller, Joe Ryan and Andrea Semanicova-Fenovcikova
We give some background to labeling schemes such as graceful, harmonious, magic, antimagic and irregular total labeling. Then we will describe why the study of graph labeling is important by narrating some of its applications. Next we will briefly describe the methodology, such as Roberts' construction for obtaining completely separating systems (CSS), which will help us to determine antimagic labelings of graphs, and Alon's Combinatorial Nullstellensatz. We will illustrate an example from the many applications of graph labelling. Finally we will introduce reflexive irregular total labelling and explain its importance. To conclude, we present the research plan and timeline for the candidature.
I will complete the proof of the Kemnitz conjecture and make some remarks about related zero-sum problems.
In this talk we will begin with a brief history of the mathematics of aperiodic tilings of Euclidean space, highlighting their relevance to the theory of quasicrystals. Next we will focus on an important collection of point sets, cut and project sets, which come from a dynamical construction and provide us with a mathematical model for quasicrystals. After giving definitions and examples of these sets, we will discuss their relationship with Diophantine approximation, and show how the interplay between these two subjects has recently led to new results in both of them.
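A minimal concrete example of a cut and project set (chosen for illustration; the talk treats the general construction): the Beatty sequence $\lfloor n\varphi \rfloor$ with $\varphi$ the golden ratio arises as a one-dimensional projection of lattice points selected by a window, and its gap sequence is the aperiodic Fibonacci word on two letters.

```python
import math

phi = (1 + math.sqrt(5)) / 2

# points of the simplest 1D cut-and-project set: the Beatty sequence of phi
pts = [math.floor(n * phi) for n in range(1, 2000)]
gaps = [b - a for a, b in zip(pts, pts[1:])]

# exactly two tile lengths occur, in the aperiodic Fibonacci pattern
assert set(gaps) == {1, 2}

# the letter frequencies are irrational: the ratio of 2-gaps to 1-gaps tends to phi,
# which is where Diophantine approximation of phi enters the picture
ones, twos = gaps.count(1), gaps.count(2)
assert abs(twos / ones - phi) < 0.01
```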
Lift-and-Project operators (which map compact convex sets to compact convex sets in a certain contractive way, via higher dimensional convex representations of these sets) provide an automatic way for constructing all facets of the convex hull of 0,1 vectors in a polytope given by linear or polynomial inequalities. They also yield tractable approximations provided that the input polytope is tractable and that we only apply the operators O(1) times. There are many generalizations of the theory of these operators which can be used, in theory, to generate (eventually, in the limit) arbitrarily tight, convex relaxations of essentially arbitrary nonconvex sets. Moreover, Lift-and-Project methods provide universal ways of applying Semidefinite Programming techniques to Combinatorial Optimization problems, and in general, to nonconvex optimization problems.
I will survey some of the developments (some recent, some not so recent) that I have been involved in, especially those utilizing Lift-and-Project methods and Semidefinite Optimization. I will touch upon the connections to Convex Algebraic Geometry and present various open problems.
We propose new path-following predictor-corrector algorithms for solving convex optimization problems in conic form. The main structural properties used in our design and analysis of the algorithms hinge on some key properties of a special class of very smooth, strictly convex barrier functions. Even though our analysis has primal and dual components, our algorithms work with the dual iterates only, in the dual space. Our algorithms converge globally at the same worst-case rate as the current best polynomial-time interior-point methods. In addition, our algorithms have the local superlinear convergence property under some mild assumptions. The algorithms are based on an easily computable gradient proximity measure, which ensures an automatic transformation of the global linear rate of convergence to the locally superlinear one under some mild assumptions. Our step-size procedure for the predictor step is related to the maximum step size (the one that takes us to the boundary).
This talk is based on joint work with Yu. Nesterov.
We survey the literature on orthogonal polynomials in several variables, starting from Hermite's work in the late 19th century to the works of Zernike (1920s) and Ito (1950s). We explore combinatorial and analytic properties of the Ito polynomials and offer a general class in 2 dimensions which has interesting structural properties. Connections with certain PDEs will be mentioned.
Given a finite presentation of a group, proving properties of the group can be difficult. Indeed, many questions about finitely presented groups are unsolvable in general. Algorithms exist for answering some questions while for other questions algorithms exist for verifying the truth of positive answers. An important tool in this regard is the Todd-Coxeter coset enumeration procedure. It is possible to extract formal proofs from the internal working of coset enumerations. We give examples of how this works, and show how the proofs produced can be mechanically verified and how they can be converted to alternative forms. We discuss these automatically produced proofs in terms of their size and the insights they offer. We compare them to hand proofs and to the simplest possible proofs. We point out that this technique has been used to help solve a longstanding conjecture about an infinite class of finitely presented groups.
In scanning ptychography, an unknown specimen is illuminated by a localised illumination function resulting in an exit-wave whose intensity is observed in the far-field. A ptychography dataset is a series of these observations, each of which is obtained by shifting the illumination function to a different position relative to the specimen with neighbouring illumination regions overlapping. Given a ptychographic data set, the blind ptychography problem is to simultaneously reconstruct the specimen, illumination function, and relative phase of the exit-wave. In this talk I will discuss an optimisation framework which reveals current state-of-the-art reconstruction methods in ptychography as (non-convex) alternating minimization-type algorithms. Within this framework, we provide a proof of global convergence to critical points using the Kurdyka-Łojasiewicz property.
We use random walks to experimentally compute the first few terms of the cogrowth series for a finitely presented group. We propose candidates for the amenable radical of any non-amenable group, and a Følner sequence for any amenable group, based on convergence properties of random walks.
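In the simplest possible instance (the group $\mathbb{Z}$ with generators $\pm 1$, shown here only to fix ideas — the interesting computations in the talk concern non-abelian groups), the cogrowth-type count of words representing the identity can be checked exhaustively against the closed form $\binom{2n}{n}$.

```python
from itertools import product
from math import comb

# For Z generated by {+1, -1}, a word represents the identity iff its
# letters sum to 0; the number of such words of length 2n is C(2n, n).
for n in range(1, 7):
    count = sum(1 for w in product((1, -1), repeat=2 * n) if sum(w) == 0)
    assert count == comb(2 * n, n)
```

Random walks estimate exactly these counts when exhaustive enumeration becomes infeasible.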
The Hardy and Paley-Wiener spaces are defined via important structural theorems relating the support of a function's Fourier transform to the growth rate of the function's analytic extension. In this talk we show that analogues of these spaces exist for Clifford-valued functions in n dimensions, using the Clifford-Fourier transform of Brackx et al. and the monogenic ($n+1$-dimensional) extension of these functions.
We consider monotone systems defined by ODEs on the positive orthant in $\mathbb{R}^n$. These systems appear in various areas of application, and we will discuss in some level of detail one of these applications related to large-scale systems stability analysis.
Lyapunov functions are frequently used in stability analysis of dynamical systems. For monotone systems so called sum- and max-separable Lyapunov functions have proven very successful. One can be written as a sum, the other as a maximum of functions of scalar arguments.
We will discuss several constructive existence results for both types of Lyapunov function. To some degree, these functions can be associated with left- and right eigenvectors of an appropriate mapping. However, and perhaps surprisingly, examples will demonstrate that stable systems may admit only one or even neither type of separable Lyapunov function.
A motion which is periodic may be considered symmetric under a transformation in time. A measure of the phase relationship these motions have with respect to a geometric figure which is symmetric under some transformation in space is presented. The implications this has for the discretised patterns generated are discussed. The talk focuses on theoretical formalisms, such as those which display the fractal patterns of 'strange attractors', rather than group theory for symmetric transformations.
We study the family of self-inversive polynomials of degree $n$ whose $j$th coefficient is $\gcd(n,j)^k$, for a fixed integer $k \geq 1$. We prove that these polynomials have all of their roots on the unit circle, with uniform angular distribution. In the process we prove some new results on Jordan's totient function. We also show that these polynomials are irreducible, apart from an obvious linear factor, whenever $n$ is a power of a prime, and conjecture that this holds for all $n$. Finally we use some of these methods to obtain general results on the zero distribution of self-inversive polynomials and of their "duals" obtained from the discrete Fourier transforms of the coefficients sequence. (Joint work with Sinai Robins).
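The self-inversive structure of these polynomials can be seen directly from the coefficients: since $\gcd(n, j) = \gcd(n, n-j)$, the coefficient sequence of $p(z) = \sum_{j=0}^{n} \gcd(n,j)^k z^j$ is a palindrome, i.e. $z^n p(1/z) = p(z)$. A quick check (an illustration of the setup only; the location of the roots is the substance of the talk):

```python
from math import gcd

# coefficient palindrome <=> self-inversive with unimodular "inversive" factor;
# note gcd(n, 0) = n = gcd(n, n), so the endpoint coefficients also match
for n in range(1, 50):
    for k in (1, 2, 3):
        c = [gcd(n, j) ** k for j in range(n + 1)]
        assert c == c[::-1]
```

Self-inversiveness alone does not force roots onto the unit circle — that, and the uniform angular distribution, is what the theorem in the talk establishes.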
I will talk a bit about the benefits of a regular outlook.
We discuss problems of approximation of an irrational number by rationals whose numerators and denominators lie in prescribed arithmetic progressions. Results are given, on the one hand, from both a metrical and a non-metrical point of view, and on the other, from both an asymptotic and a uniform point of view. The principal novelty of this theory is a Khintchine-type theorem for uniform approximation in this setup. Time permitting, some applications of this work will be discussed.
A dimension adaptive algorithm for sparse grid quadrature in reproducing kernel Hilbert spaces on products of spheres uses a greedy algorithm to approximately solve a down-set constrained binary knapsack problem. The talk will describe the quadrature problem, the knapsack problem and the algorithm, and will include some numerical examples.
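To sketch the combinatorial core (a toy illustration with made-up benefit and cost functions, not the actual quadrature error/work model of the algorithm): items are multi-indices, a feasible selection must be a down-set (closed under coordinatewise decrease), and a greedy rule adds the admissible index with the best benefit-to-cost ratio.

```python
import heapq

# toy stand-ins for the error-reduction and work of adding an index
def benefit(idx): return 2.0 ** (-sum(idx))
def cost(idx):    return 2.0 ** sum(idx)

def greedy_downset(dim, budget):
    chosen, spent = set(), 0.0
    root = (0,) * dim
    frontier = [(-benefit(root) / cost(root), root)]
    while frontier:
        ratio, idx = heapq.heappop(frontier)
        if idx in chosen or spent + cost(idx) > budget:
            continue
        chosen.add(idx); spent += cost(idx)
        for k in range(dim):
            succ = tuple(v + (i == k) for i, v in enumerate(idx))
            # succ becomes admissible only once all its predecessors are chosen,
            # which keeps the selection a down-set
            preds = (tuple(v - (i == k2) for i, v in enumerate(succ))
                     for k2 in range(dim) if succ[k2] > 0)
            if all(p in chosen for p in preds):
                heapq.heappush(frontier, (-benefit(succ) / cost(succ), succ))
    return chosen

S = greedy_downset(2, budget=20)
# the result is a down-set: every coordinatewise-smaller index is included
assert all(tuple(v - (i == k) for i, v in enumerate(idx)) in S
           for idx in S for k in range(2) if idx[k] > 0)
```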
We will be answering the following question raised by Christopher Bishop:
'Suppose we stand in a forest with tree trunks of radius $r > 0$ and no two trees centered closer than unit distance apart. Can the trees be arranged so that we can never see further than some distance $V < \infty$, no matter where we stand and what direction we look in? What is the size of $V$ in terms of $r$?'
The methods used to study this problem involve Fourier analysis and sharp estimates of exponential sums.
We consider the stability of a class of abstract positive systems originating from the recurrence analysis of stochastic systems, such as multiclass queueing networks and semimartingale reflected Brownian motions. We outline that this class of systems can also be described by differential inclusions in a natural way. We will point out that, because of the positivity of the systems, the set-valued map defining the differential inclusion is not upper semicontinuous in general, and thus well-known characterizations of asymptotic stability in terms of the existence of a (smooth) Lyapunov function cannot be applied to this class of positive systems. Following an abstract approach based on common properties of the positive systems under consideration, we show that asymptotic stability is equivalent to the existence of a Lyapunov function. Moreover, we examine the existence of smooth Lyapunov functions. Under an assumption on the trajectories which demands, for any trajectory, the existence of a neighboring trajectory whose difference from it grows linearly in time and in the distance of the starting points, we prove the existence of a $C^\infty$-smooth Lyapunov function. Viewed from the differential inclusions perspective, it turns out that differential inclusions defined by Lipschitz continuous set-valued maps taking nonempty, compact and convex values have this property.
We consider identities satisfied by discrete analogues of Mehta-like integrals. The integrals are related to Selberg’s integral and the Macdonald conjectures. Our discrete analogues have the form
$$S_{\alpha,\beta,\delta} (r,n) := \sum_{k_1,...,k_r\in\mathbb{Z}} \prod_{1\leq i < j\leq r} |k_i^\alpha - k_j^\alpha|^\beta \prod_{j=1}^r |k_j|^\delta \binom{2n}{n+k_j},$$where $\alpha,\beta,\delta,r,n$ are non-negative integers subject to certain restrictions.
In the cases that we consider, it is possible to express $S_{\alpha,\beta,\delta} (r,n)$ as a product of Gamma functions and simple functions such as powers of two. For example, if $1 \leq r \leq n$, then $$S_{2,2,3} (r,n) = \prod_{j=1}^r \frac{(2n)!j!^2}{(n-j)!^2}.$$
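Identities of this kind can be checked numerically for small parameters, since the binomial factor vanishes for $|k_j| > n$ and the sum is therefore finite. A minimal brute-force sketch (the function names `S` and `rhs` are mine):

```python
from math import comb, factorial
from itertools import product

def S(alpha, beta, delta, r, n):
    """Brute-force evaluation of the discrete sum S_{alpha,beta,delta}(r,n).
    Since binom(2n, n+k) = 0 for |k| > n, the sum runs over k in [-n, n]^r."""
    total = 0
    for k in product(range(-n, n + 1), repeat=r):
        term = 1
        for i in range(r):
            for j in range(i + 1, r):
                term *= abs(k[i] ** alpha - k[j] ** alpha) ** beta
        for kj in k:
            term *= abs(kj) ** delta * comb(2 * n, n + kj)
        total += term
    return total

def rhs(r, n):
    """Stated closed form for S_{2,2,3}(r,n), valid for 1 <= r <= n."""
    p = 1
    for j in range(1, r + 1):
        p *= factorial(2 * n) * factorial(j) ** 2 // factorial(n - j) ** 2
    return p

# Verify the identity S_{2,2,3}(r,n) = prod_{j=1}^r (2n)! j!^2 / (n-j)!^2
for n in range(1, 5):
    for r in range(1, n + 1):
        assert S(2, 2, 3, r, n) == rhs(r, n)
```

For example, $S_{2,2,3}(2,2) = 2304 = 24 \cdot 96$, matching the product formula.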
The emphasis of the talk will be on how such identities can be obtained, with a high degree of certainty, using numerical computation. In other cases the existence of such identities can be ruled out, again with a high degree of certainty. We shall not give detailed proofs, but will outline the ideas behind some of them. These involve $q$-series identities and arguments based on non-intersecting lattice paths.
This is joint work with Christian Krattenthaler and Ole Warnaar.
The use of GPUs for scientific computation has undergone phenomenal growth over the past decade, as hardware originally designed with limited instruction sets for image generation and processing has become fully programmable and massively parallel. This talk discusses the classes of problem that can be attacked with such tools, as well as some practical aspects of implementation. A direction for future research by the speaker is also discussed.
I am going to discuss a construction of a functional calculus $$f\mapsto f(A,B),$$ where $A$ and $B$ are noncommuting self-adjoint operators, and the problem of estimating the norms $\|f(A_1,B_1)-f(A_2,B_2)\|$, where the pair $(A_2,B_2)$ is a perturbation of the pair $(A_1,B_1)$.
We'll answer the question "What's a wavelet?" and discuss continuous wavelet transforms on the line and connections with representation theory and singular integrals. The focus will then turn to discretization techniques, including multiresolution analysis. Matrix completion problems arising from higher-dimensional wavelet constructions will also be described.
Firstly, from [1] we consider a mixed formulation of an elliptic obstacle problem for a second-order operator and present an hp-FE interior penalty discontinuous Galerkin (IPDG) method. The primal variable is approximated by a linear combination of Gauss-Lobatto-Lagrange (GLL) basis functions, whereas the discrete Lagrange multiplier is a linear combination of biorthogonal basis functions. A residual-based a posteriori error estimate is derived; for its construction the approximation error is split into the discretization error of a linear variational equality problem and additional consistency and obstacle-condition terms.
Secondly, an hp-adaptive $C^0$ interior penalty method for the bi-Laplace obstacle problem is presented from [2]. Again we take a mixed formulation, using GLL basis functions for the primal variable and biorthogonal basis functions for the Lagrange multiplier, and also present a residual-based a posteriori error estimate. For both c