Friday September 11
|Title||Modularity and Graph Algorithms|
|Speaker||Dr. Joe McCloskey|
|National Security Agency|
A number of graph partitioning algorithms are based on the concept of modularity. In particular, Clauset, Newman, and Moore (CNM) have developed a greedy agglomerative graph partitioning algorithm that scales well but is known to have several flaws. Fortunato and Barthelemy have performed a rigorous analysis of the CNM algorithm that elucidates its problems. More recently, Berry, Hendrickson, Laviolette, and Phillips have derived a weighted variant of CNM that performs much better in practice. This talk will focus on a different version of the parent CNM algorithm, based on a statistical re-interpretation of CNM, that also addresses some of the issues with the original algorithm.
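As a concrete reference point (not material from the talk itself), the modularity that CNM greedily maximizes can be computed directly from Newman's definition, Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j). The sketch below evaluates it for a made-up example graph, two triangles joined by a bridge edge, under the natural two-community split:

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity: fraction of edges inside communities minus the
    fraction expected under the degree-preserving null model."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Observed fraction of edges that fall inside a community.
    inside = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Expected fraction under the configuration (null) model.
    comm_deg = defaultdict(int)
    for node, d in deg.items():
        comm_deg[community[node]] += d
    expected = sum(d * d for d in comm_deg.values()) / (2 * m) ** 2
    return inside - expected

# Two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(modularity(edges, community))  # 5/14 ≈ 0.357
```

Agglomerative algorithms such as CNM start from singleton communities and repeatedly merge the pair of communities that increases this Q the most.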
Friday September 18
|Title||The Gibbs Phenomenon: The Saga|
|Speaker||Dr. James Alexander|
|Department of Mathematics|
|Case Western Reserve University and NSF|
This is a historical talk. The Gibbs phenomenon is a feature of the behavior of Fourier series of a discontinuous function. The story of its elucidation at the turn of the 20th century illustrates that mathematical research is very much a human activity – including such things as misattribution, disputed claims of priority, rhetorical long knives, vague (and not so vague) insults, uncritical parroting, etc. Josiah Willard Gibbs was only one person in the story. Other luminaries include Albert Michelson, A.E.H. Love, Maxime Bôcher (of the AMS Bôcher prize), Leopold Fejér, Thomas Grönwall, and Henri Poincaré. In this lecture, we review this history from the late 1800s through the 1910s, together with sequels and prequels.
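For readers who have not met the phenomenon itself: the partial Fourier sums of a unit square wave overshoot the jump by roughly 9% (peaking near (2/π)·Si(π) ≈ 1.179), and the overshoot does not shrink as more terms are added. A quick numerical illustration (not part of the historical material):

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of the square wave sign(sin x):
    (4/pi) * sum of sin(kx)/k over the first n_terms odd k."""
    return (4 / math.pi) * sum(math.sin(k * x) / k
                               for k in range(1, 2 * n_terms, 2))

def overshoot(n_terms):
    """Peak value of the partial sum near the jump at x = 0."""
    k_max = 2 * n_terms - 1
    # The highest lobe sits near x = pi / k_max; scan that region finely.
    xs = [3.0 * math.pi / k_max * i / 300 for i in range(1, 301)]
    return max(square_wave_partial_sum(x, n_terms) for x in xs)

for n in (10, 100, 200):
    print(n, round(overshoot(n), 4))  # stays near 1.179, never approaches 1
```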
Friday September 25
|Title||Generalized High Order Compact Schemes|
|Speaker||Dr. Bill Spotz, Program Manager|
|Office of Science, Advanced Scientific Computing Research|
|US Department of Energy|
“High order compact” (HOC) finite difference methods refer to methods that obtain an order of accuracy that is higher than typically obtained on a given stencil of grid points. As one example, the standard central difference expressions for the first and second derivatives on three-point stencils are second order, but can be improved to fourth order on the same stencil by changing from an explicit to an implicit formulation. The compact, high order schemes in this talk obtain their higher accuracy by including information from the governing partial differential equation (either the equation itself or its derivatives) in the formulation. This idea dates back to Lax-Wendroff, but the approach has been expanded from time derivatives to spatial derivatives in 1D, 2D and 3D, for the Poisson equation and convection-diffusion, as well as other PDEs of interest. The advantages include not only higher order accuracy, but also the serendipitous suppression of artificial oscillations. The disadvantages include great algebraic complexity that is prone to errors and limited flexibility in the grids that can be used, thus limiting the class of problems that can be solved. This talk will focus on the Generalized High Order Compact (GHOC) scheme, which attempts to address these disadvantages of the HOC scheme. It is an extension of the Generalized Finite Difference Method, which works on unstructured grids, and simplifies the algebra by transferring tedious elements of the method derivation to the computer. I will cover the derivation of GHOC, as well as error analysis and the results of numerical experiments.
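To make the three-point example concrete: the classical fourth-order compact (implicit) scheme for the second derivative reads (1/12)(u''_{i-1} + 10 u''_i + u''_{i+1}) = (u_{i-1} − 2 u_i + u_{i+1}) / h². A minimal order check (a generic illustration of the HOC idea, not the GHOC scheme of the talk) plugs exact values of u = sin and its second derivative into both sides; the residual should shrink like h⁴, so halving h should reduce it by a factor of about 16:

```python
import math

def truncation(h, x=1.0):
    """Truncation error of the fourth-order compact scheme for u'' at u = sin,
    using exact derivative values on both sides of the scheme."""
    rhs = (math.sin(x - h) - 2 * math.sin(x) + math.sin(x + h)) / h**2
    # Implicit left-hand side: weighted average of exact u'' = -sin values.
    lhs = (-math.sin(x - h) - 10 * math.sin(x) - math.sin(x + h)) / 12
    return rhs - lhs

r1, r2 = truncation(0.1), truncation(0.05)
print(r1 / r2)  # ≈ 16, confirming fourth-order accuracy on a 3-point stencil
```

By contrast, the explicit central difference alone would give a ratio near 4 (second order); the implicit weighting on the left-hand side is what buys the extra two orders on the same stencil.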
Friday October 02
|Title||Ergodicity of Many-Server Queues with Abandonment|
|Speaker||Dr. Weining Kang|
|Department of Mathematics and Statistics|
We consider a queuing system with N identical servers serving a single class of customers in first-come-first-served order. Customers waiting in the queue abandon the system when they run out of patience. Existence of stationary distributions of the state descriptor of such a system is established. Moreover, under the uniqueness assumption on the invariant state of the associated fluid model solution, we show that, as the number of servers goes to infinity, the sequence of stationary distributions for the fluid scaled system state descriptors converges to the unique invariant state. We also show some counterexamples when the uniqueness assumption fails to hold.
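In the Markovian special case (exponential service and patience times, the M/M/N+M or "Erlang-A" queue), the system is a birth-death chain whose stationary distribution can be computed directly. The sketch below is a generic illustration of why abandonment stabilizes the system, not the fluid-model analysis of the talk; note the chain is positive recurrent even with arrival rate λ above the total service capacity Nμ:

```python
def erlang_a_stationary(lam, mu, theta, n_servers, cutoff=500):
    """Stationary distribution of the M/M/N+M queue: in state k the birth
    rate is lam and the death rate is min(k, N)*mu + max(k - N, 0)*theta,
    the extra theta terms coming from abandonment of waiting customers."""
    w = [1.0]  # unnormalized weights from the birth-death balance equations
    for k in range(1, cutoff + 1):
        death = min(k, n_servers) * mu + max(k - n_servers, 0) * theta
        w.append(w[-1] * lam / death)
    total = sum(w)
    return [p / total for p in w]

# Overloaded system: lam = 12 exceeds N*mu = 10, yet abandonment (theta = 2)
# keeps the queue from growing without bound.
pi = erlang_a_stationary(lam=12.0, mu=1.0, theta=2.0, n_servers=10)
print(sum(pi))                               # ≈ 1: a proper distribution
print(sum(k * p for k, p in enumerate(pi)))  # mean number in system
```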
Friday October 09
No talk today due to a special Student Recruitment Event
Friday October 16
No talk today
Friday October 23
|Title||Statistical Properties of the Regularized Least Squares Functional, a Hybrid LSQR Newton Method for Finding the Regularization Parameter and Application in Image Deblurring and Signal Restoration|
|Speaker||Dr. Rosemary A. Renaut|
|Computational Sciences, National Science Foundation, and School of Mathematical and Statistical Sciences, Arizona State University|
Image deblurring or signal restoration can be formulated as a data-fitting least squares problem, but the problem is severely ill-posed and regularization is needed, which introduces the need to find a regularization parameter. I will review background on choosing the regularization parameter based on the statistical properties of the regularized least squares functional
|| Ax − b ||_{Wb}^2 + || D(x − x0) ||_{Wx}^2
for the solution of discretely ill-posed systems of equations. This functional was recently shown to follow a χ² distribution when the a priori information x0 on the solution is assumed to represent the mean of the solution x. Of course, for image deblurring we do not wish to assume knowledge of a prior image in order to obtain the image. On the other hand, it is possible to obtain statistical properties of the given image; hence, given the mean value of the right-hand side b, the functional again follows a χ² distribution, but a non-central one. These results can be used to design a Newton method, using a hybrid LSQR approach, for the determination of the optimal regularization parameter λ when the weight matrix is Wx = λ² I. Numerical results using test problems demonstrate the efficiency of the method, particularly for the hybrid LSQR implementation. Results are compared to another statistical method, the unbiased predictive risk (UPRE) algorithm. The method has potential for efficient image deblurring, and current work is aimed at extending the method for determining local regularization parameters. Results are illustrated for image deblurring.
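As a minimal illustration of the functional being minimized (not the speaker's χ²/Newton method), take Wb = I, D = I, and x0 = 0, so the minimizer has the closed form x(λ) = (AᵀA + λ²I)⁻¹ Aᵀb. The sketch below works a tiny, nearly singular 2×2 system by hand and shows the standard trade-off: shrinking λ reduces the data-fit residual but weakens the regularization:

```python
def tikhonov_2x2(A, b, lam):
    """Minimizer of ||Ax - b||^2 + lam^2 * ||x||^2 for a 2x2 system,
    via the normal equations (A^T A + lam^2 I) x = A^T b."""
    (a, c), (d, e) = A  # rows of A
    m11 = a * a + d * d + lam**2          # A^T A + lam^2 I, entry (1,1)
    m12 = a * c + d * e                   # off-diagonal entry
    m22 = c * c + e * e + lam**2          # entry (2,2)
    r1 = a * b[0] + d * b[1]              # A^T b
    r2 = c * b[0] + e * b[1]
    det = m11 * m22 - m12 * m12           # positive: the matrix is SPD
    return ((m22 * r1 - m12 * r2) / det, (m11 * r2 - m12 * r1) / det)

def residual(A, b, x):
    r0 = A[0][0] * x[0] + A[0][1] * x[1] - b[0]
    r1 = A[1][0] * x[0] + A[1][1] * x[1] - b[1]
    return (r0**2 + r1**2) ** 0.5

A, b = ((1.0, 1.0), (1.0, 1.001)), (2.0, 2.001)  # nearly singular system
for lam in (1.0, 0.1, 0.01):
    print(lam, residual(A, b, tikhonov_2x2(A, b, lam)))  # residual shrinks
```

The χ² criterion of the talk supplies a principled way to pick λ on this curve instead of guessing; for realistic image sizes the dense solve above is replaced by the hybrid LSQR iteration.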
Friday October 30
|Title||Implications of the constant rank constraint qualification|
|Speaker||Dr. Shu Lu|
|Department of Statistics and Operations Research|
|University of North Carolina at Chapel Hill|
We consider a parametric set defined by finitely many equality and inequality constraints under the constant rank constraint qualification (CRCQ). The CRCQ generalizes both the linear independence constraint qualification (LICQ) and the polyhedral case, and is also related to the Mangasarian-Fromovitz constraint qualification (MFCQ) in a certain way. It induces some nice properties of the set when the parameter is fixed, and some nice behavior of the set-valued map when the parameter varies. Such properties are useful in the analysis of Euclidean projectors onto the set and of variational conditions defined over the set.
Friday November 06
|Title||Nonlinear problems in EFG and Dewetted Bridgman crystal growth processes|
|Speaker||Dr. Liliana Braescu|
|Department of Computer Science|
|Faculty of Mathematics and Computer Science, West University of Timisoara, Romania|
The major problem confronting crystal growth researchers has been the development of techniques capable of monitoring and controlling the external shape of melt-grown crystals while simultaneously improving the crystal structure. These requirements have led to techniques in which crystals are grown without contact with the container walls: Czochralski, floating-zone, edge-defined film-fed growth (EFG), and Dewetted Bridgman (DW).
The goal of this talk is to demonstrate the potential of mathematical and computational modeling for a deeper understanding of two growth processes: EFG and DW. Toward this aim, starting from a nonlinear boundary-value problem for the meniscus surface determined by the Young-Laplace equation, qualitative and numerical studies of the nonlinear systems of ordinary differential equations that allow prediction of the crystal shape are presented for both crystal growth techniques. In addition, using nonlinear partial differential equations, numerical studies of the compositional uniformity of crystals grown by the EFG technique are performed.
Numerical results are given and are compared to experimental data.
Friday November 13
|Title||Joint Spectral Radius, Its Control Implications and Computation Using Generating Functions|
|Speaker||Dr. Jianghai Hu|
|School of Electrical and Computer Engineering|
The spectral radius of a single matrix characterizes the stability of a discrete-time linear system, as it describes the maximum exponential rate at which the powers of the matrix grow or decay. Its generalization, the joint spectral radius, describes the maximum exponential growth rate of matrix products drawn from a finite set of matrices. The joint spectral radius has important implications for the stability analysis of switched linear systems and arises in a variety of practical applications.
The computation of the joint spectral radius, however, is a challenging task: it is known to be an NP-hard problem. In this talk, the concept of generating functions is proposed for this purpose. It will be shown that generating functions, and quantities derived from them, not only fully characterize the joint spectral radius but also yield efficient numerical algorithms for its computation. In addition, other variants of the joint spectral radius, including its dual version, can be studied under this unified framework of generating functions.
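One elementary handle on the quantity (independent of the generating-function machinery of the talk) is the standard lower bound: for any finite product M_w of |w| matrices from the set, ρ(M_w)^{1/|w|} is at most the joint spectral radius. The sketch below enumerates short products of a classical pair of binary matrices, for which the bound at length two already gives the golden ratio, the known value of their joint spectral radius:

```python
import cmath
from itertools import product

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_radius(M):
    """Largest eigenvalue magnitude of a 2x2 matrix, via the quadratic formula."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

def jsr_lower_bound(mats, max_len):
    """max over all words w with |w| <= max_len of rho(M_w)^(1/|w|),
    a lower bound on the joint spectral radius of the set."""
    best = 0.0
    for length in range(1, max_len + 1):
        for word in product(mats, repeat=length):
            P = word[0]
            for M in word[1:]:
                P = matmul(P, M)
            best = max(best, spectral_radius(P) ** (1.0 / length))
    return best

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(jsr_lower_bound([A, B], 2))  # (1 + sqrt(5))/2 ≈ 1.618, the golden ratio
```

The catch, and the motivation for better machinery, is that in general this bound converges slowly in the word length while the number of products grows exponentially, which is one face of the NP-hardness mentioned above.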
Friday November 20
|Title||A Functional Model of Neuronal Networks|
|Speaker||Dr. James Lo|
|Department of Mathematics and Statistics|
In his 1958 book, The Computer and the Brain, John von Neumann said: “We require exquisite numerical precision over many logical steps to achieve what brains accomplish in very few short steps.” After more than 50 years, the natural question, “What are those very few short steps?” still remains to be fully answered.
This talk presents a functional model of neuronal networks resulting from my attempt to answer the question. The model, called probabilistic associative memory (PAM), has many functions that have not been explained satisfactorily or jointly by other models. The purpose of the talk is to solicit comments and suggestions on PAM and stimulate discussions on the above question.
PAM is ready for testing as a learning machine for application to recognition and understanding of handwriting, faces, speech, radiographs, license plates, video, etc.
Friday November 27
Friday December 04
|Title||Mesoscale theory of texture evolution in polycrystals|
|Speaker||Dr. Maria Emelianenko|
|Dept. of Mathematical Sciences|
|George Mason University|
Microstructure evolution is a problem of central importance in materials science. Grain boundary engineering techniques make it possible to design and manufacture materials predisposed to certain behavior under external conditions. This type of work relies heavily on statistical analysis of grain boundary evolution and of the interplay between micro- and macroscopic properties. We present an account of recent developments in mesoscale theory of texture evolution in polycrystalline materials. The talk will demonstrate how a carefully chosen combination of simulation, theory and modeling techniques can bring a deeper understanding to coarsening processes governed by the motion of triple junctions. By means of modulated stochastic process characterization and kinetic evolution equations, we aim at quantifying the rates of grain boundary disappearance events and predicting the influence of these events on macro-level materials properties. Comparison with large-scale 2D and 3D simulations will provide the necessary validation and reveal similarities and differences between these approaches.
Friday December 11
No talk today