Friday February 4
No talk today
Friday February 11
|Title||Multilevel Optimization Methods for Engineering Design and PDE-Constrained Optimization|
|Department of Systems Engineering and Operations Research|
|George Mason University|
Many large nonlinear optimization problems are based upon a hierarchy of models, corresponding to levels of discretization or detail in the problem. This is true for many engineering design problems, as well as for optimization problems that include partial differential equations as constraints. Optimization-based multilevel methods – that is, multilevel methods based on solving coarser approximations of an optimization problem – are designed to solve such multilevel problems efficiently by taking explicit advantage of the hierarchy of models. The methods are generalizations of more traditional multigrid methods for solving partial differential equations. However, the optimization approach admits a richer class of models and has better guarantees of convergence. The optimization-based multilevel methods also generalize model-management approaches for solving engineering design problems. These multilevel methods are a powerful tool and can dramatically outperform traditional optimization algorithms.
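As a toy illustration of the coarse-correction idea behind such methods (not code from the talk), the sketch below runs one MG/Opt-style two-level V-cycle on a separable quadratic. The grid-transfer operators, smoother, and objective are all illustrative choices; the key ingredient is the first-order shift `v`, which makes the coarse model's gradient agree with the restricted fine gradient.

```python
# Minimal two-level "MG/Opt"-style V-cycle sketch on the toy quadratic
# f(x) = 0.5*||x||^2 - b.x.  Grid transfer is pairwise averaging/duplication;
# everything here is illustrative, not a production multilevel solver.

def f(x, b):
    return sum(0.5 * xi * xi - bi * xi for xi, bi in zip(x, b))

def grad(x, b):                       # gradient of f
    return [xi - bi for xi, bi in zip(x, b)]

def restrict(x):                      # restriction R: pairwise average
    return [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]

def prolong(y):                       # prolongation P: duplication
    out = []
    for yi in y:
        out.extend([yi, yi])
    return out

def smooth(x, b, steps=3, lr=0.5):    # a few gradient-descent steps
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x, b))]
    return x

def vcycle(x, b):
    x = smooth(x, b)                  # pre-smooth on the fine level
    xH = restrict(x)
    bH = restrict(b)
    # First-order correction: shift the coarse gradient so that, at xH,
    # it equals the restriction of the fine gradient.
    v = [gH - rg for gH, rg in zip(grad(xH, bH), restrict(grad(x, b)))]
    # Coarse model fH(y) - v.y has gradient y - bH - v, so its minimizer
    # is available in closed form for this toy objective.
    yH = [bHi + vi for bHi, vi in zip(bH, v)]
    e = prolong([yi - xi for yi, xi in zip(yH, xH)])
    x = [xi + ei for xi, ei in zip(x, e)]   # coarse-grid correction
    return smooth(x, b)               # post-smooth

b = [1.0, 2.0, 3.0, 4.0]
x = vcycle([0.0] * 4, b)
# One cycle drives x close to the minimizer x = b.
```

On a real PDE-constrained problem the coarse minimization is itself approximate and recursive; here the quadratic makes it exact, which is what keeps the sketch short.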
Biosketch: Stephen Nash received a B.Sc. (Honours) degree in mathematics in 1977 from the University of Alberta, Canada, and a Ph.D. in computer science in 1982 from Stanford University. He is a Professor of Systems Engineering and Operations Research in the School of Information Technology and Engineering. Prior to coming to George Mason University, he taught at The Johns Hopkins University. From 2005 to 2008 he served as a program officer at the National Science Foundation. He has also had professional associations with the National Institute of Standards and Technology and Argonne National Laboratory. His research activities are centered in scientific computing, especially nonlinear optimization. He has been a member of the editorial boards of Operations Research, Mathematical Methods of Operations Research, Computers in Science & Engineering, the SIAM Journal on Scientific Computing, and the Journal of the American Statistical Association. Homepage: http://seor.gmu.edu/faculty/nash.html
Friday February 18
|Title||Optimization and Control Approaches to Constrained Spline Estimation|
|Department of Mathematics and Statistics|
Estimating functions whose shape and/or dynamics are subject to inequality constraints finds broad applications in statistics, engineering, and science. In this talk, we discuss recent optimization and control approaches to constrained penalized splines (P-splines) and constrained smoothing splines, as well as their applications. We first study monotone P-spline estimation and its asymptotic analysis. The optimality conditions for spline coefficients yield a size-dependent complementarity problem. The uniform Lipschitz property of optimal spline coefficients is established via piecewise linear and polyhedral theories, leading to uniform stochastic boundedness and consistency. By approximating the estimator by a differential equation subject to boundary conditions, we develop asymptotic normality using a Green’s function. As an application, we use the monotone P-splines to estimate a nondecreasing regulatory function in a dynamical genetic network. Finally, we discuss an optimal control approach to constrained smoothing splines. Optimality conditions are derived, and their computational issues are discussed.
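For readers unfamiliar with shape-constrained estimation, the simplest instance is not a P-spline at all but isotonic regression: the pool-adjacent-violators algorithm (PAVA) below computes the least-squares nondecreasing fit to a sequence. It is offered only as a minimal illustration of monotone-constrained least squares, not as the monotone P-spline estimator discussed in the talk.

```python
# Pool-adjacent-violators algorithm (PAVA): least-squares fit of a
# nondecreasing sequence to data.  A much simpler relative of the
# monotone P-spline problem; illustration only.

def pava(y):
    """Return the nondecreasing vector g minimizing sum((g - y)**2)."""
    blocks = []  # each block is [sum, count]; merged blocks share one mean
    for v in y:
        blocks.append([v, 1])
        # Merge while the last two block means violate monotonicity.
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # -> [1.0, 2.5, 2.5, 4.0]
```

The P-spline version replaces the pointwise fit by a penalized spline basis with a monotonicity constraint on the coefficients, which is what produces the complementarity problem mentioned above.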
Friday February 25
|Title||Two-parameter heavy-traffic limits for infinite-server queues|
|The Harold and Inge Marcus Department of Industrial and Manufacturing Engineering|
|Pennsylvania State University|
For infinite-server queues with i.i.d. general non-exponential service-time distributions and general arrival processes, possibly with time-varying arrival rates, I will discuss heavy-traffic limits for the two-parameter stochastic processes Q^e(t, y) and Q^r(t, y), representing the number of customers in the system at time t with elapsed service times less than or equal to y, and with residual service times strictly greater than y, respectively. The two-parameter stochastic-process limits are in the space D([0,∞), D) of D-valued functions, and the variability of service times is captured by the Kiefer process with second argument set equal to the service-time cdf. I will also discuss heavy-traffic limits for these processes when the service times are weakly dependent, and their implications.
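To make the two-parameter counts concrete, the toy Monte Carlo sketch below simulates an M/G/∞-type system (Poisson arrivals, exponential services, both rates illustrative) and evaluates Q^e(t, y) and Q^r(t, y) directly from their definitions; it is a didactic aid, not part of the heavy-traffic analysis.

```python
import random

def two_parameter_counts(t, y, arrivals, services):
    """Q^e(t,y): customers in system at t with elapsed service <= y.
       Q^r(t,y): customers in system at t with residual service > y."""
    qe = qr = 0
    for a, s in zip(arrivals, services):
        if a <= t < a + s:            # customer is in the system at time t
            if t - a <= y:            # elapsed service time
                qe += 1
            if (a + s) - t > y:       # residual service time (strictly >)
                qr += 1
    return qe, qr

random.seed(1)
# Poisson arrivals at rate 2, exponential services with mean 1 (illustrative).
arrivals, clock = [], 0.0
for _ in range(200):
    clock += random.expovariate(2.0)
    arrivals.append(clock)
services = [random.expovariate(1.0) for _ in arrivals]

qe, qr = two_parameter_counts(10.0, 0.5, arrivals, services)
total = two_parameter_counts(10.0, float("inf"), arrivals, services)[0]
# Sanity: Q^e(t, inf) and Q^r(t, 0) both recover the total number in system.
```

Scaling many such independent replications and centering around the fluid limit is what produces the Kiefer-process limits described in the abstract.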
Friday March 4
|Title||Multivariate Magnetic Resonance Evaluation of Cartilage: The White Stuff at the End of Bones that Goes Bad in Arthritis|
Noninvasive magnetic resonance imaging (MRI) analysis of cartilage may play an important role in early detection of, and development of therapeutic protocols for, osteoarthritis. Correlations between MRI parameters and cartilage matrix integrity have been established in many studies, but the substantial overlap in values observed for normal and degraded cartilage greatly limits the quality of these analyses. We therefore applied established multivariate analyses to the MRI outcome measures (T1, T2, km, ADC), where T1 and T2 are longitudinal and transverse relaxation times, km is the magnetization transfer rate and ADC is the apparent diffusion coefficient. Multiparametric k-means clustering led to no improvement over univariate analysis based on T1, with a maximum sensitivity and specificity in the range of 60-70% for the detection of degradation using T1, and in the range of 80% sensitivity but only 36% specificity using the parameter pair (T1, km). In contrast, model-based analysis using more general Gaussian clusters resulted in markedly improved classification in validation sets, with sensitivity and specificity reaching levels of 80-90% using the pair (T1, km). This clearly illustrates the importance of accounting for data structure when selecting a multivariate approach. Further improvement was achieved through use of the support vector machine formalism, which requires even less restrictive assumptions regarding data structure. Finally, fuzzy, or continuous, clustering techniques were implemented which may be more appropriate to the continuum of degradation seen in degenerative cartilage disease. Mapping such fuzzy classification provides a novel form of MRI contrast.
Friday March 11
|Title||Information aggregation and social learning in networks|
|University of Pennsylvania|
Over the past few years there has been a rapidly growing interest in analysis, design and optimization of various types of collective behaviors in networked dynamic systems. Collective phenomena (such as flocking, schooling, rendezvous, synchronization, and formation flight) have been studied in a diverse set of disciplines, ranging from computer graphics and statistical physics to distributed computation, and from robotics and control theory to sociology and economics. A common underlying goal in such studies is to understand the emergence of consensus from local interactions. In this talk, I will discuss some of the recent advances in this area and their applications to information aggregation in social networks. I present a model of social learning in which agents try to discover an unknown state of the world by aggregating their private observations with the beliefs of their neighbors. Each agent’s belief is updated as a weighted average of her Bayesian posterior (with respect to her own private observations) and the neighbors’ beliefs. When the underlying social network is strongly connected, I will show that weak and strong learning in the sense of Kalai and Lehrer occur under surprisingly mild assumptions, i.e., that information is correctly aggregated and the agents reach consensus on their belief of the correct state. Furthermore, I will show that when the learning occurs, the rate of convergence of beliefs is exponential. This is joint work with Pooya Molavi (Penn), Alireza Tahbaz-Salehi (MIT), and Alvaro Sandroni (Kellogg).
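The belief-update rule described above is easy to simulate. The sketch below runs it on a strongly connected three-agent directed ring with two states and binary signals; the network, signal likelihoods, and self-weight are all illustrative choices, not parameters from the talk.

```python
import random

random.seed(0)
STATES = [0, 1]
TRUE_STATE = 1
# Signal likelihoods LIK[state][signal]: agents observe conditionally iid signals.
LIK = {0: [0.7, 0.3], 1: [0.3, 0.7]}

def bayes_update(belief, signal):
    """Bayesian posterior over STATES after observing `signal`."""
    post = [belief[s] * LIK[s][signal] for s in STATES]
    z = sum(post)
    return [p / z for p in post]

# Strongly connected directed ring of 3 agents; each agent puts weight `a`
# on her own posterior and splits the rest over her in-neighbors.
neighbors = {0: [1], 1: [2], 2: [0]}
a = 0.5
beliefs = {i: [0.5, 0.5] for i in range(3)}

for _ in range(300):
    signals = {i: random.choices(STATES, weights=LIK[TRUE_STATE])[0]
               for i in beliefs}
    new = {}
    for i in beliefs:
        post = bayes_update(beliefs[i], signals[i])
        nb = neighbors[i]
        new[i] = [a * post[s] + (1 - a) * sum(beliefs[j][s] for j in nb) / len(nb)
                  for s in STATES]
    beliefs = new

# With informative signals and a strongly connected network, every agent's
# belief on the true state approaches 1, as the learning results predict.
```

The update is a convex combination, so each belief vector remains a probability distribution; the interesting content of the theory is that this non-Bayesian averaging still aggregates information correctly.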
Biosketch: Ali Jadbabaie received his BS degree (with High Honors) in Electrical Engineering from Sharif University of Technology in 1995. After a brief period of working as a control engineer, he received a Masters degree in Electrical and Computer Engineering from the University of New Mexico, Albuquerque in 1997 and a Ph.D. degree in Control and Dynamical Systems from the California Institute of Technology in 2001. From July 2001 to July 2002 he was a postdoctoral associate in the Department of Electrical Engineering at Yale University. Since July 2002 he has been at the University of Pennsylvania, Philadelphia, PA, where he is currently the Skirkanich Associate Professor of Innovation in Electrical and Systems Engineering and Computer & Information Science, and the founding co-director of the Singh Program in Market and Social Systems Engineering, a new undergraduate program that blends electrical engineering with operations research, economics, and computer science. He is a recipient of an NSF CAREER award, an ONR Young Investigator award, the O. Hugo Schuck Best Paper award of the American Automatic Control Council, and the George S. Axelby Outstanding Paper Award of the IEEE Control Systems Society. His students have been recipients and finalists of best student paper awards at the American Control Conference and the IEEE Conference on Decision and Control. His research is broadly in the interface of control theory and network science, specifically, analysis, design and optimization of networked dynamical systems with applications to sensor networks, multi-robot formation control, opinion aggregation, social learning and other collective phenomena.
Friday March 18
|Title||Nonlinear Solvers in Large-Scale Computational Science: Challenges and Opportunities|
|Speaker||Lois Curfman McInnes|
|Argonne National Laboratory|
Parallel implicit solution strategies have proven robust and efficient in resolving challenging nonlinearities in many large-scale PDE-based simulations. We discuss the use of preconditioned Newton-Krylov methods in the PETSc library for parallel applications in coupled core-edge plasma and multiphase reactive flow, and we introduce new work on capabilities for the solution of differential variational inequalities as motivated by heterogeneous materials modeling. We also discuss challenges and opportunities in developing robust, scalable, and extensible algorithms and software to support multimodel and multiphysics simulations on emerging high-performance architectures.
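The structure of a Newton-Krylov iteration (as implemented at scale by PETSc's SNES solvers) can be sketched in a few dozen lines: an outer Newton loop whose linear systems are solved by a Krylov method using only matrix-free Jacobian-vector products. The code below is a schematic on a toy symmetric problem with no preconditioning; it is not PETSc code, and all names in it are illustrative.

```python
def F(u, b):
    # Nonlinear residual F(u) = u**3 + u - b (a toy problem with a
    # symmetric positive definite Jacobian, so plain CG applies).
    return [ui**3 + ui - bi for ui, bi in zip(u, b)]

def jv(u, v, b, eps=1e-7):
    # Matrix-free Jacobian-vector product via finite differences:
    # J(u) v ~ (F(u + eps*v) - F(u)) / eps
    fu = F(u, b)
    fp = F([ui + eps * vi for ui, vi in zip(u, v)], b)
    return [(p - q) / eps for p, q in zip(fp, fu)]

def cg(u, rhs, b, iters=50, tol=1e-12):
    # Unpreconditioned conjugate gradients for J(u) s = rhs.
    s = [0.0] * len(rhs)
    r = rhs[:]
    p = r[:]
    rr = sum(x * x for x in r)
    for _ in range(iters):
        if rr < tol:
            break
        ap = jv(u, p, b)
        alpha = rr / sum(x * y for x, y in zip(p, ap))
        s = [si + alpha * pi for si, pi in zip(s, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        rr_new = sum(x * x for x in r)
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return s

def newton_krylov(b, u0, steps=20):
    u = u0[:]
    for _ in range(steps):
        f = F(u, b)
        if max(abs(x) for x in f) < 1e-10:
            break
        step = cg(u, [-x for x in f], b)   # inexact Newton step
        u = [ui + si for ui, si in zip(u, step)]
    return u

u = newton_krylov(b=[2.0, 10.0, 0.0], u0=[1.0, 1.0, 1.0])
# Each component solves u^3 + u = b_i, e.g. u[0] -> 1, u[1] -> 2, u[2] -> 0.
```

What makes the approach attractive for PDE problems is precisely that `jv` never forms the Jacobian; in production codes the Krylov solve is preconditioned, which is where most of the scalability effort lives.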
Friday March 25
Friday April 1
No talk today
Friday April 8
|Title||Optimal Semistable Control and Linear-Quadratic Semistabilizers|
|Texas Tech University|
This talk presents a new H_2 control framework for semistable linear systems with two basic formulations: optimal semistable control (OSC) for deterministic linear systems and linear-quadratic semistabilizers (LQS) for stochastic linear systems. This new control problem is motivated by three engineering problems: mass-damper systems, consensus with imperfect information, and linear-quadratic Gaussian heat engines. In this talk, we discuss necessary and sufficient conditions based on the new notions of semicontrollability and semiobservability to guarantee semistability of linear systems by using the developed semistable Lyapunov equations. Unlike the standard H_2 control problem, which has a unique solution under certain regularity conditions, a complicating feature of the OSC and LQS control problems is that the semistable Lyapunov equations can admit multiple solutions, leading to nonunique solutions to OSC and LQS. With this new result, we further present a new optimization-based framework to design OSC and LQS controllers for linear systems by converting the original semistable control problems into convex optimization problems. Some open problems are also discussed in the talk.
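For orientation, the standard (asymptotically stable, not semistable) Lyapunov equation A^T P + P A + Q = 0 has a closed-form solution when A is diagonal and Hurwitz, since it decouples entrywise: (a_i + a_j) P_ij + Q_ij = 0. The sketch below shows this baseline case only; the semistable variants in the talk, where A has eigenvalues on the imaginary axis and solutions can be nonunique, require the generalized equations the abstract describes.

```python
def lyapunov_diagonal(a, Q):
    """Solve A^T P + P A + Q = 0 for A = diag(a) with all a_i < 0 (Hurwitz).
    Entrywise: (a_i + a_j) * P_ij + Q_ij = 0, so P_ij = -Q_ij / (a_i + a_j).
    This closed form covers only the diagonal, asymptotically stable case."""
    n = len(a)
    return [[-Q[i][j] / (a[i] + a[j]) for j in range(n)] for i in range(n)]

def lyapunov_residual(a, P, Q):
    # Max-entry residual of A^T P + P A + Q with A = diag(a).
    n = len(a)
    return max(abs((a[i] + a[j]) * P[i][j] + Q[i][j])
               for i in range(n) for j in range(n))

a = [-1.0, -2.0]
Q = [[1.0, 0.0], [0.0, 1.0]]
P = lyapunov_diagonal(a, Q)
# P = [[0.5, 0.0], [0.0, 0.25]], and the residual vanishes.
```

The nonuniqueness phenomenon in the talk appears exactly when some a_i + a_j = 0, which is where this division breaks down; that degeneracy is what the semicontrollability/semiobservability conditions address.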
Friday April 15
No talk today
Friday April 22
No talk today
Friday April 29
No talk today
Friday May 6
|Title||Feature Selection over Distributed Data Streams|
Monitoring data streams in a distributed system has attracted considerable interest in recent years. The task of feature selection (e.g., by monitoring the information gain of various features) requires a very high communication overhead when addressed using straightforward centralized algorithms.
While most of the existing algorithms deal with monitoring simple aggregated values, such as the frequency of occurrence of stream items, motivated by recent contributions based on geometric ideas, we present an approach based on robust stability techniques. The proposed approach enables monitoring the value of an arbitrary threshold function over distributed data streams through a set of constraints applied separately to each stream. We report experimental results on real-world data and compare the results to those recently reported in the literature.
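The communication saving from per-stream local constraints can be illustrated with a deliberately simplified stand-in for the methods above: each node stays silent while its value remains within a slack region around its last report, which bounds how far the coordinator's estimate of the global average can drift. All names, parameters, and the average-threshold setting are illustrative.

```python
# Toy sketch of local constraints for distributed threshold monitoring:
# a node communicates only when its value leaves a +/- delta band around
# its last report, so the true average stays within delta of the
# coordinator's estimate.  A simplified stand-in for the geometric method.

class Node:
    def __init__(self, value, delta):
        self.delta = delta
        self.reported = value   # last value sent to the coordinator

    def update(self, value):
        """Return the new value if the local constraint is violated, else None."""
        if abs(value - self.reported) > self.delta:
            self.reported = value
            return value
        return None

def monitor(streams, threshold, delta):
    """streams: one equal-length value sequence per node.
    Flags, per time step, whether the average MAY exceed the threshold,
    and counts the messages actually sent."""
    nodes = [Node(s[0], delta) for s in streams]
    messages, alarms = 0, []
    for t in range(1, len(streams[0])):
        for n, s in zip(nodes, streams):
            if n.update(s[t]) is not None:
                messages += 1
        estimate = sum(n.reported for n in nodes) / len(nodes)
        # The true average lies within delta of the estimate.
        alarms.append(estimate + delta > threshold)
    return alarms, messages

streams = [[1.0, 1.1, 1.0, 3.0], [2.0, 2.0, 2.1, 3.5]]
alarms, messages = monitor(streams, threshold=2.5, delta=0.5)
# Only the large jumps in the final step trigger reports: messages == 2.
```

The geometric approach referenced above generalizes this idea from averages to arbitrary threshold functions by assigning each node a safe region in the data space, which is what makes nonlinear monitoring tasks like information-gain-based feature selection tractable.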