Optimization-based modeling - an overview
Pavel Bochev, Sandia National Laboratories

Optimization-based modeling (OBM) is a “divide-and-conquer” strategy that decomposes multiphysics, multiscale operators into simpler constituent components and separates the preservation of physical properties such as a discrete maximum principle, local bounds, or monotonicity from the discretization process. In so doing, OBM relieves the discretization from tasks that impose severe geometric constraints on the mesh or entangle accuracy and resolution with the preservation of physical properties.

In a nutshell, our approach reformulates a given mathematical model into an equivalent multi-objective constrained optimization problem. The optimization objective is to minimize the discrepancy between a target approximate solution and a state, subject to constraints derived from the component physics operators and the condition that physical properties are preserved in the optimal solution. Three examples will illustrate the scope of our approach: (1) an optimization-based framework for the synthesis of robust, scalable solvers for multiphysics problems from fast solvers for their constituent physics components; (2) an optimization-based Atomistic-to-Continuum (AtC) coupling method; and (3) optimization-based methods for constrained interpolation (remap) and conservative, monotone transport.
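
Schematically, and only as an illustration of the structure just described (the symbols are generic placeholders, not the specific functionals used in the talk), the reformulation has the form

\[
\min_{u}\; \tfrac{1}{2}\,\|u - \widetilde{u}\|^{2}
\quad\text{subject to}\quad
L_{i}(u) = f_{i},\ \ i = 1,\dots,m,
\qquad
u_{\min} \le u \le u_{\max},
\]

where \(\widetilde{u}\) is the target approximate solution, \(u\) is the state, the equality constraints \(L_{i}(u) = f_{i}\) come from the constituent physics operators, and the inequality constraints stand in for the physical properties (local bounds, monotonicity, and the like) that must hold at the optimal solution.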

This talk is based on joint work with Denis Ridzal, Kara Peterson, Mitch Luskin, Derek Olson, Alex Shapeev, and Misha Shashkov. This research is supported by the Applied Mathematics Program within the Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR). The work on AtC coupling methods is supported by the ASCR as part of the Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4).



Modeling Angiogenesis in the Eye: The Good and the Bad
Yi Jiang, Georgia State University

Angiogenesis, the growth of new blood vessels from existing ones, is an important physiological process that occurs during development and wound healing, as well as in diseases such as cancer and diabetes. I will report our recent progress in modeling angiogenesis in the eye in two scenarios. The good refers to healthy blood vessel growth in the retina of mouse embryos, which is a perfect experimental model for understanding the molecular mechanisms of normal angiogenesis. The bad is the pathological blood vessel growth in age-related macular degeneration, which is the leading cause of vision loss in the elderly and a looming epidemic in our aging society. We develop cell-based, multiscale models that include intracellular, cellular, and extracellular scale dynamics, and show that the biomechanics of cell-cell and cell-matrix interactions play a crucial role in determining the dynamics of blood vessel growth initiation as well as vascular network formation. Such models show great potential as in silico Petri dishes for predictive modeling studies.



Many-core Algorithms for High-order Finite Element Methods: when time to solution matters
Tim Warburton, Department of Computational and Applied Mathematics, Rice University


The ultimate success of many modeling applications depends on time to solution. I will illustrate the critical nature of time to solution by describing a joint project between my group at Rice University and Dr David Fuentes at the MD Anderson Cancer Center. The project goal is to evaluate the role and viability of using finite element modeling as part of the treatment planning process for MR Guided Laser Induced Thermal Therapy. The success of this project will depend in great part on the ability to model individual treatments with calculations that take mere seconds.

Modern many-core processing units, including graphics processing units (GPUs), presage a new era in on-chip massively parallel computing. The advent of processors with O(1000) floating point units (FPUs) raises new issues that challenge conventional measures of “optimality” for numerical methods. The ramp-up in FPU counts with each new generation of GPU over the past four years has been accompanied by a slower increase in the memory capacity of the GPU. For example, a few hundred US dollars currently buys a parallel computer capable of performing O(4 · 10^12) floating point operations per second, but of reading only O(5 · 10^10) values from memory per second. From the point of view of numerical analysis, this means that the traditional approach of comparing the optimality of alternative numerical methods by their floating point operation count per degree of freedom has become largely irrelevant. Claims of optimality derived from this measure therefore need to be reevaluated, and the formulation of numerical methods in general needs to be revisited in light of the changing computational landscape. The presentation will touch on several important and interlinked issues that have shaped the development of solvers based on high-order finite element methods for rapidly evolving many-core architectures. We will discuss on-chip scalability, multi-GPU scalability, inter-generational GPU scaling, and specialization for element-internal structure.
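
To make the imbalance concrete, dividing the two figures quoted above gives the arithmetic intensity a kernel must sustain before the floating point units, rather than the memory system, become the bottleneck:

\[
\frac{4 \times 10^{12}\ \text{operations/s}}{5 \times 10^{10}\ \text{values/s}} \approx 80\ \text{floating point operations per value loaded from memory},
\]

so a method whose kernels perform substantially fewer than about 80 operations per loaded value is limited by memory traffic, and its operation count per degree of freedom says little about its actual time to solution.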

Finally, I will introduce the OCCA API that my team is developing as a thin portability layer that enables our simulation codes to be threaded using OpenMP, OpenCL, or CUDA, selected dynamically at runtime. This flexibility lets us include the threading model as an additional search direction when we optimize the simulation codes for a given processor. I will compare the performance of the OCCA-based simulator on devices from multiple vendors under different threading models.
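
To make the idea of a runtime-selected threading model concrete, the following minimal C++ sketch dispatches a single kernel through a backend chosen when the program starts. It illustrates only the dispatch concept: the names (Backend, parse_backend, axpy) are hypothetical and do not reproduce the OCCA API, and a full portability layer must also handle kernel generation and device memory for each backend, which this sketch omits.

// Illustrative sketch only; these names are hypothetical and are not OCCA's API.
// The point is the dispatch pattern: the same kernel body, with the threading
// backend chosen at run time rather than at compile time.
#include <cstdio>
#include <string>
#include <vector>

enum class Backend { Serial, OpenMP /* , OpenCL, CUDA in a real layer */ };

Backend parse_backend(const std::string& name) {
    return (name == "OpenMP") ? Backend::OpenMP : Backend::Serial;
}

// One kernel, dispatched according to the backend selected at run time.
void axpy(Backend b, int n, float a, const float* x, float* y) {
    switch (b) {
    case Backend::OpenMP:
        #pragma omp parallel for   // ignored when OpenMP is not enabled
        for (int i = 0; i < n; ++i) y[i] += a * x[i];
        break;
    case Backend::Serial:
        for (int i = 0; i < n; ++i) y[i] += a * x[i];
        break;
    }
}

int main(int argc, char** argv) {
    Backend b = parse_backend(argc > 1 ? argv[1] : "Serial");   // runtime choice
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    axpy(b, static_cast<int>(x.size()), 0.5f, x.data(), y.data());
    std::printf("y[0] = %g\n", y[0]);
    return 0;
}

Compiled with OpenMP enabled (e.g., -fopenmp), the OpenMP branch runs threaded; without it, the pragma is ignored and both branches run serially, so the same source serves every configuration.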