## Upcoming Events

## Decomposition Techniques for Parallel Solution of NLP Problems Based on Nonlinear Interior-Point Methods

Description: Nonlinear programming has proven to be an efficient tool for important large-scale inverse problems like optimization of dynamic systems, parameter estimation, and decision making under uncertainty. However, engineering and scientific needs continue to push the boundaries, and problems can become prohibitively large for a single workstation. Furthermore, computer chip manufacturers are focusing on parallel computing architectures, and future performance improvements demand algorithms that are capable of utilizing these modern parallel architectures.

Fortunately, most large-scale mathematical programming problems are inherently structured, and this structure can often be exploited to solve the problems more efficiently in parallel. Our work focuses on the development of internal decomposition strategies built upon nonlinear interior-point methods that exploit problem structure by parallelizing the linear algebra operations required by the algorithm.

First, focusing on block-structured problems with complicating variables only (e.g., multi-scenario problems), we show timing results for the explicit Schur-complement method, where the Schur complement is formed column by column and solved by direct factorization. Building on this, we also present an implicit Schur-complement decomposition that performs better on problems with more significant coupling. Here, the Schur complement is never explicitly formed; instead, the Schur-complement system is solved iteratively with a preconditioned conjugate gradient method. Second, we look at the inherent structure induced when optimizing systems governed by differential equations.
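The two approaches above can be sketched on a toy arrowhead KKT system. Everything below (block sizes, random matrices, the unpreconditioned CG loop) is an illustrative assumption, not the actual interior-point implementation; it only shows how the explicit and implicit Schur-complement solves relate.

```python
import numpy as np

# Toy arrowhead system with complicating variables d:
#   [ K_1            A_1 ] [x_1]   [r_1]
#   [      ...       ... ] [...] = [...]
#   [           K_n  A_n ] [x_n]   [r_n]
#   [ A_1' ... A_n'  K_0 ] [ d ]   [r_0]
# All data are random stand-ins for one interior-point linear system.

rng = np.random.default_rng(0)

def spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

nblk, nb, nc = 4, 6, 3                        # blocks, block size, coupling size
K  = [spd(nb) for _ in range(nblk)]           # independent scenario blocks
A  = [0.1 * rng.standard_normal((nb, nc)) for _ in range(nblk)]
K0 = spd(nc)
r  = [rng.standard_normal(nb) for _ in range(nblk)]
r0 = rng.standard_normal(nc)

# --- explicit Schur complement: S = K0 - sum_i A_i' K_i^{-1} A_i ------------
# Each block term is independent, so this loop is the parallelizable part;
# "column by column" refers to the nc right-hand sides in K_i Z = A_i.
S = K0.copy()
for Ki, Ai in zip(K, A):
    S -= Ai.T @ np.linalg.solve(Ki, Ai)

rhs = r0 - sum(Ai.T @ np.linalg.solve(Ki, ri) for Ki, Ai, ri in zip(K, A, r))
d = np.linalg.solve(S, rhs)                   # complicating variables
x = [np.linalg.solve(Ki, ri - Ai @ d) for Ki, Ai, ri in zip(K, A, r)]

# --- implicit variant: never form S, only its action on a vector ------------
def S_matvec(v):
    return K0 @ v - sum(Ai.T @ np.linalg.solve(Ki, Ai @ v) for Ki, Ai in zip(K, A))

def cg(matvec, b, tol=1e-12, maxit=500):      # plain (unpreconditioned) CG
    z = np.zeros_like(b)
    res = b - matvec(z)
    p, rs = res.copy(), res @ res
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        z += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return z

d_cg = cg(S_matvec, rhs)                      # matches the explicit solve
```

In the implicit variant each CG iteration costs one backsolve per block, again independent across blocks; the work on the talk's algorithms adds a preconditioner, which this sketch omits.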

Through the process of transcription (using a discretize-then-optimize approach), the dynamic optimization problem is formulated as a large-scale algebraic nonlinear programming problem in which finite elements are coupled through continuity equations. In this presentation, we describe a Schur-complement decomposition for this problem structure, show that significant parallel speedup is possible for problems with fewer state variables than algebraic variables, and point out directions for future research on these problems.
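A minimal sketch of the transcription step, assuming implicit Euler, a single state, and illustrative dynamics (none of which are from the talk itself): the ODE becomes one algebraic continuity equation per finite element, and in the square (pure simulation) case the block-bidiagonal coupling reduces to a nonlinear forward substitution.

```python
import numpy as np

# Implicit-Euler transcription of dx/dt = f(x): N continuity equations
#   x_{k+1} - x_k - h * f(x_{k+1}) = 0,   k = 0..N-1,
# which enter the full dynamic optimization problem as NLP equality
# constraints with a block-bidiagonal Jacobian. Here we only solve the
# simulation case: one scalar Newton solve per element, marching forward.
# The dynamics f and all data below are illustrative assumptions.

f  = lambda x: 1.0 - x**3
df = lambda x: -3.0 * x**2

x0, N, h = 0.0, 20, 0.1
x = np.empty(N + 1)
x[0] = x0
for k in range(N):
    z = x[k]                               # warm start from the left element
    for _ in range(30):                    # scalar Newton per element
        g = z - x[k] - h * f(z)            # continuity residual
        if abs(g) < 1e-12:
            break
        z -= g / (1.0 - h * df(z))         # derivative is 1 - h * f'(z)
    x[k + 1] = z
```

In the optimization setting the elements are additionally coupled through the objective and multipliers, which is what makes a Schur-complement decomposition over elements worthwhile.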

Finally, acceptable parallel efficiency requires that we support parallel evaluation of the model in addition to the linear algebra operations. Coupling these algorithms with Pyomo allows parallel construction and evaluation of the model. Furthermore, our dynamic optimization algorithm is currently integrated with the Modelica/Optimica- and Python-based JModelica.org modeling framework to provide users with an effective modeling environment. This platform transforms high-level Modelica/Optimica model descriptions into executables suitable for linking with numerical solvers, using compiler technology, symbolic manipulation, and code generation.