Optimization Research
Research at the intersection of big data, optimization, and explainability
Summary
Typal Academy's research efforts¹ focus on open-source development of optimization-based tools. Our primary specialty is creating tunable optimization models and algorithms, enabling high performance on particular applications when training data is available. Below we provide accessible, easy-to-use materials (e.g. slides, code, animations) for academics and practitioners.
Research Funding
Please share this resource with your students.
Learning to Optimize
Key Ideas
Data-driven optimization leverages powerful tools from both machine learning and optimization. In this setting, models "learn to optimize" (L2O).
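As a concrete illustration, the sketch below shows the unrolled flavor of L2O: run gradient descent for a fixed number of steps and tune its step size on a distribution of training problems. The least-squares tasks, step count, and PyTorch setup are illustrative assumptions, not a description of any particular paper below.

```python
# Unrolled L2O sketch (assumptions: PyTorch, random least-squares tasks,
# a single learned step size); illustrative, not any specific paper below.
import torch

K = 10                                          # fixed number of unrolled steps
alpha = torch.tensor(0.01, requires_grad=True)  # learnable step size
opt = torch.optim.Adam([alpha], lr=1e-3)

def sample_task():
    """Sample a random least-squares problem: min_x ||Ax - b||^2."""
    A = torch.randn(20, 5)
    b = torch.randn(20)
    return A, b

for _ in range(500):
    A, b = sample_task()
    x = torch.zeros(5)
    for _ in range(K):                   # unroll K gradient-descent steps
        x = x - alpha * (A.T @ (A @ x - b))
    loss = ((A @ x - b) ** 2).sum()      # performance after exactly K steps
    opt.zero_grad()
    loss.backward()                      # differentiate through the unroll
    opt.step()                           # tune alpha on the training tasks
```

After training, the learned step size is used at inference time by running the same K steps on new problems drawn from the same distribution.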
Why implicit L2O?
Many of the works below use implicit models. These are distinct from the recent explosion of L2O models constructed by unrolling an optimization algorithm for a fixed, finite number of steps. Standard feedforward networks prescribe a finite sequence of operations to perform; an implicit model instead defines its inference by optimality conditions (e.g., a fixed-point equation) rather than by an explicit computation. This is significant for two reasons: 1) any algorithm that solves the optimality conditions can be used to compute inferences, and 2) strong guarantees can be provided on outputs, since they can inherit any desired properties from optimization theory.
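For contrast with the unrolled sketch above, here is a minimal sketch of implicit inference: the output is defined by the fixed-point condition x = T(x, d) and computed by iterating to a tolerance rather than for a fixed number of steps. The specific operator, spectral normalization, and tolerance are assumptions chosen so a contraction guarantee holds; they are not taken from the papers below.

```python
# Implicit-model sketch (assumptions: PyTorch, a spectrally normalized
# layer so T is a 0.5-Lipschitz contraction in x); illustrative only.
import torch

class ImplicitModel(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Spectral normalization keeps ||W|| <= 1, so x -> 0.5*tanh(Wx + b) + d
        # is a contraction: the fixed point exists and is unique (Banach).
        self.W = torch.nn.utils.parametrizations.spectral_norm(
            torch.nn.Linear(dim, dim)
        )

    def T(self, x, d):
        return 0.5 * torch.tanh(self.W(x)) + d

    def forward(self, d, tol=1e-6, max_iter=500):
        x = torch.zeros_like(d)
        for _ in range(max_iter):          # any fixed-point solver works here;
            x_next = self.T(x, d)          # the output is defined by x = T(x, d),
            if (x_next - x).norm() < tol:  # not by the steps used to find it
                break
            x = x_next
        return x

model = ImplicitModel(dim=5)
x_star = model(torch.randn(5))             # inference defined implicitly
```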
L2O Papers
L2O Videos
Zero-Order Optimization
Key Ideas
Recently, we found a way to approximate proximal operators of weakly convex functions using direct oracle sampling, i.e., using only function evaluations. This enables a new class of optimization problems to be solved by embedding zero-order schemes inside optimization algorithms. Additionally, with sufficiently many samples, we can approximately minimize nonconvex functions globally.
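One way to build such an approximation, sketched below, is a Monte Carlo softmin estimate: sample points around x, weight them by exp(-f/δ), and average, so the estimate concentrates on prox_{tf}(x) as the smoothing parameter δ shrinks. The Gaussian sampler, parameter choices, and test function are illustrative assumptions rather than the exact estimator from the papers listed below.

```python
# Sampling-based proximal sketch (assumptions: NumPy, Gaussian sampling,
# softmin weighting); estimates prox_{tf}(x) from function values alone.
import numpy as np

def prox_sample(f, x, t=0.5, delta=0.1, n_samples=10_000, rng=None):
    """Zero-order estimate of prox_{tf}(x) = argmin_y f(y) + ||y - x||^2 / (2t).

    Samples y ~ N(x, t*delta*I) and returns a softmin-weighted average,
    which concentrates on the proximal point as delta -> 0.
    """
    rng = np.random.default_rng(rng)
    y = x + np.sqrt(t * delta) * rng.standard_normal((n_samples, x.size))
    fy = np.apply_along_axis(f, 1, y)
    w = np.exp(-(fy - fy.min()) / delta)   # stabilized softmin weights
    return (w[:, None] * y).sum(axis=0) / w.sum()

# Example: a weakly convex (hence nonconvex) test function.
f = lambda y: np.sum(np.abs(y)) - 0.25 * np.sum(y ** 2)
x = np.array([1.0, -2.0])
print(prox_sample(f, x))
```

Because the estimator needs only evaluations of f, it can be dropped into any proximal algorithm (e.g., proximal gradient) in place of an analytic proximal operator.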
ZOO Papers
Global Solutions to Nonconvex Problems by Evolution of HJ PDEs
ZOO Videos
¹ We are in the process of rebranding "Typal Research" as a part of "Typal Academy."