
Optimization with expensive and uncertain data - challenges and improvements


  • Coralia Cartis (University of Oxford)
  • Thursday 15 November 2018, 12:00-13:00
  • Nuffield G13

If you have a question about this talk, please contact Sergey Sergeev.

Real-life applications often require the optimization of nonlinear functions of several unknowns or parameters, where the function is the result of highly expensive and complex model simulations involving noisy data (such as climate or financial models, or chemical experiments), or the output of a black-box or legacy code that prevents the numerical analyst from looking inside to find out or calculate problem information such as derivatives. Classical optimization algorithms that use derivatives (steepest descent, Newton's methods) therefore often fail, or are entirely inapplicable, in this context. Efficient derivative-free optimization algorithms have been developed over the last 15 years in response to these pressing practical requirements. As even approximate derivatives may be unavailable, these methods must explore the landscape differently and more creatively. In state-of-the-art techniques, clouds of points are generated judiciously and sporadically updated to capture local geometries as inexpensively as possible; local function models around these points are built using techniques from approximation theory and carefully optimised over a local neighbourhood (a trust region) to give a better solution estimate.

In this talk, I will describe our improvements to, and implementations of, state-of-the-art model-based trust-region methods. In the context of the ubiquitous data-fitting/least-squares applications, we have developed an approach that uses local models that are flexible in the number of points and evaluations needed to construct them; this allows the algorithm to make progress from very little problem information when the latter is expensive to obtain. Furthermore, it employs restart strategies in the presence of noisy evaluations, which are an inexpensive alternative to sample averaging and regression, with superior performance.
I will also prove convergence of these methods, even when the noise is biased or when sampling may not be sufficiently accurate, so that accurate local models are only available occasionally. Although derivative-free optimisation methods can only provably find local optima, we illustrate that, due to their construction and applicability, these methods can offer a practical alternative to global optimisation solvers, with improved scalability. This work is joint with Lindon Roberts (Oxford), Katya Scheinberg (Lehigh), Jan Fiala (NAG Ltd) and Benjamin Marteau (NAG Ltd).
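To make the abstract's description concrete, here is a minimal sketch of a model-based trust-region iteration for derivative-free least squares: a linear model of the residual vector is interpolated from a small cloud of points, a Gauss-Newton-style step is restricted to the trust region, and the region grows or shrinks according to how well the model predicted the true decrease. This is an illustrative toy, not the speaker's DFO-LS implementation; the function `dfo_ls` and all parameter values are assumptions for this example.

```python
import numpy as np

def dfo_ls(r, x0, delta0=0.5, tol=1e-8, max_iter=200):
    """Toy model-based trust-region method for min_x ||r(x)||^2,
    using only residual evaluations (no derivatives).

    A linear model r(x+s) ~ r(x) + J s is interpolated from the
    n+1 points {x, x + delta*e_1, ..., x + delta*e_n}."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    delta = delta0
    rx = r(x)
    for _ in range(max_iter):
        # Build the model Jacobian by interpolating at a simplex of points.
        J = np.column_stack([(r(x + delta * e) - rx) / delta
                             for e in np.eye(n)])
        g = J.T @ rx                         # gradient of 0.5*||model||^2
        if np.linalg.norm(g) < tol:
            break
        # Regularised Gauss-Newton step, clipped to the trust region.
        s = np.linalg.solve(J.T @ J + 1e-12 * np.eye(n), -g)
        if np.linalg.norm(s) > delta:
            s *= delta / np.linalg.norm(s)
        r_new = r(x + s)
        pred = rx @ rx - np.linalg.norm(rx + J @ s) ** 2   # model decrease
        actual = rx @ rx - r_new @ r_new                   # true decrease
        rho = actual / pred if pred > 0 else -1.0
        if rho > 0.1:                        # accept; expand if very good
            x, rx = x + s, r_new
            if rho > 0.7:
                delta *= 2.0
        else:                                # reject; shrink trust region
            delta *= 0.5
        if delta < tol:
            break
    return x

# Usage: fit the Rosenbrock residuals; the minimiser is (1, 1).
rosen = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
sol = dfo_ls(rosen, np.array([-1.2, 1.0]))
```

The acceptance ratio `rho` compares actual to predicted decrease, the standard trust-region test; the restart and geometry-management strategies discussed in the talk replace the naive simplex rebuild used here.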

This talk is part of the Optimisation and Numerical Analysis Seminars series.



