
Using second-order information in training large-scale machine learning models.


  • Speaker: Katya Scheinberg (Lehigh University, Pennsylvania, USA)
  • Time: Wednesday 14 June 2017, 12:00-13:00
  • Venue: Nuffield G22

If you have a question about this talk, please contact Sergey Sergeev.

We will give a broad overview of recent developments in using deterministic and stochastic second-order information to speed up optimization methods for problems arising in machine learning. Specifically, we will show how such methods tend to perform well in the convex setting, but often fail to improve over simple methods, such as stochastic gradient descent, when applied to large-scale nonconvex deep learning models. We will discuss the difficulties faced by quasi-Newton methods that rely on stochastic first-order information and by Hessian-free methods that use stochastic second-order information. We will then give an overview of some recent theoretical results for optimization methods based on stochastic information.
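To make the contrast in the abstract concrete, here is a minimal sketch (not from the talk; all names and parameter values are illustrative) comparing a plain stochastic gradient step with a Newton-type step that uses a subsampled Hessian, on a toy convex least-squares problem where second-order information helps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimise f(w) = (1/2n) * ||Xw - y||^2.
n, d = 1000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

def grad(w, idx):
    """Mini-batch (stochastic) gradient on the rows in idx."""
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

def hess(idx):
    """Subsampled (stochastic) Hessian on the rows in idx."""
    Xb = X[idx]
    return Xb.T @ Xb / len(idx)

w_sgd = np.zeros(d)
w_newton = np.zeros(d)
for step in range(200):
    idx = rng.choice(n, size=64, replace=False)
    # First-order: plain SGD step with a fixed learning rate.
    w_sgd -= 0.1 * grad(w_sgd, idx)
    # Second-order: Newton-type step with a damped subsampled Hessian.
    H = hess(idx) + 1e-3 * np.eye(d)
    w_newton -= np.linalg.solve(H, grad(w_newton, idx))

print("SGD error:   ", np.linalg.norm(w_sgd - w_true))
print("Newton error:", np.linalg.norm(w_newton - w_true))
```

In this convex setting the second-order step adapts to the curvature and converges without learning-rate tuning; the difficulties the talk addresses arise when such steps are built from noisy gradients and Hessians in large-scale nonconvex models.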

This talk is part of the Optimisation and Numerical Analysis Seminars series.




Talks@bham is based on a system from the University of Cambridge.