
Reparametrizing gradient descent


If you have a question about this talk, please contact Dan Ghica.

Abstract: In this talk, we propose an optimization algorithm which we call norm-adapted gradient descent. We start with an overview of the main optimization algorithms used in machine learning, then describe norm-adapted descent and present experimental evidence of its efficacy. The new algorithm is similar to other gradient-based optimization algorithms such as Adam or Adagrad in that it adapts the learning rate of stochastic gradient descent at each iteration. However, rather than using statistical properties of observed gradients, norm-adapted gradient descent relies on a first-order estimate of the effect of a standard gradient descent update step, much like the Newton-Raphson method in many dimensions. Based on the experimental results, norm-adapted descent appears particularly strong in regression settings, but it is also capable of training classifiers.
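
The abstract does not spell out the update rule, but the Newton-Raphson analogy suggests one plausible reading: linearize the loss along the gradient direction and choose the step size that would drive that first-order estimate to zero, giving a learning rate of loss / ||grad||^2. The sketch below illustrates this reading on a toy least-squares problem; the rule and the name norm_adapted_step are assumptions made for illustration, not the speaker's definition of the algorithm.

```python
import numpy as np

def norm_adapted_step(params, loss, grad):
    """One hypothetical norm-adapted update (an assumption, not the talk's rule).

    First-order estimate: loss(params - eta * grad) ~ loss - eta * ||grad||^2.
    Solving for the eta that zeroes this estimate gives eta = loss / ||grad||^2,
    a Newton-Raphson-like step size along the gradient direction.
    """
    grad_sq = np.dot(grad, grad)
    if grad_sq == 0.0:
        return params  # stationary point: nothing to do
    eta = loss / grad_sq
    return params - eta * grad

# Toy regression example: minimize 0.5 * ||X w - y||^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = np.zeros(3)
for _ in range(100):
    residual = X @ w - y
    loss = 0.5 * np.dot(residual, residual)
    grad = X.T @ residual
    w = norm_adapted_step(w, loss, grad)
print(w)  # approaches true_w on this well-conditioned problem
```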

Join Zoom Meeting https://bham-ac-uk.zoom.us/j/87640578230?pwd=L0ZWcm9EZEcvRmNYVmxSRFFlTW5DQT09

Meeting ID: 876 4057 8230 Passcode: 280754

This talk is part of the Theoretical computer science seminar series.
