On the worst-case performance of the optimization method of Cauchy for smooth strongly convex functions
If you have a question about this talk, please contact Sergey Sergeev.

We consider the Cauchy (or steepest descent) method with exact line search applied to a strongly convex function with Lipschitz continuous gradient. We establish the exact worst-case rate of convergence of this scheme, and show that this worst-case behavior is exhibited by a certain convex quadratic function. We also give a worst-case complexity bound for a noisy variant of the gradient descent method. Finally, we show that these results may be applied to study the worst-case performance of Newton's method for the minimization of self-concordant functions. The proofs are computer-assisted, and rely on the resolution of semidefinite programming performance estimation problems, as introduced in the paper [Y. Drori and M. Teboulle. Performance of first-order methods for smooth convex minimization: a novel approach. Mathematical Programming, 145(1-2):451-482, 2014].

Joint work with F. Glineur and A. B. Taylor.

This talk is part of the Optimisation and Numerical Analysis Seminars series.
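
As a point of reference (not part of the talk materials), the sketch below illustrates the steepest descent method with exact line search on a strongly convex quadratic f(x) = ½ xᵀAx − bᵀx, for which the exact line-search step has the closed form t = (gᵀg)/(gᵀAg). The matrix, right-hand side, and starting point are illustrative assumptions, not data from the talk.

```python
import numpy as np

def steepest_descent_exact_ls(A, b, x0, n_iter=50):
    """Steepest descent with exact line search for f(x) = 0.5*x'Ax - b'x,
    with A symmetric positive definite (minimal sketch, illustrative only)."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        g = A @ x - b                # gradient of the quadratic
        denom = g @ (A @ g)
        if denom == 0.0:             # zero gradient: already at the minimizer
            break
        t = (g @ g) / denom          # exact minimizer of f(x - t*g) over t
        x = x - t * g
    return x

if __name__ == "__main__":
    # Hypothetical ill-conditioned 2x2 example (condition number 100):
    # the iterates zig-zag, the classical behavior for quadratics that the
    # worst-case analysis in the talk makes exact.
    A = np.diag([1.0, 100.0])
    b = np.zeros(2)
    x0 = np.array([100.0, 1.0])
    print(steepest_descent_exact_ls(A, b, x0))  # approaches the minimizer [0, 0]
```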