
On the choice of loss functions and initializations for deep learning-based solvers for PDEs


  • Speaker: Anastasia Borovykh (Imperial College London)
  • Time: Tuesday 22 November 2022, 14:00-15:00
  • Venue: TBC.

If you have a question about this talk, please contact Hong Duong.

In this talk we will discuss several challenges that arise when solving PDEs with deep learning-based solvers. We will begin by defining the loss function for a general PDE and discuss how this choice of loss function, and specifically the weighting of its different terms, can impact the accuracy of the solution. We will show how to choose an optimal weighting that corresponds to accurate solutions. Next, we will focus on the approximation of the Hamilton-Jacobi-Bellman (HJB) partial differential equation associated with optimal stabilization in the nonlinear quadratic regulator problem. It is not obvious that the neural network will converge to the correct solution under an arbitrary initialisation; this is particularly relevant when the solution to the HJB PDE is non-unique. We will discuss a two-step learning approach in which the model is pre-trained on a dataset obtained from solving a state-dependent Riccati equation, and we show that in this way efficient and accurate convergence can still be obtained.
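To make the loss-weighting issue concrete, here is a minimal sketch of the standard setup for deep learning-based PDE solvers: the total loss is a weighted sum of an interior (residual) term and a boundary term, and the weights determine which term dominates training. The function and residual values below are illustrative assumptions for this page, not the weighting scheme proposed in the talk.

```python
import numpy as np

def weighted_pde_loss(residual_interior, residual_boundary, w_r=1.0, w_b=1.0):
    """Weighted sum of mean-squared PDE residual and boundary residual:
    L(theta) = w_r * mean(r_int^2) + w_b * mean(r_bnd^2).
    Choosing w_r and w_b is the weighting problem discussed in the talk."""
    l_r = np.mean(np.square(residual_interior))
    l_b = np.mean(np.square(residual_boundary))
    return w_r * l_r + w_b * l_b

# Illustrative residuals (placeholders): same network outputs, two weightings.
r_int = np.array([0.1, -0.2, 0.05])
r_bnd = np.array([0.3, -0.1])

loss_equal = weighted_pde_loss(r_int, r_bnd, w_r=1.0, w_b=1.0)
loss_bdry = weighted_pde_loss(r_int, r_bnd, w_r=1.0, w_b=10.0)
# Up-weighting the boundary term changes the loss landscape, and hence
# the minimiser the network converges to.
```

The point of the sketch is that the two weightings define different optimisation problems over the same residuals, which is why the choice of weights can change the accuracy of the learned solution.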

This talk is part of the Data Science and Computational Statistics Seminar series.


