Introduces iterative methods for linear equations, convergence criteria, gradient of quadratic forms, and classical force fields in complex atomistic systems.
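The themes above can be illustrated with a minimal sketch (not from the source; the matrix `A`, vector `b`, and tolerance are illustrative choices): the Jacobi iteration, a classical iterative method for linear systems, with a norm-based convergence criterion. Note the link to quadratic forms: for symmetric positive-definite `A`, the quadratic f(x) = ½xᵀAx − bᵀx has gradient ∇f(x) = Ax − b, so solving Ax = b minimizes f.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration: split A = D + R, then x_{k+1} = D^{-1} (b - R x_k)."""
    D = np.diag(A)          # diagonal entries of A
    R = A - np.diag(D)      # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:   # convergence criterion
            return x_new
        x = x_new
    return x

# Illustrative diagonally dominant system, for which Jacobi converges.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

The diagonal-dominance assumption is what guarantees convergence here; for a general matrix the iteration may diverge.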
Explores explicit stabilised Runge-Kutta methods and their application to Bayesian inverse problems, covering optimization, sampling, and numerical experiments.
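As a minimal illustration of an explicit Runge-Kutta method (the classical RK4 scheme, not a stabilised one; stabilised variants such as Runge-Kutta-Chebyshev methods add extra stages to enlarge the stability region), here is one step for an ODE y' = f(t, y), with an illustrative test problem:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order explicit Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = -y, y(0) = 1 up to t = 1; the exact solution is exp(-1).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```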
Explores gradient descent methods for smooth convex and non-convex problems, covering iterative strategies, convergence rates, and challenges in optimization.
Covers gradient descent in the scalar case, focusing on finding the minimum of a function by iteratively moving in the direction of the negative gradient.
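The scalar iteration can be sketched as follows (a minimal example, not from the source; the test function f(x) = (x − 3)², the step size, and the iteration count are illustrative choices):

```python
def grad_descent(dfdx, x0, eta=0.1, steps=100):
    """Repeatedly step in the direction of the negative gradient."""
    x = x0
    for _ in range(steps):
        x = x - eta * dfdx(x)   # x_{k+1} = x_k - eta * f'(x_k)
    return x

# f(x) = (x - 3)^2 has derivative f'(x) = 2 (x - 3) and its minimum at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
```

For this quadratic the error contracts by a constant factor per step, so the iterate approaches 3 geometrically; too large a step size `eta` would instead make the iteration diverge.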
Introduces Newton's method for solving non-linear equations iteratively, highlighting its fast (quadratic) convergence near a simple root but also its potential failure to converge for poorly chosen starting points.
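A minimal sketch of the iteration (the example equation x² − 2 = 0, tolerance, and starting point are illustrative choices, not from the source):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:       # residual-based stopping criterion
            return x
        x = x - fx / fprime(x)  # Newton update
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1; the positive root is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

The update divides by f'(x), so the method can fail near points where the derivative vanishes, one of the failure modes the summary alludes to.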