Covers optimization techniques in machine learning, focusing on convexity and on algorithms that guarantee efficient convergence to global minima.
Explores explicit stabilised Runge-Kutta methods and their application to Bayesian inverse problems, covering optimization, sampling, and numerical experiments.
Explores gradient descent methods for smooth convex and non-convex problems, covering iterative strategies, convergence rates, and challenges in optimization.
Explores convex optimization, emphasizing minimization of functions over a convex set and the role of continuous-time dynamics in analyzing convergence rates.
Explores Stochastic Gradient Descent with Averaging, comparing it with Gradient Descent, and discussing challenges in non-convex optimization and sparse recovery techniques.
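To make the gradient-descent and averaging topics above concrete, here is a minimal sketch of Stochastic Gradient Descent with Polyak-Ruppert averaging on a least-squares problem. The problem instance, step-size schedule, and variable names are illustrative assumptions, not taken from the lectures.

```python
import numpy as np

# Hypothetical least-squares instance: minimize f(x) = ||Ax - b||^2 / (2n).
# A, b, and the step-size schedule below are made up for this sketch.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

x = np.zeros(d)       # current SGD iterate
x_avg = np.zeros(d)   # running Polyak-Ruppert average of the iterates
for t in range(1, 5001):
    i = rng.integers(n)                  # sample one data point uniformly
    grad = (A[i] @ x - b[i]) * A[i]      # stochastic gradient at x
    x -= 0.5 / np.sqrt(t) * grad         # decaying step size ~ 1/sqrt(t)
    x_avg += (x - x_avg) / t             # incremental update of the average

# Compare the averaged iterate with the exact least-squares solution.
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_avg - x_star))
```

The averaged iterate `x_avg` typically lands much closer to the minimizer than the last raw iterate `x`, which is the motivation for averaging discussed above.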