Covers quantile regression, focusing on its formulation as a linear optimization problem for prediction, and discussing sensitivity to outliers and practical implementation.
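As a minimal sketch of the idea (not the lecture's own code), quantile regression replaces the squared error with the asymmetric "pinball" loss; minimizing it over a constant predictor recovers the empirical tau-quantile. The synthetic data and the grid search below are illustrative choices:

```python
import numpy as np

def pinball_loss(y, pred, tau):
    # Asymmetric pinball loss: residuals above the prediction are
    # weighted by tau, residuals below by (1 - tau).
    r = y - pred
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)

# Minimizing the pinball loss over a constant predictor recovers
# (approximately, up to grid resolution) the empirical tau-quantile.
tau = 0.9
grid = np.linspace(y.min(), y.max(), 2001)
losses = [pinball_loss(y, c, tau) for c in grid]
best = grid[int(np.argmin(losses))]
print(best, np.quantile(y, tau))  # the two values nearly coincide
```

In the full regression setting the same loss is minimized over linear predictors, which can be rewritten exactly as a linear program by splitting each residual into positive and negative parts.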
Discusses optimization techniques in machine learning, focusing on stochastic gradient descent and its applications in constrained and non-convex problems.
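A bare-bones sketch of stochastic gradient descent on unconstrained least squares may help fix ideas; the problem size, step size, and noiseless synthetic data are arbitrary choices for illustration, not the lecture's setup:

```python
import numpy as np

# Stochastic gradient descent on least squares:
# minimize (1/2n) * ||X w - y||^2 using one random sample per step.
rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true  # noiseless, so the minimizer is exactly w_true

w = np.zeros(d)
step = 0.01
for _ in range(20000):
    i = rng.integers(n)
    grad = (X[i] @ w - y[i]) * X[i]  # gradient on a single sample
    w -= step * grad

print(np.linalg.norm(w - w_true))  # small: SGD recovers w_true
```

With noisy data or a constrained/non-convex objective, the same loop is typically modified with a decaying step size or a projection after each update.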
Presents a new algorithm for optimal transport problems, demonstrating gains in speed and performance, with applications in domain adaptation and generative models.
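The specific algorithm is not detailed here; as background, the classical entropic-regularization baseline (Sinkhorn iterations) can be sketched in a few lines. The toy cost matrix, uniform marginals, and regularization strength below are illustrative assumptions:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=200):
    # Entropic-regularized optimal transport: alternately rescale the
    # rows and columns of K = exp(-C/eps) so that the coupling
    # P = diag(u) K diag(v) has prescribed marginals a and b.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Four points on a line, squared-distance cost, uniform marginals.
x = np.linspace(0.0, 1.0, 4)
C = (x[:, None] - x[None, :]) ** 2
a = b = np.full(4, 0.25)
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))  # both marginals are close to a, b
```

Smaller `eps` gives a coupling closer to the unregularized transport plan at the cost of slower convergence and worse numerical conditioning.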
Introduces Lasso regularization and its application to the MNIST dataset, emphasizing feature selection and practical exercises on gradient descent implementation.
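A small self-contained sketch of Lasso via proximal gradient descent (ISTA) illustrates the feature-selection effect; it uses synthetic sparse data rather than MNIST, and the regularization weight and step size are illustrative choices:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: shrinks entries toward zero
    # and sets small ones exactly to zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, step, n_iter=2000):
    # ISTA: a gradient step on the least-squares term followed by
    # soft-thresholding for the l1 penalty.
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / len(y)
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:3] = [2.0, -3.0, 1.5]  # only 3 of 50 features matter
y = X @ w_true + 0.1 * rng.normal(size=200)

w = lasso_ista(X, y, lam=0.1, step=0.1)
print(np.count_nonzero(np.abs(w) > 1e-8))  # most coordinates end up exactly zero
```

The soft-thresholding step is what makes the l1 penalty produce exact zeros, in contrast to plain gradient descent on a ridge penalty.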
Covers the basics of optimization, including historical perspectives, mathematical formulations, and practical applications in decision-making problems.
Explores the trade-off between complexity and risk in machine learning models, the benefits of overparametrization, and the implicit bias of optimization algorithms.
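One concrete instance of implicit bias can be demonstrated numerically: for an underdetermined least-squares problem, gradient descent started at zero converges to the minimum l2-norm interpolating solution, the one the pseudoinverse computes. The dimensions and step size below are illustrative assumptions:

```python
import numpy as np

# Overparametrized least squares: d > n, so infinitely many w satisfy
# X w = y. Gradient descent from w = 0 stays in the row space of X and
# therefore converges to the minimum-norm interpolator.
rng = np.random.default_rng(0)
n, d = 20, 100
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

w = np.zeros(d)
step = 0.01
for _ in range(5000):
    w -= step * X.T @ (X @ w - y) / n

w_minnorm = np.linalg.pinv(X) @ y  # minimum l2-norm solution
print(np.linalg.norm(w - w_minnorm))  # essentially zero
```

No explicit regularizer appears in the loop; the norm-minimizing behavior comes entirely from the initialization and the algorithm's dynamics.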