Covers the concept of gradient descent in the scalar case, focusing on finding the minimum of a function by iteratively moving in the direction of the negative gradient.
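As a concrete illustration of that idea, here is a minimal sketch of scalar gradient descent in Python; the quadratic objective f(x) = (x - 3)^2, the step size, and the name grad_descent_scalar are illustrative choices, not details taken from the material itself.

```python
# Minimal scalar gradient descent: repeatedly step against the derivative.
# Illustrative example, assuming the objective f(x) = (x - 3)^2.

def grad_descent_scalar(grad, x0, eta=0.1, steps=100):
    """Iterate x_{t+1} = x_t - eta * grad(x_t) and return the final iterate."""
    x = x0
    for _ in range(steps):
        x = x - eta * grad(x)
    return x

if __name__ == "__main__":
    # f(x) = (x - 3)^2 has derivative f'(x) = 2 * (x - 3) and minimizer x* = 3.
    x_star = grad_descent_scalar(lambda x: 2 * (x - 3), x0=0.0)
    print(round(x_star, 4))  # close to 3.0
```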
Explores optimization methods for training machine learning models, such as gradient descent and subgradient methods, including advanced techniques like the Adam optimizer.
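For the Adam optimizer mentioned above, the sketch below shows the standard update with bias-corrected first- and second-moment estimates of the gradient; the quadratic test objective, the name adam_step, and the hyperparameter values are illustrative assumptions rather than details from the material.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update using bias-corrected moment estimates of the gradient."""
    m = beta1 * m + (1 - beta1) * grad          # running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2     # running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)                # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative use: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([5.0, -3.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, alpha=0.01)
print(theta)  # approaches [0, 0]
```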
Discusses optimization techniques in machine learning, focusing on stochastic gradient descent and its applications in constrained and non-convex problems.
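One common way to apply stochastic gradient descent to a constrained problem is projected SGD, where each stochastic step is followed by a projection back onto the feasible set; whether this is the exact scheme covered here is an assumption, and the least-squares objective, norm-ball constraint, helper names, and synthetic data below are purely illustrative.

```python
import numpy as np

def project_to_ball(w, radius=1.0):
    """Euclidean projection onto the L2 ball {w : ||w|| <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def projected_sgd(X, y, radius=1.0, eta=0.1, epochs=5, seed=0):
    """Least-squares fit under a norm constraint: take a stochastic gradient
    step on one example, then project the iterate back onto the feasible set."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i^T w - y_i)^2
            w = project_to_ball(w - eta * grad, radius)
    return w

# Synthetic data for illustration only (not from the source material).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)
print(projected_sgd(X, y))  # stays inside the unit ball by construction
```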
Discusses stochastic gradient descent and its application to non-convex optimization, focusing on convergence rates and the challenges non-convexity raises in machine learning.
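Convergence of SGD on non-convex objectives is usually stated in terms of gradient norms rather than distance to a global minimum; one commonly cited guarantee, which may or may not match the exact statement in this material, is the following (assuming an L-smooth objective, unbiased stochastic gradients with variance at most sigma^2, and a step size on the order of 1/sqrt(T)).

```latex
% Standard non-convex SGD guarantee under the assumptions stated above:
% after T steps, the best iterate is an approximate stationary point.
\min_{t \le T} \; \mathbb{E}\!\left[\lVert \nabla f(x_t) \rVert^2\right]
\;=\; \mathcal{O}\!\left(\frac{1}{\sqrt{T}}\right)
```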
Covers optimization in machine learning, focusing on gradient descent for linear and logistic regression, stochastic gradient descent, and practical considerations.
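To make the logistic-regression case concrete, here is a minimal SGD sketch on the logistic (cross-entropy) loss; the synthetic data, learning rate, and the helper names sigmoid and sgd_logistic_regression are illustrative assumptions, not details drawn from the material.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic_regression(X, y, eta=0.1, epochs=20, seed=0):
    """Plain SGD on the logistic loss, one training example per update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            p = sigmoid(X[i] @ w)             # predicted probability of class 1
            w -= eta * (p - y[i]) * X[i]      # per-example gradient of the log loss
    return w

# Illustrative synthetic data with labels in {0, 1} (not from the source material).
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
w = sgd_logistic_regression(X, y)
accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
print(w, accuracy)  # accuracy should be well above chance
```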
Explores deep-learning-based perception for autonomous vehicles, covering image classification, optimization methods, and the role of representation in machine learning.