Introduces the fundamentals of regression in machine learning, covering course logistics, key concepts, and the role of loss functions in evaluating how well a model's predictions match the data.
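As a minimal sketch of the loss-function idea from this lecture, the snippet below implements mean squared error, a standard regression loss (the specific loss used in the course is not stated here, so MSE is an illustrative assumption):

```python
# Illustrative example: mean squared error (MSE), a common regression loss.
def mse(y_true, y_pred):
    """Average squared difference between targets and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Predictions [2.0, 4.0] against targets [3.0, 5.0]: each error is 1.0,
# so the mean squared error is 1.0.
loss = mse([3.0, 5.0], [2.0, 4.0])
```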
Explores loss functions and gradient descent, showing how the step size governs optimization: too small and convergence is slow, too large and the iterates overshoot and diverge.
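The step-size trade-off can be sketched on the toy objective f(x) = x² (gradient 2x); the function, starting point, and step sizes below are illustrative choices, not taken from the lecture:

```python
# Gradient descent on f(x) = x**2, whose gradient is 2x and minimum is x = 0.
def gradient_descent(grad, x0, step, iters):
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

grad = lambda x: 2 * x

# A moderate step size contracts toward the minimum: x -> 0.8 * x each step.
small = gradient_descent(grad, x0=1.0, step=0.1, iters=50)

# Too large a step overshoots: x -> -1.2 * x each step, so |x| grows.
large = gradient_descent(grad, x0=1.0, step=1.1, iters=50)
```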
Covers optimization methods for training machine learning models, from gradient descent and subgradients (for non-differentiable losses) to adaptive techniques such as Adam.
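A compact sketch of the Adam update rule, applied to the same toy quadratic; the hyperparameters are the commonly cited defaults and the objective is an illustrative assumption, not the lecture's example:

```python
import math

# Sketch of Adam on f(x) = x**2 (gradient 2x), minimum at x = 0.
def adam(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, iters=300):
    x, m, v = x0, 0.0, 0.0
    for t in range(1, iters + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

x_star = adam(lambda x: 2 * x, x0=5.0)  # approaches the minimum at 0
```

Dividing by the running second-moment estimate rescales each step, which is what makes Adam "adaptive" compared with plain gradient descent.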
Analyzes how model complexity affects prediction quality through the bias-variance trade-off: expected error decomposes into bias squared plus variance, so minimizing it requires balancing the two.
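The decomposition can be verified numerically; the shrinkage estimator and simulation setup below are illustrative assumptions chosen to make the bias-variance split visible:

```python
import random

# Numerically verify MSE = bias**2 + variance for an estimator of a fixed
# parameter (no irreducible-noise term, since the target is deterministic).
def decompose(estimator, true_value, n_trials=20000, seed=0):
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_trials):
        sample = [true_value + rng.gauss(0, 1) for _ in range(5)]
        estimates.append(estimator(sample))
    mean_est = sum(estimates) / n_trials
    bias_sq = (mean_est - true_value) ** 2
    variance = sum((e - mean_est) ** 2 for e in estimates) / n_trials
    mse = sum((e - true_value) ** 2 for e in estimates) / n_trials
    return bias_sq, variance, mse

# A shrinkage estimator (half the sample mean) trades bias for lower variance.
bias_sq, variance, mse = decompose(lambda s: 0.5 * sum(s) / len(s),
                                   true_value=2.0)
```

Here shrinking the sample mean by half introduces bias (the estimator's expectation is 1.0, not 2.0) but cuts the variance by a factor of four, and the measured MSE equals bias² + variance up to floating-point error.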
Introduces Kernel Ridge Regression: the kernel trick, the Representer Theorem, feature spaces, the kernel matrix, prediction with kernels, and building new kernels from existing ones.
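A minimal end-to-end sketch of kernel ridge regression, assuming an RBF kernel and a tiny toy dataset (both illustrative choices): fitting solves (K + λI)α = y, and the Representer Theorem gives the prediction f(x) = Σᵢ αᵢ k(xᵢ, x).

```python
import math

def rbf(a, b, gamma=1.0):
    """RBF (Gaussian) kernel between two scalars."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit(xs, ys, lam=1e-3):
    """Solve (K + lam*I) alpha = y, where K is the kernel matrix."""
    K = [[rbf(a, b) + (lam if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    return solve(K, ys)

def predict(xs, alpha, x):
    """Representer Theorem: f(x) = sum_i alpha_i * k(x_i, x)."""
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, xs))

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]  # toy data from y = x**2
alpha = fit(xs, ys)
pred = predict(xs, alpha, 1.0)  # near 1.0 at a training point
```

With a small regularizer λ the fit nearly interpolates the training points; increasing λ smooths the prediction at the cost of training error.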