Explores data augmentation as a key regularization method in deep learning, covering techniques such as translations, rotations, and artistic style transfer.
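The geometric augmentations mentioned above (style transfer aside) can be sketched with plain NumPy. This is a minimal illustration, not any particular library's API; the `translate` and `augment` helpers are hypothetical names, and the shifts are restricted to whole pixels for simplicity.

```python
import numpy as np

def translate(img, dx, dy):
    """Shift an image by (dx, dy) whole pixels, zero-padding the vacated region."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    # Source/destination slices clipped to the image bounds.
    ys, yd = (slice(0, h - dy), slice(dy, h)) if dy >= 0 else (slice(-dy, h), slice(0, h + dy))
    xs, xd = (slice(0, w - dx), slice(dx, w)) if dx >= 0 else (slice(-dx, w), slice(0, w + dx))
    out[yd, xd] = img[ys, xs]
    return out

def augment(img, rng):
    """Apply a random translation, 90-degree rotation, and horizontal flip."""
    img = translate(img, dx=int(rng.integers(-2, 3)), dy=int(rng.integers(-2, 3)))
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return img

rng = np.random.default_rng(0)
img = np.arange(36.0).reshape(6, 6)          # toy 6x6 "image"
batch = [augment(img, rng) for _ in range(4)]  # four augmented views of one image
```

Each call produces a new label-preserving view of the same image, which is what gives the regularization effect: the network sees many plausible variants of each training example.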
Explores the time-varying Kalman filter, state estimation, challenges in conditioning on measured outputs, and the importance of affine transformations.
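One recursion of the time-varying Kalman filter, with the measurement update (conditioning on the measured output) separated from the time update, can be sketched as follows. This is a generic textbook-style sketch, not code from the source; the scalar system at the bottom is an invented example.

```python
import numpy as np

def kalman_step(x_hat, P, y, A, C, W, V):
    """One predict/update step for x_{t+1} = A_t x_t + w_t, y_t = C_t x_t + v_t,
    with process noise w_t ~ N(0, W_t) and measurement noise v_t ~ N(0, V_t)."""
    # Measurement update: condition the state estimate on y_t.
    S = C @ P @ C.T + V                 # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_meas = x_hat + K @ (y - C @ x_hat)
    P_meas = P - K @ C @ P
    # Time update: propagate through the (possibly time-varying) dynamics.
    x_next = A @ x_meas
    P_next = A @ P_meas @ A.T + W
    return x_next, P_next

# Toy scalar example: track a slowly drifting state from noisy measurements.
rng = np.random.default_rng(1)
A = np.array([[1.0]]); C = np.array([[1.0]])
W = np.array([[0.01]]); V = np.array([[1.0]])
x_hat, P = np.zeros(1), np.eye(1)
x_true = 0.0
for t in range(200):
    x_true = A[0, 0] * x_true + 0.1 * rng.standard_normal()
    y = np.array([x_true]) + rng.standard_normal(1)
    x_hat, P = kalman_step(x_hat, P, y, A, C, W, V)
```

Because both updates are affine in the estimate and the measurement, the conditional distribution of the state stays Gaussian, so the mean and covariance recursion above is exact.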
Explores the theory and applications of convex optimization, covering topics such as log-determinant function, affine transformations, and relative entropy.
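Two of the listed objects can be checked numerically: concavity of the log-determinant on positive definite matrices, and nonnegativity of relative entropy. The sketch below is an illustration of those standard facts, not material from the source; the matrices and distributions are invented test data.

```python
import numpy as np

def logdet(X):
    """log det X for symmetric positive definite X (slogdet avoids overflow)."""
    sign, ld = np.linalg.slogdet(X)
    assert sign > 0, "X must be positive definite"
    return ld

def rel_entropy(p, q):
    """Relative entropy (KL divergence) D(p || q) = sum_i p_i log(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(2)
# Concavity: logdet(t*A + (1-t)*B) >= t*logdet(A) + (1-t)*logdet(B) on the PD cone.
M1, M2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A = M1 @ M1.T + np.eye(4)   # random positive definite matrices
B = M2 @ M2.T + np.eye(4)
t = 0.3
lhs = logdet(t * A + (1 - t) * B)
rhs = t * logdet(A) + (1 - t) * logdet(B)
# Relative entropy is nonnegative, zero iff p == q (Gibbs' inequality).
p = np.array([0.5, 0.3, 0.2]); q = np.array([0.25, 0.25, 0.5])
```

Note that log-det concavity is preserved under affine maps of the matrix argument, one reason affine transformations recur throughout convex analysis.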