Explores perception in deep learning for autonomous vehicles, covering image classification, optimization methods, and the role of representation in machine learning.
Provides an overview of Natural Language Processing, focusing on transformers, tokenization, and self-attention mechanisms for effective language understanding and generation.
Introduces a functional framework for deep neural networks with adaptive piecewise-linear splines, focusing on biomedical image reconstruction and the challenges of deep splines.
Covers the foundational concepts of deep learning and the Transformer architecture, focusing on neural networks, attention mechanisms, and their applications in sequence modeling tasks.
Explores the learning dynamics of deep neural networks using linear networks as an analytical tool, covering two-layer and multi-layer networks, self-supervised learning, and the benefits of decoupled initialization.
Explores neural networks' ability to learn features and make linear predictions, emphasizing that sufficient data quantity is critical for effective performance.