Neural Networks: Multilayer Perceptrons
Covers Multilayer Perceptrons, artificial neurons, activation functions, matrix notation, flexibility, regularization, regression, and classification tasks.

Universal Approximation Theorem: MLP
Covers Multi-Layer Perceptrons (MLP) and their application from classification to regression, including the Universal Approximation Theorem and challenges with gradients.
Neural Networks Optimization
Explores neural network optimization, including backpropagation, batch normalization, weight initialization, and hyperparameter search strategies.
Regularized Cross-Entropy Risk
Explores the regularized cross-entropy risk in neural networks, covering training processes and challenges in deep networks.

Deep Learning Fundamentals
Introduces deep learning fundamentals, covering data representations, neural networks, and convolutional neural networks.

MLPs: Multi-Layer Perceptrons
Introduces Multi-Layer Perceptrons (MLPs) and covers logistic regression, reformulation, gradient descent, AdaBoost, and practical applications.

Deep Learning Fundamentals
Introduces deep learning, from logistic regression to neural networks, emphasizing the need for handling non-linearly separable data.