EE-411: Fundamentals of inference and learning
Lectures in this course (30)
Fundamentals of Inference and Learning
Covers the theory of statistics, inference, and machine learning with practical exercises in Python.
All of Probability: Notations and PDF
Covers the fundamentals of probability theory, including notations for joint and conditional distributions.
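As a quick illustration of the notation this lecture introduces (the exact conventions used in class may differ), the joint, marginal, and conditional distributions of two random variables are related by

    \[
    p(x, y) = p(x \mid y)\, p(y), \qquad
    p(x) = \int p(x, y)\, \mathrm{d}y, \qquad
    p(y \mid x) = \frac{p(x \mid y)\, p(y)}{p(x)} .
    \]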
Statistical Learning Theory: Conclusions on Deep Learning
Covers the conclusions on deep learning and an introduction to statistical learning theory.
Statistical Learning Theory: Crash Course
Offers a crash course on statistical learning theory, covering PAC-learning and loss functions.
All of Probability: Generating Functions and Cumulants
Covers the moment generating function, cumulants, and the Laplace and Fourier transforms in probability theory.
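A hedged worked example of these objects (not taken from the lecture itself): for a Gaussian X ~ N(mu, sigma^2), the moment generating function and the cumulant generating function are

    \[
    M_X(t) = \mathbb{E}\big[e^{tX}\big] = \exp\!\Big(\mu t + \tfrac{1}{2}\sigma^2 t^2\Big), \qquad
    K_X(t) = \log M_X(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2 ,
    \]

so the first two cumulants are \kappa_1 = \mu and \kappa_2 = \sigma^2, and all higher cumulants vanish, which is one way to characterize the Gaussian.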
Conclusions on Statistical Learning Theory
Explores conclusions from statistical learning theory, emphasizing function complexity, generalization, and the bias-variance trade-off.
All of Probability: Basic Bounds, LLN & CLT
Introduces basic bounds, LLN, and CLT in probability theory, emphasizing convergence to normal distribution.
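A minimal Python sketch (illustrative, not course material; the uniform distribution and sample sizes are arbitrary choices) showing both results numerically:

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 1000, 10000
    # Uniform(0, 1) has mean 1/2 and variance 1/12.
    means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)
    # LLN: the sample means concentrate around the true mean 1/2.
    print(f"max |mean - 1/2| over trials: {np.abs(means - 0.5).max():.3f}")
    # CLT: the standardized means are approximately N(0, 1).
    z = np.sqrt(n) * (means - 0.5) / np.sqrt(1.0 / 12.0)
    print(f"standardized: mean {z.mean():+.3f}, variance {z.var():.3f}")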
Unsupervised learning: Eckart-Young-Mirsky theorem and intro to PCA
Introduces the Eckart-Young-Mirsky theorem (the best low-rank approximation result behind PCA) and PCA for unsupervised learning and data visualization.
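A small numpy sketch of the theorem's content (illustrative; the matrix here is random): truncating the SVD gives the best rank-k approximation in Frobenius norm, with error determined by the discarded singular values.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 30))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 5
    # Best rank-k approximation: keep the k largest singular values.
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    # Eckart-Young-Mirsky: Frobenius error = sqrt(sum of discarded s_i^2).
    print(np.linalg.norm(A - A_k, "fro"), np.sqrt((s[k:] ** 2).sum()))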
All of Probability: LLN, CLT, Chernoff and PAC bound
Covers the Law of Large Numbers, Central Limit Theorem, Chernoff bounds, and PAC bounds in probability theory.
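One standard instance of the kind of bound this lecture builds up to (a sketch, stated for i.i.d. losses in [0, 1]): Hoeffding's inequality gives

    \[
    \Pr\big( \bar{X}_n - \mathbb{E}[X] \ge \varepsilon \big) \le e^{-2 n \varepsilon^2},
    \]

and a union bound over a finite hypothesis class \mathcal{H} then yields the PAC-style guarantee that, with probability at least 1 - \delta, every h \in \mathcal{H} satisfies

    \[
    R(h) \le \hat{R}_n(h) + \sqrt{\frac{\log|\mathcal{H}| + \log(1/\delta)}{2n}} .
    \]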
Maximum Likelihood, MSE, Fisher Information, Cramér-Rao Bound
Explains maximum likelihood estimation, MSE, Fisher information, and Cramér-Rao bound in statistical inference.
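A hedged worked example tying these together (a Bernoulli model, not necessarily the one used in class): for X ~ Bernoulli(p), the per-sample log-likelihood is \ell(p) = x \log p + (1 - x) \log(1 - p), so the Fisher information and the Cramér-Rao bound for n i.i.d. samples are

    \[
    I(p) = \mathbb{E}\Big[\big(\partial_p \ell(p)\big)^2\Big] = \frac{1}{p(1-p)}, \qquad
    \operatorname{Var}(\hat{p}) \ge \frac{1}{n\, I(p)} = \frac{p(1-p)}{n}
    \]

for any unbiased estimator \hat{p}. The sample mean attains this bound, so here the maximum likelihood estimator is efficient.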
PCA and Kernel PCA
Explains how PCA eliminates dimensions by finding principal components with most variation and compares PCA with Kernel PCA.
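A short scikit-learn sketch of the comparison (an illustration under arbitrary choices; gamma=10 is a hand-picked kernel width, not a course-recommended value):

    from sklearn.datasets import make_circles
    from sklearn.decomposition import PCA, KernelPCA

    X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
    # Linear PCA only rotates the axes by variance; it cannot unfold the rings.
    Z_lin = PCA(n_components=2).fit_transform(X)
    # Kernel PCA with an RBF kernel works in a feature space where the two
    # rings become (close to) linearly separable.
    Z_rbf = KernelPCA(n_components=2, kernel="rbf", gamma=10.0).fit_transform(X)
    print(Z_lin.shape, Z_rbf.shape)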
Supervised Learning Intro: MaxL Efficiency
Covers supervised learning efficiency, maximum likelihood (MaxL), unbiased estimators, MSE calculation, and large datasets.
Denoising with Auto-encoder & Gaussian Mixture Model
Covers denoising with auto-encoders and with Gaussian mixture models.
Gaussian Mixture Clustering: Expectation-Maximization Algorithms
Explores the Expectation-Maximization algorithm for Gaussian mixture clustering, its challenges, and its practical implementation.
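A compact numpy sketch of EM for a two-component 1-D Gaussian mixture (illustrative; the synthetic data and initializations are arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic 1-D data drawn from two Gaussians.
    x = np.concatenate([rng.normal(-2.0, 0.7, 300), rng.normal(3.0, 1.0, 200)])

    # Initial guesses for the mixture weights, means, and variances.
    w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

    def gauss(x, mu, var):
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

    for _ in range(100):
        # E-step: posterior responsibility of each component for each point.
        r = w * gauss(x[:, None], mu, var)        # shape (n, 2)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate the parameters from the soft assignments.
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    print(w, mu, var)   # roughly recovers weights (0.6, 0.4) and means (-2, 3)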
Validation and k-Nearest Neighbors Method
Introduces supervised learning concepts and the k-Nearest Neighbors method for classification and regression tasks.
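A minimal scikit-learn sketch of using validation to pick k (the dataset and the grid of k values are arbitrary illustration choices):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    # Score each candidate k with 5-fold cross-validation.
    for k in (1, 3, 5, 9, 15):
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
        print(f"k={k:2d}  mean CV accuracy={acc:.3f}")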
Generative Models: Crash Course
Offers a crash course on generative models, covering exponential families, sampling methods, and the Metropolis algorithm.
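A bare-bones Python sketch of the Metropolis algorithm (illustrative; the bimodal target and proposal width are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)

    def log_p(x):
        # Unnormalized log-density of a bimodal 1-D target.
        return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

    x, samples = 0.0, []
    for _ in range(20000):
        prop = x + rng.normal(0.0, 1.0)     # symmetric random-walk proposal
        # Accept with probability min(1, p(prop) / p(x)).
        if np.log(rng.uniform()) < log_p(prop) - log_p(x):
            x = prop
        samples.append(x)

    samples = np.array(samples[5000:])      # discard burn-in
    print(samples.mean(), samples.std())    # near 0, spread over both modes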
Supervised Learning with kNN: Regression Model
Covers a simple mathematical model for supervised learning with k-nearest neighbors in regression.
Crash course on ensemble methods: Bagging and Boosting
Covers ensemble methods, AdaBoost, and setting weights for classifiers.
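For reference, the AdaBoost weight updates this lecture refers to (standard form, with labels y_i in {-1, +1} and weak learner h_t of weighted error \epsilon_t):

    \[
    \alpha_t = \tfrac{1}{2} \ln \frac{1 - \epsilon_t}{\epsilon_t}, \qquad
    w_i^{(t+1)} \propto w_i^{(t)} \exp\big( -\alpha_t\, y_i\, h_t(x_i) \big),
    \]

so misclassified points have their weights increased and the next weak learner focuses on them.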
Gradient Descent
Covers the concept of gradient descent in the scalar case, focusing on finding the minimum of a function by iteratively moving in the direction of the negative gradient.
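A minimal scalar example (illustrative; the quadratic objective and step size are arbitrary):

    # Gradient descent on f(x) = (x - 3)^2, whose gradient is f'(x) = 2 (x - 3).
    x, lr = 0.0, 0.1
    for _ in range(100):
        grad = 2.0 * (x - 3.0)
        x -= lr * grad      # step in the direction of the negative gradient
    print(x)                # converges to the minimizer x = 3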
Proximal gradient descent and intro to linear models
Covers proximal gradient descent and linear models in regression analysis.
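A short numpy sketch of proximal gradient descent (ISTA) on an l1-regularized least-squares problem (illustrative; the problem sizes and lambda are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(4)
    # Synthetic sparse regression problem: y = A @ w_true + noise.
    A = rng.standard_normal((100, 50))
    w_true = np.zeros(50)
    w_true[:5] = rng.standard_normal(5)
    y = A @ w_true + 0.01 * rng.standard_normal(100)

    lam = 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of the gradient
    w = np.zeros(50)
    for _ in range(500):
        z = w - step * (A.T @ (A @ w - y))  # gradient step on the smooth term
        # Proximal step for lam * ||w||_1: soft-thresholding.
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

    print(np.count_nonzero(np.abs(w) > 1e-6))   # a sparse solution survives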
Page 1 of 2