Explores linear regression from a statistical inference perspective, covering probabilistic models, ground truth, labels, and maximum likelihood estimators.
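The connection between the probabilistic model and the estimator can be illustrated with a minimal sketch. Assuming the standard Gaussian-noise model y = Xβ + ε with ε ~ N(0, σ²), the maximum likelihood estimator of β coincides with ordinary least squares; all data and coefficient values below are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Probabilistic model: y = X @ beta + noise, noise ~ N(0, sigma^2).
# Hypothetical ground-truth coefficients; the observed y are the labels.
beta_true = np.array([2.0, -1.0])
X = rng.normal(size=(200, 2))
y = X @ beta_true + rng.normal(scale=0.5, size=200)

# Under Gaussian noise, maximizing the likelihood over beta is
# equivalent to minimizing squared error, so the MLE is the OLS fit.
beta_mle, *_ = np.linalg.lstsq(X, y, rcond=None)

# The MLE of sigma^2 is the mean squared residual
# (note: no n - p degrees-of-freedom correction).
sigma2_mle = np.mean((y - X @ beta_mle) ** 2)
```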
Covers estimation, shrinkage, and penalization in statistics for data science, emphasizing the bias-variance trade-off in choosing an estimator.
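Shrinkage via penalization can be sketched with ridge regression, one standard example of a penalized estimator. This is a minimal illustration with hypothetical data: the penalty λ trades increased bias for reduced variance by pulling coefficients toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.5, 0.0, -2.0])
y = X @ beta_true + rng.normal(scale=1.0, size=100)

def ridge(X, y, lam):
    # Penalized least squares: argmin_b ||y - X b||^2 + lam * ||b||^2,
    # with closed form b = (X'X + lam I)^{-1} X'y.
    # Larger lam => more shrinkage => more bias, less variance.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)      # no penalty: unbiased but higher variance
b_shrunk = ridge(X, y, 50.0)  # heavy penalty: coefficients shrunk toward 0
```

The shrunken estimate always has smaller norm than the unpenalized one, which is the sense in which the penalty "shrinks" the fit.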
Explores overfitting, cross-validation, and regularization in machine learning, emphasizing how model complexity and regularization strength are chosen.
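One common way to select the regularization strength is k-fold cross-validation: fit on k-1 folds, score on the held-out fold, and pick the penalty with the lowest average held-out error. A minimal sketch with a hypothetical ridge model and made-up penalty grid:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:2] = [3.0, -3.0]  # only two informative features
y = X @ beta + rng.normal(size=n)

def cv_mse(X, y, lam, k=5):
    # Average held-out mean squared error of ridge regression at
    # penalty lam, estimated by k-fold cross-validation.
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for f in folds:
        tr = np.setdiff1d(idx, f)  # train on everything outside the fold
        b = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(p),
                            X[tr].T @ y[tr])
        errs.append(np.mean((y[f] - X[f] @ b) ** 2))
    return np.mean(errs)

# Too little regularization overfits; too much underfits.
# Cross-validation picks the penalty that generalizes best on this grid.
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=lambda l: cv_mse(X, y, l))
```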
Explores computing the density of states and Bayesian inference using importance sampling, demonstrating the lower variance and parallelizability of the proposed method.
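The importance-sampling idea can be sketched on a toy problem: estimating the normalizing constant of an unnormalized density by drawing from a tractable proposal and averaging density ratios. This is a generic illustration, not the specific method summarized above; the target and proposal below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unnormalized target density p~(x) = exp(-x^2 / 2); its true
# normalizing constant is sqrt(2 * pi) ~= 2.5066.
def p_unnorm(x):
    return np.exp(-0.5 * x**2)

# Proposal q = N(0, 2^2), deliberately wider than the target so the
# importance weights p~/q stay bounded.
sigma_q = 2.0
x = rng.normal(scale=sigma_q, size=200_000)
q = np.exp(-0.5 * (x / sigma_q) ** 2) / (sigma_q * np.sqrt(2 * np.pi))

# Importance-sampling estimate: Z ~= mean( p~(x_i) / q(x_i) ), x_i ~ q.
# Each draw is independent, so the average parallelizes trivially
# across workers, and a well-matched proposal keeps the variance low.
Z_hat = np.mean(p_unnorm(x) / q)
```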