Explores the integration of machine learning into discrete choice models, emphasizing the importance of theory constraints and hybrid modeling approaches.
Explores learning latent variable models in graphical structures, focusing on scenarios with incomplete samples and introducing a notion of distance between variables.
Covers Principal Component Analysis for dimensionality reduction, exploring its applications, its limitations, and the importance of choosing the right number of components.
Covers PCA and LDA for dimensionality reduction, explaining variance maximization, the underlying eigenvector problem, and the benefits of Kernel PCA for nonlinear data.
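The variance-maximization and eigenvector framing of PCA mentioned above can be sketched in a few lines of NumPy. This is an illustrative example, not material from the sessions themselves: the toy data, axis scales, and two-component projection are all assumptions chosen to make the effect visible.

```python
import numpy as np

# Hypothetical toy data: 100 samples in 3 dimensions, with variance
# deliberately concentrated along the first axis (scales 3.0, 1.0, 0.1).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([3.0, 1.0, 0.1])

# Center the data: PCA maximizes variance around the mean.
Xc = X - X.mean(axis=0)

# The principal components are the eigenvectors of the sample
# covariance matrix; their eigenvalues are the variances captured.
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order

# Sort descending so the first component captures the most variance.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the top-2 components: dimensionality reduction 3 -> 2.
Z = Xc @ eigvecs[:, :2]

# Fraction of total variance explained by each component.
explained = eigvals / eigvals.sum()
print(Z.shape)  # (100, 2)
```

Kernel PCA follows the same eigendecomposition idea but applies it to a centered kernel matrix instead of the covariance matrix, which is what lets it capture nonlinear structure.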