Introduces Hidden Markov Models, explaining the three basic problems (evaluation, decoding, and learning) and their algorithms — Forward-Backward, Viterbi, and Baum-Welch — with a focus on Expectation-Maximization.
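As a minimal sketch of the decoding problem mentioned above, the Viterbi algorithm can be written in a few lines of plain Python. The two-state weather HMM below (states, transition and emission probabilities) is a made-up toy model, not taken from the source:

```python
# Toy HMM: hidden weather states emit observed ice-cream counts ("1"-"3").
# All probabilities here are illustrative assumptions.
states = ["Hot", "Cold"]
start_p = {"Hot": 0.6, "Cold": 0.4}
trans_p = {"Hot": {"Hot": 0.7, "Cold": 0.3},
           "Cold": {"Hot": 0.4, "Cold": 0.6}}
emit_p = {"Hot": {"1": 0.2, "2": 0.4, "3": 0.4},
          "Cold": {"1": 0.5, "2": 0.4, "3": 0.1}}

def viterbi(obs):
    """Return (most likely state sequence, its probability) for obs."""
    # V[t][s] = (max prob of any path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path)), V[-1][last][0]

print(viterbi(["3", "1", "3"]))
```

Replacing `max` with a sum over predecessors would turn this into the forward pass of Forward-Backward; Baum-Welch then re-estimates the probability tables from the resulting expectations.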
Explores methods for information extraction, including traditional and embedding-based approaches, supervised learning, distant supervision, and taxonomy induction.
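A classic traditional (pattern-based) approach to taxonomy induction is Hearst-pattern extraction of is-a pairs; the sketch below uses a single simplified "X such as Y" regex, which is only a toy illustration of the idea:

```python
import re

# One simplified Hearst pattern: "<hypernym> such as <hyponym list>".
# Real systems use many patterns and proper NP chunking; this is a toy.
PATTERN = re.compile(r"(\w+) such as ((?:\w+, )*\w+(?: and \w+)?)")

def extract_isa(text):
    """Return (hyponym, hypernym) pairs matched by the pattern."""
    pairs = []
    for m in PATTERN.finditer(text):
        hypernym = m.group(1)
        hyponyms = re.split(r", | and ", m.group(2))
        pairs.extend((h, hypernym) for h in hyponyms)
    return pairs

print(extract_isa("We study languages such as Python, Java and Rust."))
```

Embedding-based and distantly supervised extractors replace the hand-written pattern with learned representations, but the output — typed relation tuples — has the same shape.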
Covers the basics of Natural Language Processing, including tokenization, part-of-speech tagging, and embeddings, and explores practical applications like sentiment analysis.
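To make the pipeline concrete, the sketch below combines a toy regex tokenizer with lexicon-based sentiment scoring — the simplest form of the sentiment-analysis application mentioned above. The word lists are illustrative assumptions, not a real sentiment lexicon:

```python
import re

# Tiny illustrative sentiment lexicon (assumed, not from a real resource).
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "awful", "hate"}

def tokenize(text):
    """Lowercase and split on anything that is not a letter or apostrophe."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment(text):
    """Count positive vs. negative tokens and return a coarse label."""
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this course, the examples are great"))
```

A real system would swap the lexicon count for a trained classifier over embeddings, but the tokenize-then-score structure is the same.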