Efficient Methods in Natural Language Processing
Related lectures (30)
Ethics in Natural Language Processing: Addressing Bias and Misinformation
Covers ethical considerations in NLP, focusing on bias, toxicity, and misinformation.
Prompting and Alignment
Explores prompting, alignment, and the capabilities of large language models for natural language processing tasks.
Modern NLP and Ethics in NLP
Delves into advancements and challenges in NLP, along with ethical considerations and potential harms.
Ethical Considerations in Natural Language Processing
Explores ethical challenges in NLP systems, including biases, toxicity, privacy, and disinformation.
Ethics in NLP
Discusses the ethical implications of NLP systems, focusing on biases, toxicity, and privacy concerns in language models.
Scaling Language Models: Efficiency and Deployment
Covers the scaling of language models, focusing on training efficiency and deployment considerations.
Words and Tokens: Lexical Level Overview
Explores words, tokens, and language models in NLP, covering challenges in defining them, lexicon usage, n-grams, and probability estimation.
Pretraining Sequence-to-Sequence Models: BART and T5
Covers the pretraining of sequence-to-sequence models, focusing on BART and T5 architectures.
Data-Driven Insights: NLP and AI Applications
Explores building operating systems for heterogeneous hardware, data movement efficiency, AI advancements, and NLP challenges.
Language Models: Fixed-context and Recurrent Neural Networks
Discusses language models, focusing on fixed-context neural models and recurrent neural networks.
Data Annotation: Collection and Biases in NLP
Addresses data collection, annotation processes, and biases in natural language processing.
BERT: Pretraining and Applications
Delves into BERT pretraining for transformers, discussing its applications in NLP tasks.
Sequence-to-Sequence Models: Overview and Applications
Covers sequence-to-sequence models, their architecture, applications, and the role of attention mechanisms in improving performance.
Model Analysis
Explores neural model analysis in NLP, covering evaluation, probing, and ablation studies to understand model behavior and interpretability.
Modern NLP: Introduction
Antoine Bosselut introduces Natural Language Processing and its challenges, advancements in neural models, and the course goals.
Neuro-symbolic Representations: Commonsense Knowledge & Reasoning
Delves into neuro-symbolic representations for commonsense knowledge and reasoning in natural language processing applications.
Introduction to Modern Natural Language Processing
Introduces the course on Modern Natural Language Processing, covering its significance, applications, challenges, and advancements in technology.
Contextual Representations: ELMO and BERT Overview
Covers contextual representations in NLP, focusing on ELMO and BERT architectures and their applications in various tasks.
Modern NLP: Data Collection, Annotation & Biases
Explores data annotation in NLP and the impact of biases on model fine-tuning.
Natural Language Processing: Understanding Transformers and Tokenization
Provides an overview of Natural Language Processing, focusing on transformers, tokenization, and self-attention mechanisms for effective language analysis and synthesis.