Lecture: Controlled Stochastic Processes
Related lectures (30)
Dynamic Programming: Optimal Control
Explores Dynamic Programming for optimal control, focusing on stability, stationary policy, and recursive solutions.
Diffusion Models
Explores diffusion models, focusing on generating samples from a distribution and the importance of denoising in the process.
Introduction to Reinforcement Learning: Key Concepts and Applications
Introduces reinforcement learning, covering its definitions, applications, and theoretical foundations, while outlining the course structure and objectives.
Markov Decision Processes: Foundations of Reinforcement Learning
Covers Markov Decision Processes, their structure, and their role in reinforcement learning.
Optimal Transport: Kantorovich Duality
Covers optimal transport and Kantorovich duality in real-life distribution problems.
Approximation Algorithms
Covers approximation algorithms for optimization problems, LP relaxation, and randomized rounding techniques.
Controlled Stochastic Processes
Explores controlled stochastic processes, dynamic programming, and the Machine Replacement Problem.
Generalization Error
Explores generalization error in machine learning, focusing on data distribution and hypothesis impact.
Set Cover: Integrality Gap
Explores the integrality gap concept in set cover and multiplicative weights algorithms.
Optimization Methods: Theory Discussion
Explores optimization methods, including unconstrained problems, linear programming, and heuristic approaches.
Sobolev Spaces in Higher Dimensions
Explores Sobolev spaces in higher dimensions, discussing derivatives, properties, and challenges with continuity.
Dynamic Programming: Optimal Decision Making
Explores dynamic programming for optimizing decision-making processes over time, using real-world examples like oil extraction and stock trading.
Value Iteration Acceleration: PID and Operator Splitting
Explores accelerating the Value Iteration algorithm using control theory and matrix splitting techniques to achieve faster convergence.
Sparsest Cut: Bourgain's Theorem
Explores Bourgain's theorem on sparsest cut in graphs, emphasizing semimetrics and cut optimization.
Infinite-Horizon LQ Control: Solution & Example
Explores Infinite-Horizon Linear Quadratic (LQ) optimal control, emphasizing solution methods and practical examples.
Deep Learning Modus Operandi
Explores the benefits of deeper networks in deep learning and the importance of over-parameterization and generalization.
Finite Element Method: Weak Solutions
Covers weak solutions in the finite element method, emphasizing continuity and the Cauchy-Schwarz inequality.
Nonlinear Model Predictive Control
Explores Nonlinear Model Predictive Control, covering stability, optimality, pitfalls, and examples.
Markov Chains: Transition Probabilities
Explores Markov chains, transition matrices, distribution, and random walks.
Asset Selling: Optimal Revenue Policy
Explores asset selling dynamics, optimal revenue policy, acceptance thresholds, and commodity price impact.
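Several of the lectures above (Controlled Stochastic Processes, Dynamic Programming: Optimal Decision Making, Asset Selling: Optimal Revenue Policy) revolve around solving a finite-horizon control problem by backward recursion. As an illustration only, here is a minimal sketch of the classic asset-selling problem solved by backward dynamic programming; the offer distribution and horizon are made-up toy values, not data from any of the lectures:

```python
# Finite-horizon asset selling via backward dynamic programming (sketch).
# Assumed model: offers are i.i.d. draws from a discrete distribution;
# at each stage the seller accepts the current offer or waits; the final
# offer must be accepted. The optimal policy is a stage-dependent threshold.

offers = [60.0, 80.0, 100.0, 120.0]   # possible offer values (toy data)
probs  = [0.25, 0.25, 0.25, 0.25]     # their probabilities
horizon = 5                           # number of selling opportunities

# The expected value of continuing at stage k is exactly the acceptance
# threshold: accept offer w at stage k iff w >= thresholds[k].
thresholds = [0.0] * horizon
cont = 0.0                            # continuation value, built backward
for k in range(horizon - 1, -1, -1):
    thresholds[k] = cont              # at the last stage this is 0: accept anything
    # Bellman recursion: E[V_k(W)] = E[max(W, continuation value)]
    cont = sum(p * max(w, cont) for w, p in zip(offers, probs))

print(thresholds)  # thresholds shrink toward the deadline: [108.75, 105.0, 100.0, 90.0, 0.0]
```

The monotone decrease of the thresholds captures the intuition from the lecture blurb: the option to wait is worth less as the deadline approaches, so the seller's acceptance bar drops stage by stage.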