Explores training robots through reinforcement learning and learning from demonstration, highlighting challenges in human-robot interaction and data collection.
Explores perception in deep learning for autonomous vehicles, covering image classification, optimization methods, and the role of representation in machine learning.
Explores deep learning for autonomous vehicles, covering perception, action, and social forecasting in the context of sensor technologies and ethical considerations.
Explores trajectory forecasting in autonomous vehicles, focusing on deep learning models for predicting human trajectories in socially-aware transportation scenarios.
Delves into the training and applications of Vision-Language-Action models, emphasizing the role of large language models in robotic control and the transfer of web knowledge, and highlighting experimental results and future research directions.
Explores predictive models and trackers for autonomous vehicles, covering object detection, tracking challenges, neural network-based tracking, and 3D pedestrian localization.
Delves into physical and social factors in human-robot interaction, covering topics such as overloading joint torque estimation and adaptive control strategies.
Delves into using simulations for Human-Robot Interaction, covering learning from human expertise and preferences, user and system models, simulation results, and assisted drone landings.
Explores challenges and opportunities in vision-based robotic perception, covering topics like SLAM, place recognition, event cameras, and collaborative visual intelligence.