In this video, we train multi-agent navigation AI agents to collaborate in complex obstacle courses. We cover the basics of creating custom reinforcement learning environments, including how to design observation spaces, action spaces, and reward functions, as well as the basics of local coordinate systems (LCS) in agentic systems. We then look at actor-critic methods such as A2C and PPO and how to train agents with them. Finally, we discuss two multi-agent RL algorithms: Independent PPO (I-PPO) and the more advanced Multi-Agent PPO (MA-PPO). MA-PPO is inspired by MA-DDPG, a Centralized Training Decentralized Execution (CTDE) method. We learn why CTDE methods are well suited to training agents in multi-agent RL environments and how they can promote cooperative and emergent behaviours. Two minimal code sketches of these ideas appear below, after the timestamps.

The GitHub repo: https://github.com/avbiswas/navigation-mappo-rl

The longer code explainer video is available for Patreon members: https://www.patreon.com/posts/multi-agent-rl-145270524

Follow me on Twitter: https://x.com/neural_avb

To join our Patreon, visit: https://www.patreon.com/NeuralBreakdownwithAVB
Members get access to everything behind-the-scenes that goes into producing my videos, including code. Plus, it supports the channel in a big way and helps to pay my bills.

#machinelearning #reinforcementlearning #programming #devlog

Relevant videos:
- Intro to Reinforcement Learning - https://youtu.be/Qpx6WD0qekQ
- GRPO and reasoning LLMs - https://youtu.be/yGkJj_4bjpE
- RL Playlist - https://www.youtube.com/playlist?list=PLGXWtN1HUjPfays8_pu4nQOW47Q6pzaGP

Useful papers:
- An Introduction to Centralized Training for Decentralized Execution in Cooperative Multi-Agent Reinforcement Learning (https://arxiv.org/abs/2409.03052)
- PPO paper (https://arxiv.org/pdf/1707.06347)
- MARL in PyTorch (https://docs.pytorch.org/rl/main/tutorials/multiagent_ppo.html)
- MA-DDPG (https://arxiv.org/abs/1706.02275)

Timestamps:
0:00 - Intro
2:17 - Creating RL environments
6:23 - Local Coordinate Systems
8:30 - Rewards
10:24 - Actor-Critic Methods
12:36 - Training single-agent RL
13:38 - Independent PPO
15:40 - Non-stationary environments
16:40 - Centralized Training Decentralized Execution (CTDE)
17:36 - Multi-Agent PPO (MA-PPO)
19:25 - Results!
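
As a taste of the environment-design section, here is a minimal sketch of a custom navigation environment, assuming the gymnasium API. The NavEnv class, the 2-D point world, and all sizes and thresholds are illustrative placeholders, not the code from the video's repo.

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class NavEnv(gym.Env):
    """Toy environment: a single agent steers toward a goal on a 2-D plane."""

    def __init__(self):
        # Observation: agent (x, y) and goal (x, y) -> 4 floats.
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)
        # Action: a small continuous velocity command (dx, dy).
        self.action_space = spaces.Box(low=-0.1, high=0.1, shape=(2,), dtype=np.float32)
        self.pos = np.zeros(2, dtype=np.float32)
        self.goal = np.zeros(2, dtype=np.float32)

    def _obs(self):
        return np.concatenate([self.pos, self.goal]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = self.np_random.uniform(-1, 1, size=2).astype(np.float32)
        self.goal = self.np_random.uniform(-1, 1, size=2).astype(np.float32)
        return self._obs(), {}

    def step(self, action):
        self.pos = np.clip(self.pos + action, -1.0, 1.0)
        dist = np.linalg.norm(self.goal - self.pos)
        # Dense shaping reward: closer to the goal is better, with a bonus on arrival.
        reward = -dist + (10.0 if dist < 0.05 else 0.0)
        terminated = dist < 0.05
        return self._obs(), float(reward), bool(terminated), False, {}

The three design axes from the video map directly onto the class: the observation space, the action space, and the reward computed inside step().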
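
And here is a rough sketch of the CTDE idea behind MA-PPO, assuming PyTorch: each agent's actor acts from its own local observation, while a shared critic is trained on the concatenation of all agents' observations. The Actor/CentralCritic names and the network sizes are placeholders, not the MA-PPO implementation from the repo.

import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 2

class Actor(nn.Module):
    """Decentralized policy: local observation -> action parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))

    def forward(self, local_obs):
        return self.net(local_obs)

class CentralCritic(nn.Module):
    """Centralized value function: joint observation of all agents -> one value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_AGENTS * OBS_DIM, 128), nn.Tanh(), nn.Linear(128, 1))

    def forward(self, joint_obs):
        return self.net(joint_obs)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

obs = torch.randn(N_AGENTS, OBS_DIM)                          # one local observation per agent
actions = [actor(obs[i]) for i, actor in enumerate(actors)]   # execution: local information only
value = critic(obs.flatten())                                 # training: critic sees everything

Because the critic conditions on the joint observation, each agent's value estimate accounts for what its teammates are doing, which is what mitigates the non-stationarity that troubles I-PPO; at execution time only the actors are used, so the deployed policies stay decentralized.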