One of the most important developments in AI has been the MCP protocol. But six months later, it has become a huge problem. When we started, people only had two or three MCP servers running locally. MCP has since evolved into much more: people now literally run hundreds of MCP servers with thousands of tools at a time. Cloudflare noticed this first, and Anthropic followed with a post about the same problem. But Docker actually came up with a solution, tackling one of the most critical problems with MCP by introducing an entirely new way to use it: a dynamic mode that saves a huge number of tokens, speeds up your agents, and enables entirely new kinds of automations that I'm personally really looking forward to. Docker released an article on this in which they basically urge us to stop hard-coding our agents' environment. Now
what do they mean by that? Three questions. First, which MCP servers do we actually trust? Second, how do we avoid filling our context with tool definitions we might not even use? If you have a thousand tools available, you might only use two or three in a single chat. Third, how do agents discover, configure, and then use these tools efficiently and autonomously? I want you to focus on that second question. Anthropic also released a post about this, which we covered in a previous video, and we got a really positive response from people who wanted an implementation; Docker went ahead and built one. Before we move further, you need to know that Docker set up the whole infrastructure for this well before it
even became a problem. For that, you need to know about their MCP catalog, which lists verified MCP servers you can actually trust, and it's really easy to connect to: you just connect them in Docker. For example, I've connected Notion here. Right now I have two servers, and my MCP client, which most of the time is Claude Code, only connects to Docker; Docker then manages all my MCP servers. That entirely solves the first problem, which MCP servers to trust.

Now, to enable our agents to use these MCP servers dynamically, they've implemented the MCP gateway, which has pre-built tools for using the servers in the catalog autonomously. Essentially, you connect just one MCP server, and it has the full context of which tools are available in the catalog. I've connected two, and it knows which tool definitions to actually bring
into the context window, so your context window does not get bloated. For this to work, they added some new tools, including mcp-find, mcp-add, and mcp-remove, which find MCP servers in the catalog by name or description and, as I'll show you, guide the agent through adding them correctly. For example, I'm using my GitHub MCP here, telling it I want to search for some interesting repos. After I specify what kind of repos, it doesn't call the GitHub tool itself; it goes through the Docker MCP server, which calls the tool with the correct arguments and returns all the results. Now, notice one thing: the LLM is returning everything about the repos. The link, the stars, the description, even the date each repo was posted. Remember this, because it becomes important moving forward.
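To make that flow concrete, here is a rough conceptual sketch of the find-then-add sequence. This is not Docker's actual API: the real gateway exposes mcp-find and mcp-add as MCP tools the agent calls, so the `gateway` object and the tiny catalog below are invented stubs that only mimic the behavior.

```javascript
// Conceptual sketch of dynamic discovery (mcp-find -> mcp-add).
// Everything here is a stub; the real Docker MCP gateway exposes
// these as MCP tools, not as a JavaScript object.
const catalog = [
  { name: "github-official", description: "GitHub repos, issues, PRs" },
  { name: "notion", description: "Notion pages and databases" },
];

const gateway = {
  enabled: [], // servers whose tool definitions are in context

  // mcp-find: match catalog servers by name or description
  find(query) {
    return catalog.filter(
      s => s.name.includes(query) ||
           s.description.toLowerCase().includes(query)
    );
  },

  // mcp-add: only now do this server's tool definitions enter context
  add(name) {
    this.enabled.push(name);
    return `${name} enabled; its tool definitions are now in context`;
  },
};

const match = gateway.find("github")[0];
console.log(gateway.add(match.name));
// -> github-official enabled; its tool definitions are now in context
```

The key point is the `enabled` list: a server's tool definitions only enter the context window after mcp-add, never before.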
Now, moving on to dynamic tool selection. This is the most important part of the article, and it's what I meant by a new way of using MCP servers. Referencing the Anthropic post again: there are two places where Claude, or any AI agent, burns tokens. One is the tool definitions in the context window; the second is the intermediate tool results, the raw payloads that come back from MCP tool calls. All of the detail we pulled with the GitHub tool was returned into the context window; that's why Claude knows every small detail about the repos when all I wanted was each repo's description and link. At that rate it only takes a handful of tool calls, maybe 20 in my case, before the whole context window fills up.

This is one thing they've improved in the MCP gateway project: it only surfaces the tools that are actually useful. In my case, one way to save context would be to expose only the search-repo tool and not the other 40 tools that ship inside the GitHub MCP, because in this session that's the only one I want to use. But once you start selecting tools this way, it opens up a new range of possibilities, and that leads us into code mode. Cloudflare basically outlined how we've been using MCP wrong, and Docker is the first to implement the new approach. I've played around with it a lot, and I must say I'm really surprised by how well the execution turned out. The idea: by letting agents use MCP tools directly from code they write, Docker can provide code-mode tools that use MCP in a completely new way. So what does code mode do? It creates a JavaScript-enabled tool that can call other MCP tools.
This might seem really simple, but the examples I'm going to show you should clear it up. Before we dive into an implementation, there are other things to consider. Since this is code written by an agent, it's obviously untested and untrusted, so Docker runs it in a sandbox, and since they already provide containers, that was pretty much a no-brainer for them. This approach ends up offering three key benefits.

First, it's secure: that's the main benefit of sandboxing, and the code can't do any actual damage to your system. Second, there's the token and tool efficiency we've been talking about: the tools the code uses don't have to be sent to the model on every request; the model just needs to know about one new code-mode tool. Without code mode, if you're only using, say, three tools and running them repeatedly, the definitions of the other 47 tools ride along in context anyway. With code mode, the agent writes a custom analyze-my-repos tool using only the tools it actually needs, then references that single code-mode tool each time, saving all that other context. Third, there's state persistence: volumes manage how data is saved between tool calls, and that data is never sent to the model.

A very simple example of this is a data processing pipeline. Say we want to download a dataset: the dataset is downloaded and saved to the volume, and the model only learns that the download succeeded; it doesn't get flooded with 5 GB of data. Then, to process the first 10,000 rows, the code can just read from the volume and return the actual summary. Only the data that should reach the model, such as final results, summaries, error messages, or answers to questions, is transferred, and the context window stays clean.

Now, the reason I was searching those GitHub repositories is to discover new open-source tools to feature in my videos. What I normally do is run multiple calls with the GitHub repo-search tool, each with different keywords.
So I presented this to Claude Code, and it combined all those separate tool calls into a single tool that searches repos for whatever keywords I give it. You can see that even without code mode, Docker runs multiple queries, and that's what I wanted to fix. The tool it made was called "multiarch repos," and after creating it, it used the mcp-exec tool to run it; it gave me 29 unique repos from six different keywords. But the results came back directly in the response and the terminal, meaning everything landed inside the context window. To fix this, I told it to write everything to a file and give the model just the repo descriptions, no stars or anything else for that matter, and it changed the tool to write all of those results to a text file in my repository.
That way, if I want to look up something specific about one repository, I can do so by referencing the text file. There is one thing I'd like to see implemented: a way to save and reuse this tool. Right now, the only option is to save it manually as a file. After that, I asked it to search for the Notion MCP. Once connected, I asked if it could make a tool that outputs the GitHub search results directly into Notion instead of a text file, and again, using code mode, it made a GitHub-to-Notion tool that pastes the results into Notion. After running it, there were some small problems I had to fix, but essentially I now have this database in Notion. It's basically hard-coded: whatever query I provide, it inputs the results into the database under the different fields, and it even includes the date, so I can filter easily and see only the results I actually want.
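A sketch of the GitHub-to-Notion idea follows; `notionCreatePage` is a made-up stand-in for whatever page-creation tool the Notion MCP server actually exposes, with an in-memory array playing the part of the database.

```javascript
// Sketch only: push each repo into a Notion database row, returning
// just name + description to the model. The pages array and
// notionCreatePage are invented stand-ins for the Notion MCP.
const pages = []; // stands in for the Notion database

async function notionCreatePage(properties) {
  pages.push(properties); // real code would call the Notion MCP tool
  return { ok: true };
}

async function githubToNotion(repos) {
  const today = new Date().toISOString().slice(0, 10);
  for (const repo of repos) {
    await notionCreatePage({
      Name: repo.full_name,
      Description: repo.description,
      Stars: repo.stars,
      URL: repo.url,
      Date: today, // lets the database be filtered by date later
    });
  }
  // Only the short summary goes back to the model.
  return repos.map(r => `${r.full_name}: ${r.description}`);
}

githubToNotion([
  { full_name: "demo/mcp-kit", description: "MCP utilities", stars: 5, url: "https://example.com" },
]).then(lines => console.log(lines.join("\n"), `| rows: ${pages.length}`));
```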
The model only gets the name and the description of each GitHub repository it handles; everything else is saved in Notion. Honestly, if you just browse the catalog, you'll come away with at least one idea for MCP servers you could chain together into workflows like these, all while saving tokens and preserving your agent's performance.

Getting started is honestly pretty easy. You do need to update your Docker version, and if you still don't see these features, they might be disabled under beta features, so make sure the Docker MCP Toolkit is enabled. Other than that, you'll have your catalog, the new features are enabled by default, and all you have to do is connect your client. That brings us to the end of this video. If you'd like to support the channel and help us keep making videos like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.
Try the Docker MCP catalog for FREE here: https://dockr.ly/44m0J5R

With Docker's new dynamic approach to the Model Context Protocol (MCP), agents can finally use tools the way they were meant to: lightweight, efficient, and fully autonomous. Instead of bloating your context window with hundreds of unused tools, dynamic MCP lets your agent load only what it needs, when it needs it. Agents write code, execute it securely in sandboxes, persist state, and chain MCP servers together into powerful workflows, from GitHub search pipelines to Notion database writers. This video shows real demos using the GitHub MCP, dynamic tool selection, and code-mode tool creation, letting AI call tools inside tools while preserving your context for reasoning.