With the way AI coding is going, so many things are becoming automated. What's wrong with one more thing going out of our hands? LLMs got tools, and just like that, so much of what humans did was automated. With the Puppeteer MCP, we saw automated UI testing. Now Inngest has given us a monitoring layer that lets your coding agents become live debuggers of the code they generate. They're doing this by releasing an MCP for the Inngest dev server, which is basically a local version of their cloud platform. The platform lets you test all the functions you've built inside your agent and gives you a visual interface for everything, along with the different events that run. With this, you can directly ask AI agents like Claude Code or Cursor to do all the automated testing. If Vercel had something like this, their deployment and debugging would only take a single prompt.
For those who don't know, Inngest is an open-source workflow orchestration platform that lets you build reliable AI workflows and takes care of so many of the problems that come with them. I've been using it to build agentic workflows in our company, and the developer experience is really good; with the MCP server, it's gotten even better. These workflows are built with async functions, and there are some problems with testing and debugging them. Most of them are triggered by external events, and they run asynchronously with multiple steps. For those of you who don't know what asynchronous means, these are functions that can pause, wait for something to finish, and then continue without blocking everything else. These functions are part of larger workflows, which makes debugging even harder. Usually that means manually triggering events, switching back and forth between your code editor and your browser, digging through logs to work out what actually happened with a single function or why it failed, or recreating complex events yourself just to test a function.
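To make that concrete, here's a minimal sketch of what a multi-step Inngest function looks like in TypeScript. The app ID, event name, and step names are placeholders I've made up for illustration; they're not from the research agent itself.

```ts
import { Inngest } from "inngest";

// A single Inngest client for the app (the ID here is just an example).
export const inngest = new Inngest({ id: "research-agent-demo" });

// A multi-step async function: it's triggered by an event, and each step.run()
// is retried and memoized independently, so the function can pause between
// steps without blocking anything else.
export const answerQuestion = inngest.createFunction(
  { id: "answer-question" },
  { event: "research/question.asked" },
  async ({ event, step }) => {
    // Step 1: gather context for the question.
    const context = await step.run("retrieve-context", async () => {
      return [`some source text about: ${event.data.question}`];
    });

    // Step 2: produce an answer from that context.
    const answer = await step.run("generate-answer", async () => {
      return `Answer based on ${context.length} source(s)`;
    });

    return { answer };
  }
);
```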
But now, with the MCP integration, your AI agent can handle all of this automatically. They also published a context-engineering-in-practice article where they explain how they actually built an AI research agent, and I'll be using that agent to show how the MCP works. In this agent, they applied context engineering inside the agent itself, in both its context retrieval phase and its context enrichment phase, rather than just using it to build the agent. They also explain the difference between context pushing and context pulling really well. It's a really interesting article, and I might make a video on it, so if you're interested in that, do comment below. The agent is completely open source. I copied the link, cloned it, installed the dependencies, and initialized Claude Code. I had it analyze the codebase and create the claude.md.
The article also explains why you should use different models for their different strengths, and the research agent implements separate LLMs for the different roles. They're using Vercel's AI Gateway, which gives you access to 100-plus models. I wanted to use a single model, so using the claude.md, Claude updated the codebase and switched it to use OpenAI's API. After editing, it just told me which files it had changed.
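I won't claim this is exactly how the agent wires up its models, but as a rough sketch, assuming it calls the Vercel AI SDK, pointing every role at a single OpenAI model might look something like this (the model name and role prompt are made up for illustration):

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// One shared model for every role, instead of a different Gateway model per role.
const model = openai("gpt-4o-mini"); // illustrative choice; reads OPENAI_API_KEY

// Example role: the planning step of the research agent.
export async function planResearch(question: string) {
  const { text } = await generateText({
    model,
    system: "You plan the research steps for the question you are given.",
    prompt: question,
  });
  return text;
}
```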
After that, I copied the MCP configuration for Claude Code, created an MCP.json file, pasted it in, started the Next.js app, and then started the Inngest dev server, which you've already seen. After that, I restarted Claude Code and checked that the MCP was connected.
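The exact entry to paste comes from Inngest's docs. As a rough sketch of the shape of a project-level MCP config for Claude Code, assuming the dev server (which defaults to port 8288) exposes its MCP endpoint over HTTP at a path like /mcp, it would look something like the following; treat the server name, URL, and path as assumptions rather than the official values:

```json
{
  "mcpServers": {
    "inngest-dev": {
      "type": "http",
      "url": "http://localhost:8288/mcp"
    }
  }
}
```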
Inside the MCP, you have event management, where it can trigger functions with test events and get run IDs, along with tools that let it list and invoke functions. You also get monitoring tools that let it check run status, and documentation access too. So if something does go wrong with the Inngest functions, I no longer have to dig around manually to find out what's wrong with my agent; these tools can tell Claude what went wrong, and it can fix it for me. It used the send-event tool to query the main research function with the question "what is context engineering". After that, it polled the run status, which basically means it asked over and over again whether the run was complete or not.
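Under the hood, the send-event tool is doing roughly what you'd do yourself with the Inngest SDK: send an event and keep the returned run IDs so the status can be polled against the dev server. Here's a minimal sketch, with an event name and payload I've made up to match the demo question:

```ts
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "research-agent-demo" });

async function main() {
  // Equivalent of the MCP send-event tool: trigger the main research function.
  const { ids } = await inngest.send({
    name: "research/question.asked",
    data: { question: "What is context engineering?" },
  });

  // The MCP's monitoring tools then poll these run IDs against the dev server
  // until the run reports that it has completed or failed.
  console.log("Triggered run IDs:", ids);
}

main().catch(console.error);
```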
Then it tested it again and saw that all of them were using the correct model name and the workflow was still executing nicely. In their own words, this represents a fundamental shift in how we build and debug serverless functions. Instead of functions being black boxes that the AI model only reads from the outside, the AI can now work inside the actual execution and get real-time insight. Hopefully we'll see this happening with other tools as well, where we're giving AI more autonomy, and I'm pretty excited for it. That brings us to the end of this video. If you'd like to support the channel and help us keep making videos like this, you can do so by using the Super Thanks button below. As always, thank you for watching, and I'll see you in the next one.
Try Inngest Today: https://innge.st/yt-ailab-2

Inngest just released an MCP server for their dev server that lets AI agents like Claude Code and Cursor directly trigger, test, and debug serverless functions in real time. No more manual log digging or switching between your editor and browser - your agent can now see inside function execution and fix issues automatically.

LINKS
Website: https://ailabs393.com/
Book a Call: https://autometa.dev/booking
Business: hello@autometa.dev

COMMUNITY
Discord: https://discord.gg/S2hD7b28Qy
X: https://x.com/ai_labs393
Facebook: https://facebook.com/profile.php?id=61567670378819
Instagram: https://instagram.com/ailabs.yt