Not only have I built hundreds of AI agents myself, I've seen other people build thousands for every use case under the sun. And those who are the most successful are the ones who don't overcomplicate it. In this video, I want to show you how that can be you as well. Because here's the thing, and I see this all the time: when people first think about building their AI agents, their perfectionism kicks in. They worry about creating the perfect system prompt, defining the perfect tools, and picking the right LLM. They think about context and observability and latency and security and deployment, and they get overwhelmed by everything. That might be you as well. So what I have to say to you right now is: take a deep breath. That is why I'm here. Honestly, you can learn 90% of what you need to build AI agents from just this video. I want to cover each of the core components of building agents, like system prompts, tools, security, and context, and break down what you should focus on to build the first 90% of your agent, basically creating that proof of concept. Then, even more importantly, I want to talk about what you shouldn't focus on at first, because otherwise you're overcomplicating it: the kinds of things you will need to look into at some point, when you want to specialize your agents and move to production. But that's what my other content is for. Right now, whether you're new to building agents or you just want to build them faster, I want to help you focus on the first 90% to make things dead simple. Oh, and by the way, the agent you're looking at right here is the mascot for the new Dynamous Agentic Coding course. So if you want to master building systems around AI coding, check out the link in the description. All right, the first thing I want to cover with you is the four core components of any AI agent,
which, as a quick recap: an AI agent is any large language model that is given the ability to interact with the outside world on your behalf through tools, so it can do things like book a meeting on your calendar or search the internet for you. That's the first component of agents: the tools, the functions we give it that it can call upon to perform actions. The brain of our AI agent is the large language model. It processes our requests and decides, based on the instructions we give it, which tools to use. And speaking of those core instructions, that is our agent program, aka the system prompt: the highest-level set of instructions we give to any AI agent at the start of any conversation, instructing it on its persona, its goals, and how to use tools. We'll cover the core components of system prompts in a little bit. Last but not least, we have memory systems. That's the context we have from our conversations, both short-term and long-term memory. We'll talk about this more when we get into context as well. As we go through each of these core components, I'm going to move pretty quickly because I just want to cover the basics with you, but I'll also link to different videos on my channel throughout if you want to dive deeper into anything. And when building an AI agent, it is really simple to get started. I'll show you an example in code in just a bit here so
you can really see what I'm talking about. When you're building the very core of your AI agent, it's really just three steps: pick a large language model, write a basic system prompt as the agent instructions, and then add your first tool, because without a tool it's really just a regular large language model, not an agent. For picking a large language model, I would highly recommend using a platform called OpenRouter, because it gives you access to pretty much any large language model you could possibly want. Claude Haiku 4.5 is the one I generally use as I'm prototyping my AI agents, but you could use GPT-5 Mini, or an open-source model like DeepSeek, for example. All of them are available on this platform. Then, when creating your system prompt, you just want to define your agent's role and behavior. You can refine this over time as well; start really simple. Then add your first tool: you can give it access to search the web, or the ability to perform mathematical computations with a calculator tool. Whatever it is, just start simple, and once you have this foundation, that's when you can build on more capabilities and integrations. And I want to show you more than just theory as well. Let's actually go and build an AI agent right now so you can see practically how dead simple it really is. I'll have a link to this repo in the description
as well if you want to dive into this extremely basic agent that covers all of the components in this video, even some things we'll talk about in a bit like observability. So you can get this up and running yourself, and even use it as a template for your first agent if you want. I'm going to build it from scratch with you right now and show you line by line how simple this really is. It's going to be less than 50 lines in the end, just like I promised in the slide. First, I'm going to import all of the Python dependencies. I'm using Pydantic AI since it's my favorite AI agent framework, but it really doesn't matter which one you use. The principles that I'm covering in this video apply no matter how you're building your agents, even if it's with a tool like n8n, because what I'm focusing on here is just defining our four core components: LLM, tools, memory, and a system prompt. So the first thing I'm going to do is define
the large language model that I want to leverage. Just like I talked about a little bit ago, I'm using OpenRouter, and right now I'm going to use Claude Haiku 4.5 as my model. But just by changing this line, or just changing my environment variable here, a single-line change, I can swap to any model I want, like Gemini or DeepSeek or OpenAI. It's that easy. After I have my LLM defined, I define the agent itself, including the system prompt, the high-level instructions. I'm importing this from a separate file; I'll just show you a very basic example of a system prompt here, and more on this in a little bit. The core components that I generally include are the persona, the goals, the tool instructions, the output format (how it communicates back to us), and then any other miscellaneous instructions I want to include. So I have this saved here, and this part of my agent is now defined. The next thing that we need to add is a tool to really
turn it from an LLM or a chatbot into a full-fledged agent. The way that you do that with most AI agent frameworks is you define a Python function, very simply, and then you add what is called a decorator. This signals to Pydantic AI that I want to take this function and attach it to my agent as a capability that it can now invoke. The agent fills in these parameters when it calls the tool. In this case, this is a very basic tool to add two numbers together, because large language models, as token-prediction machines, actually suck at math. Interesting fact. The agent also leverages what's called the docstring: this comment is included as part of the prompt to the LLM, because it defines when and how to use the tool. Here the functionality is very basic, just adding two numbers together, but this could be a tool to search the web based on a query the agent defines, or to create an event in our calendar based on a time range and title that it defines. All of those things are parameters, and then we perform the functionality for the agent based on them. That's the tool for our agent. So we've created our agent and added the tools; the only thing we have to do now is set up a way to interact with it. I'm going to create a very basic command-line interface here. We start with an empty conversation. This is where
we'll add memory, which is the fourth component of agents. In an infinite loop, we're getting the input from the user, and we're exiting the program if they say exit. Otherwise, we call the agent: very simply, agent.run with the user's latest message, passing in the conversation history as short-term memory so it knows what we've said to each other up until this point. Then I add everything we just said onto the conversation history and print out the agent's latest response. Take a look at that. Even after we call our main function here, we are still below 50 lines of code. It is that easy to define our agent. Obviously, there are many more things we'd have to do to really get our agent to the point where it's production-ready, but again, I just want to focus on making it dead simple for you right now.
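Here's roughly what that sub-50-line agent looks like. This is a sketch, not the exact repo code: the model string and Pydantic AI's API details (tool_plain, run_sync, result.output) shift between versions, so treat the names here as assumptions and check the repo linked in the description for the real thing.

```python
# Sketch of the agent from the video -- NOT the exact repo code.
# Assumptions: Pydantic AI's Agent API (model string, tool_plain,
# run_sync, result.output) and a provider key set in the environment.

SYSTEM_PROMPT = (
    "You are a concise, helpful assistant.\n"         # persona + goal
    "Use the add_numbers tool for any arithmetic.\n"  # tool instructions
    "Answer in short, plain sentences.\n"             # output format
)


def add_numbers(a: float, b: float) -> float:
    """Add two numbers together.

    This docstring is sent to the LLM as part of the tool's schema,
    so it should explain when to use the tool: call it for any sum,
    since LLMs are token predictors and unreliable at raw arithmetic.
    """
    return a + b


def main() -> None:
    from pydantic_ai import Agent  # imported here so the tool works without the dependency

    # One-line model swap: change this string to move between providers.
    agent = Agent("anthropic:claude-haiku-4-5", system_prompt=SYSTEM_PROMPT)
    agent.tool_plain(add_numbers)  # attach the function as a callable tool

    history = []  # short-term memory: the running conversation
    while True:
        user = input("You: ")
        if user.strip().lower() == "exit":
            break
        result = agent.run_sync(user, message_history=history)
        history = result.all_messages()  # carry the conversation forward
        print("Agent:", result.output)

# Call main() with your API key configured to start the chat loop.
```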
And I know that a lot of this might be review for you if you've built agents in the past. But especially if you have built a lot of AI agents already, you're probably like me: a lot of the time you overcomplicate things because you know how much can go into building agents. What I'm trying to do is draw you back to the fundamentals, because you need to keep things simple when you're first creating any agent, really any software at all. So now we can go into the terminal and interact with our agent. I'm going to run agent.py here, everything that we just built. I can say hello to get a super simple response back. And then I can say, for example, "what is," and I'll just pick a couple of bigger numbers that I want to add together. Here it knows, thanks to the tool description, that it should use the add_numbers tool we gave it to produce this sum. There we go, take a look at that. I can even say, "did you use the tool?", and it should say yes. It actually recognizes, based on the conversation history, that it used the add_numbers tool. Okay, perfect. So we've got this agent with conversation history, it knows when to use its tool, and at this point we can start to expand the tools we give it, refine our system prompt, and play around with different LLMs. And I want to talk about that as well. Now, starting with large language models and choosing your LLM: like I was saying when I was building the
agent, Claude Haiku 4.5 is the one that I recommend: just a cheap and fast option that's really good for building proofs of concept, when I don't want to spend a lot of money on tokens as I'm iterating on my agent initially. Then Claude Sonnet 4.5 is generally the best all-around right now. This might change in literally a week, and people have different opinions. The main thing I want to communicate here is: don't actually worry about picking the perfect LLM up front, especially when you're using a platform like OpenRouter that makes it so easy to swap between LLMs. Even if you're not using OpenRouter, it still is really easy. And if you want a local model for privacy reasons, or you want to be 100% free running on your own hardware, then Mistral Small 3.1 or Qwen 3 are the ones that I recommend right now. If you haven't ever tried OpenRouter or a tool like it that routes you between the different LLM providers, I would highly recommend trying one, because it makes it so easy to iterate on the LLM for your agent, giving you instant access to, take a look at this: Grok, Anthropic, Gemini, the GPT models, Qwen 3, all the open-source ones. No matter what you want to experiment with, you've got it here. So just use this as your tool to iterate on the LLM very quickly, and don't have to think about it that much. And then for the system prompt component, I promised I would
dive a little bit more into the different categories that I use, so that's what I want to talk about very quickly. It can be especially easy to overthink the system prompt, because it's such a broad problem to solve: what should the top-level instruction set be for my agent? I like to keep things simple by working off of a template that I use for all of my AI agents, at least as a starting point. I always have persona and goals, tool instructions and examples, output format, and miscellaneous instructions. What you shouldn't worry about at this point is setting up elaborate prompt evaluations or split testing your system prompts. You can get into that when you really want to refine your agent instructions. Right now, just keep it simple and refine at a high level as you manually test your agent. And if you want to see that system prompt template in action, I've got you covered; I'll have a link to it in the description as well. It's a real example of me filling out those different sections, creating a system prompt for a task management agent. I have my persona defined here, and I'm defining the goals for the task management agent. The tool instructions cover things like how it can use different tools together to manage tasks in my platform. The output format just specifies ways that I want it to communicate back to me, or things to avoid. Then some examples: these apply more to complex agents and system prompts, where you actually want to give an example of a workflow chaining different tools together, so they don't really apply here. The last thing is miscellaneous instructions. This is the place to add extra instructions that fix the little issues you see with your agent and that don't necessarily fit anywhere else: a catch-all so there's a place to put anything as you're experimenting with your agent and refining your system prompt.
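As a sketch, the skeleton of that template looks something like this (my wording, not the linked document verbatim):

```text
# Persona
You are <name>, a <role> that <one-sentence description>.

# Goals
- <primary goal, a couple of sentences at most>

# Tool Instructions & Examples
- Use <tool_a> when <condition>.
- Example workflow: call <tool_a>, then pass its result to <tool_b>.

# Output Format
- Respond in <format>; avoid <things to avoid>.

# Miscellaneous
- <catch-all fixes you discover while testing>
```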
As far as tools go for your AI agents, there are just a few things I want to cover quickly to help you keep things simple and focused. The first is that you should keep your tools to under 10 for your AI agents, at least when starting out. And you definitely want to make sure that each tool's purpose is very distinct, because if your tools have overlapping functionality, or if you have too many, then your large language model starts to get overwhelmed with all the possibilities of its capabilities. It'll use the wrong tools, it will forget to call tools, and it's just a mess. So definitely keep it to under 10. Also, MCP servers are a great way to find prepackaged sets of tools you can bring into your agent when you're creating something initially and just want to move very quickly. Based on what you're building, you'll probably be able to find an MCP server that gives you some functionality right out of the box for your agents. The last thing I'll say is that a lot of people ask me, "What capabilities should I focus on learning first when I'm building agents?", and tools and RAG is always the answer I have for them. Giving your AI agent tools that allow it to search your documents and knowledge base: that's what retrieval-augmented generation is. Really, it's giving your agents the ability to ground their responses in real data. And I would say that probably over 80% of AI agents running out in the wild right now, no matter the industry or niche, are using RAG to some extent as part of their capabilities. Continuing with our theme here, what not to focus on when building tools: don't worry about multi-agent systems or complex tool orchestration yet. When you have a system that starts to have more than 10 tools, that's generally when you start to split into specialized sub-agents with routing between them. Those kinds of
systems are powerful and necessary for a lot of applications, but definitely overengineering when you're just getting started creating your agent or system. Also, if you want to learn more about RAG and building it into your agents, check out the video that I'll link to right here. I cover it all the time on my channel because it is so important. With that, moving on to the next thing: security essentials, because it is important to think about security up front when you're building any software. But I don't want you to overcomplicate it yet either; you don't have to become a security expert overnight. There are existing tools out there to help us with security, so we can still move quickly as we build our agent initially. We'll definitely want to pay more attention to security when we're going into production, but at first there are a couple of tools that I want to call out here, and then just some general principles to follow. For example, don't hardcode your API keys. You don't want your OpenAI or Anthropic API key just sitting there in your code or your n8n workflow. You always want to store it in a secure way, through things like environment variables.
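As a minimal sketch of that rule in Python (the variable name is just whatever your provider expects, and python-dotenv is a common but optional way to populate it locally):

```python
# Keep secrets out of source code: read them from the environment.
import os


def get_api_key(name: str = "OPENROUTER_API_KEY") -> str:
    """Fetch an API key from the environment, failing loudly if it's missing."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"Set the {name} environment variable, e.g. in a local .env file")
    return key
```

Then every client gets its key via `get_api_key()` instead of a literal pasted into the code.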
Also, when we think about building AI agents in particular, there's a lot of security that we want to implement through what are called guardrails: limiting what kind of information can come into the large language model, and also limiting the kinds of responses that the agent can give, having it actually retry if it produces any kind of response that isn't acceptable to us. There's a super popular open-source repository that I lean on all the time to help with guardrails, very creatively called Guardrails AI. It's a Python framework (and I always love building my AI agents with Python) that helps you build reliable AI applications by giving you both the input and output guardrails that I'm talking about: limiting what goes in, and limiting what the agent can produce. And they provide a lot of different options for guardrails. For example, one thing you often want to avoid is inserting any kind of PII, personally identifiable information, into a prompt to an LLM, especially when it's going out to some model in the cloud like Anthropic or Gemini instead of a local LLM. Or maybe detecting any vulgar language output by an LLM, because they will do that sometimes. Those are just some examples of input and output guardrails. And it is very easy to install this as a Python package and bring these guards right into your code as you interact with your agents, like we saw earlier with that simple command-line tool to talk to the agent. I could just add a guard before or after that call to the agent.
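To make the shape of that concrete, here's a hand-rolled illustration of the input-guard / output-guard / retry pattern. This is NOT the Guardrails AI API (use that library's validators for real PII detection); every name here is made up for illustration:

```python
import re

# Hand-rolled guard pattern for illustration only -- in practice,
# Guardrails AI's validators do this far more robustly.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def input_guard(prompt: str) -> str:
    """Redact obvious PII (here, just email addresses) before the LLM sees it."""
    return EMAIL.sub("[REDACTED EMAIL]", prompt)


def output_guard(response: str, banned: tuple = ("darn",)) -> bool:
    """Return True if the agent's response is acceptable to send to the user."""
    lowered = response.lower()
    return not any(word in lowered for word in banned)


def guarded_call(llm, prompt: str, max_retries: int = 2) -> str:
    """Wrap any LLM callable with an input guard, an output guard, and retries."""
    safe_prompt = input_guard(prompt)
    for _ in range(max_retries + 1):
        response = llm(safe_prompt)
        if output_guard(response):
            return response
    raise RuntimeError("Agent kept producing unacceptable output")
```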
So yeah, guardrails don't have to be complicated. There are tools like this, even completely open-source ones like Guardrails AI, that make it very easy. Okay, so we've talked about guardrails, and I gave you one example of security best practices in our codebase. But what about the other million vulnerabilities we have to account for in our codebase and the dependencies we're bringing into our project? We can't expect ourselves to become security experts overnight. And
so it's important to learn these things, but we can also lean on existing tools to help us with vulnerability detection. There are a lot of options out there for this, but Snyk Studio is one that I've been leaning on a lot recently. They also have an MCP server within the studio to help us handle vulnerability detection automatically, right within our coding process. Like always, I'm trying to focus on open-source solutions for this video, but there's really no open-source alternative to Snyk that I know about; this platform is incredible. In Snyk Studio, we can set up different projects and integrations. We can have it analyze our codebase and dependencies for vulnerabilities in our GitHub repositories. They have a CLI, so we can do things locally, and they have the MCP server that I'm going to show you in a little bit. I'll link to all of this in the description. The MCP server in particular is super cool to me, because we can have vulnerability detection built right into our AI coding workflows. So take a look at this: I have the Snyk MCP server connected directly to my Claude Code after going through the Snyk authentication process in the CLI. And you can connect this to literally any AI coding assistant or MCP client. So now, within Claude Code, I could build this into a full AI coding workflow, which is very cool. I'm going to show you a simple demo right now. I'll just say, "use the Snyk MCP to analyze my code and dependencies for vulnerabilities." It's able to leverage different tools within the MCP server to check for both; it's a very robust solution. I'll let it go for a little bit, and I'll pause and come back once it has run the vulnerability detection. Okay, this is so cool, take a look at this. Within my basic agent repository, first it used the Snyk MCP server to analyze my dependencies for any vulnerabilities, things like Pydantic AI, for example. Then it does a code scan, so this would also detect things like hardcoded API keys, like the example that I gave earlier. It found three issues with my dependencies and nothing with my code, which I'm very proud of: no issues in my code. And not only does it do the analysis, it gives me a summary and lists the actions I can take to remedy things. Here are the medium-severity vulnerabilities that I have within a few of my dependencies, and nothing in my code. Then it gives me recommendations to fix things, so I can say, "yes, action on this now," and it's going to update my requirements.txt and fix these things. I could even run the Snyk MCP server again. You can definitely see how you'd build this kind of thing directly into the validation layer of your AI coding workflow. Very, very neat for any AI agent, or really any software you want
to build at all. Moving on, I want to talk about memory: managing the tokens that we're passing into the LLM calls for our agents. This really is a hot topic right now, especially with all the rate limiting people are hitting with AI coding assistants like Claude Code. It really is important to manage our context efficiently, giving our agent only the information it actually needs, and not bloating our system prompts with thousands of lines of instructions and tools that it doesn't actually need. That's what you want to avoid. So, just a couple of simple tips here, going along with our theme. The first one is to keep your prompts very concise: both your system prompts and the tool descriptions that tell your agent when and how to use tools, like I showed in the code earlier. You don't need to overcomplicate it; that's why I have these templates for you, like the one for the system prompt. You have your goal in just a couple of sentences, your persona in just a couple of sentences. Keep it very organized, and keeping it organized also helps you keep it concise. You don't need to overthink it. Keeping your system prompts to a couple of hundred lines at most is generally what I recommend. Some solutions might need more, but that's when I'd start to question whether you could really make it more concise, or split it into different specialized agents so each agent still has a simple system prompt. Another thing you can do, for agents that have longer conversations, is to limit the context to a sliding window of, for example, the 10 or 20 most recent messages that you actually include. Going back to the code, I'll even show you what that looks like. Right now, when we call our agent, we're passing in the entire conversation history. But in Python, if I wanted to include just the last 10 messages, I could do something like this. Now maybe all the older messages aren't really as relevant anymore; we just want to include the most recent 10. That's how we can do that, and it's another really popular strategy. Tools like n8n even have it as an option baked directly into their short-term memory nodes, so that's very useful to know.
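That sliding window is just a list slice in Python; a tiny helper like this (the names are mine, not the repo's) is all it takes:

```python
# Sliding-window short-term memory: keep only the most recent messages.
def recent_window(history: list, max_messages: int = 10) -> list:
    """Return the last `max_messages` entries of the conversation history."""
    return history[-max_messages:]
```

You'd then pass `recent_window(history)` as the message history instead of the full list.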
Then, when you start to have so much information about a single user that you don't want to include it all in short-term memory, that's when you can look at long-term memory. But don't build it from scratch; again, don't overcomplicate it. Just like with security, there are tools you can use to help with long-term memory, and Mem0 is one of them. Mem0 is a completely open-source long-term memory framework for agents. I'll show the GitHub in a second, but when you have so much information about a user that you can't just include it all in context, you need some way to search through a longer-term set of memories and bring in only the ones that are relevant to the current conversation, which actually uses RAG under the hood, by the way. So again, another example of why it's such an important capability. Basically, you're able to pull core memories from conversations and store them to be searched later. That's what Mem0 offers us, and it's so easy to include in our Python code, just like Guardrails AI. I'll show you an example really quickly from their quick start. You install it as a Python package, and then you basically have a function to search for memories, performing RAG to find memories related to the latest message, and a function to add memories. It uses a large language model to extract the key information to store so it can be retrieved later. This definitely solves the context problem, because now you're able to have basically infinite memory for an agent without giving it all to the LLM at once.
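From memory, that quick-start pattern looks roughly like this. The `Memory`, `add`, and `search` calls are my recollection of Mem0's API and need an LLM provider configured, so check their docs for the exact signatures in your version; the little context helper is just my own illustration:

```python
# Long-term memory pattern: store key facts, search for the relevant
# ones, and fold them into the prompt instead of the whole history.


def memories_to_context(memories: list) -> str:
    """Fold retrieved memory strings into a block for the system prompt."""
    if not memories:
        return ""
    return "Relevant things you know about this user:\n- " + "\n- ".join(memories)


def main() -> None:
    # Kept out of import time: Mem0 calls an LLM under the hood, so this
    # needs a provider key (e.g. OPENAI_API_KEY) to actually run.
    from mem0 import Memory

    memory = Memory()

    # Store: an LLM extracts the key facts worth remembering.
    memory.add("I'm vegetarian and allergic to nuts.", user_id="alice")

    # Retrieve: RAG-style search for memories relevant to the new message.
    hits = memory.search("What should I cook for dinner?", user_id="alice")
    print(hits)

# Run main() with a provider key configured to try it end to end.
```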
It just retrieves things as needed. And of course, the last thing I want to hit on for context is what not to focus on when you're first building your agent. Do not worry about advanced memory compression techniques. There are a lot of cool things that Anthropic especially has been doing research on, but don't worry about that yet. Don't worry about specialized sub-agents either. These are both solutions for when the memory problem starts to get really, really technical. Right now, just start simple; you can always optimize things as you expand your agent, go to production, and hit some limits. Focusing on these things up front is all you need to go the first 90%, probably even beyond, depending on how simple your agents are. Context was the last of the four core components of agents, so we've covered the core four and security. Now I want to talk a bit about observability and deployment: getting our agent ready for production. I will say that security, observability, and deployment definitely go a lot more into the last 10% of building an agent, but I want to touch on them here because there are some ways to design things up front very simply, especially with observability. I want to introduce you to Langfuse right now. I've covered this on my channel already; I'll link to a video right here on Langfuse if you want to dive more into observability. We can set up the ability to watch the actions that our agent is taking and view them in a dashboard, and we can do things like testing different prompts for our agents. It is a beautiful platform, and it's actually super easy to incorporate into our code. I did this very sneakily already when I built the agent with you: I have a function here called setup_observability, and all it does is initialize Langfuse based on some environment variables that I have set here. I cover all of that in my YouTube video on Langfuse if you're curious, but you basically just connect to your Langfuse instance. After you set up the connection and instrument your Pydantic AI agent for observability, that is all you have to do. Literally no more code in here for Langfuse, and it's going to watch all of our agent executions, even getting a sense of the tool calls being made under the hood.
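A sketch of what a setup_observability function like that can look like. The OTLP endpoint path and Basic-auth header follow Langfuse's OpenTelemetry docs as I recall them, and `Agent.instrument_all()` is Pydantic AI's one-call instrumentation, but both APIs move quickly, so treat this as the shape rather than the repo's exact code:

```python
# Wire Pydantic AI traces to Langfuse over OpenTelemetry -- a sketch,
# with endpoint/header details that may differ by Langfuse version.
import base64
import os


def basic_auth_header(public_key: str, secret_key: str) -> str:
    """Build the Basic auth value that Langfuse's OTLP endpoint expects."""
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return f"Basic {token}"


def setup_observability() -> None:
    """Point OpenTelemetry at Langfuse, then instrument every agent."""
    host = os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com")
    os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = f"{host}/api/public/otel"
    os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=" + basic_auth_header(
        os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"]
    )

    from pydantic_ai import Agent

    Agent.instrument_all()  # all agent runs and tool calls now emit traces
```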
So take a look at this: I'm in the Langfuse dashboard now, where I can view that execution from our test earlier where it used the add_numbers function. We have all of this very rich data around the number of tokens it used and the latency. We can view the tools and also look at the different parameters, like the tool arguments for the numbers to add. We can view the system prompt that was used here, based on that template we have defined. All this observability also really helps for monitoring our agents in production, when other users are leveraging the agent and we can't just look at our chat to see how the agent is performing. There are so many other things within Langfuse as well that I don't want to get into right now, like evals for your agent. It is a totally open-source platform, just like Mem0 and Guardrails AI, so again, we're focusing on open source a lot in this video. There are other solutions for this kind of observability, like Helicone and LangSmith, for example, but Langfuse is the one that I love using. And I know I didn't cover it much in the code, but it really is as simple as what I showed you. So you can use the repository that I have linked below as your template to start an agent with observability baked right in, if you're interested. And then the very last component that I want to at least touch on right now is how you can configure your agent up front to work
well for deployment, when you're ready to take your agent into production. Obviously, that's going to be part of the last 10%, not something I'm going to talk about a lot in this video. But the one big golden nugget I want to give you here is that you should always think about how you can build your AI agent to run as a Docker container. Docker is my method for packaging up any application, especially AI agents, that I want to deploy to the cloud. I will also say that AI coding assistants are very good at setting up Docker configuration, like your Dockerfiles and Docker Compose files, so leverage those. Then you can add a simple Streamlit application with Python, or build a React front end, to create a chat interface for your agent if it is a conversationally driven agent. Otherwise, for more background-style agents that run on a dataset periodically, I'll run the agent as a serverless function. So those are pretty much the two tracks I have for any agent I want to deploy: a background agent runs as a serverless function in a Docker container, and a conversational agent runs in a Docker container along with a front-end application. So yeah, think Docker-native. Have that in your mind from the get-go when you're building your agent.
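A minimal Dockerfile for an agent like this might look as follows; the file names (agent.py, requirements.txt) are assumptions about your layout, not taken from the repo:

```dockerfile
# Assumed layout: agent.py and requirements.txt at the repo root.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# API keys arrive as environment variables at run time -- never baked in.
CMD ["python", "agent.py"]
```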
What you don't want to focus on for observability, deployment, and everything production-ready is Kubernetes orchestration, extensive LLM evals, or prompt A/B testing. Some of the things we have in Langfuse are very powerful when you want to really refine your agent's tools and system prompt and everything, but don't even worry about that yet. You can definitely get there, and like I said, it's a core part of the last 10%. Right now, also don't think about the infrastructure that much, because unless you're running local large language models, you don't really need heavy infrastructure for your agents at all. Obviously it depends on the amount of usage your agent gets, but for most use cases, just a couple of vCPUs and a few gigabytes of RAM is all you need to run an AI agent, even if you have a front-end application as well. Very, very lightweight, as long as you are calling a third party for the large language model, like OpenRouter or Anthropic or Gemini, whatever that might be. So there you go, that's everything I have for you today, helping you just keep it simple, which will not only help you build better agents, even when you have to scale complexity, but also help you get over that hurdle of motivation, because I'm giving you permission to not be perfect at first. You just start with the foundations like I showed you, and then build on top and iterate as you need. I hope that inspires you to go and build your next AI agent right now, because it can be super simple to start. And with that, if you appreciated this video and you're looking forward to more on building AI agents and using AI coding assistants, I'd really appreciate a like and a subscribe. And with that, I will see you in the next one.
Not only have I built hundreds of AI agents myself, I've seen other people build thousands of AI agents for every use case under the sun. The people who are the most successful are the ones who don't overcomplicate it - and I want that to be you too. It's easy to think building AI agents is super complicated, but honestly you can learn 90% of what you need to know (and what to focus on) from this video. No matter how you're building your agents, I'll show you here what you need to think about, and more importantly what you shouldn't worry about when you're first creating your agent.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Platforms mentioned in this video (most are open source!):
- Snyk for automated AI-native app security: https://snyk.plug.dev/xgmYQhO
- Guardrails AI for agent guardrails: https://github.com/guardrails-ai/guardrails
- Pydantic AI for the AI agent framework: https://ai.pydantic.dev/
- Mem0 for long-term memory: https://mem0.ai
- Langfuse for agent observability: https://langfuse.com/
- Docker for deployment: https://www.docker.com/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- If you want to master AI coding assistants and learn how to build systems for reliable and repeatable results, check out the new Agentic Coding Course in Dynamous: https://dynamous.ai/agentic-coding-course
- Here is the repo for the super simple AI agent we built in this video: https://github.com/coleam00/ottomator-agents/tree/main/ai-agent-fundamentals
- Here is the system prompt template I covered: https://docs.google.com/document/d/1-OB4ZMg20pIRVmLw50TCclzQWCATaezV_cx73Muw5wU

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thanks to Snyk for working with me on the security portion of this video! It's been a pleasure, and I really do believe Snyk is at the forefront of software/agent/AI coding security, especially with their new MCP server: https://docs.snyk.io/integrations/snyk-studio-agentic-integrations#local-and-remote-mcp-server-support

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

00:00 - Master the First 90% of Building AI Agents
01:38 - The 4 Core Components of AI Agents
03:06 - The First 3 Steps of Building an Agent
04:09 - Building a Basic AI Agent Together Live
09:17 - Choosing Your LLM
10:34 - Crafting Your System Prompt
12:20 - Creating Your Tools (Agent Capabilities)
14:26 - AI Agent Security
15:32 - Guardrails AI
16:45 - Snyk MCP Server
19:45 - Managing Agent Context (Memory)
22:05 - Mem0 for Long Term Agent Memory
23:53 - Agent Observability (with Langfuse)
26:28 - Agent Deployment (with Docker)
28:41 - Final Thoughts

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Join me as I push the limits of what is possible with AI. I'll be uploading videos weekly - at least every Wednesday at 7:00 PM CDT!