Hey guys, this is Christian, and today I'd like to show you some of my best n8n HomeLab automation workflows that I recently built. And yes, of course, I've built them with AI agents. But honestly, guys, I'm a little proud of this, because I've seen so much sloppy AI content on YouTube, especially about n8n, and I think I've found something actually practical and useful for HomeLab automation where AI fits in perfectly. For example, one of my new workflows entirely automates application and service updates, but without breaking anything critical, because AI analyzes all of the patch notes and searches for breaking changes or database migrations. And the second workflow analyzes all of my servers' health and performance data, which even surfaced some issues and problems that I, as a human, wouldn't have been able to notice at all. So, if you're a DevOps or sysadmin person and you really want to know how to use AI in a meaningful way for your IT infrastructure automation, then keep watching; this video is going to be interesting for you.
By the way, if you're completely new to no-code IT automation and you want to quickly get started with a self-hosted n8n installation that is always up and running, then check out Hostinger, the sponsor of today's video. Hostinger has a super simple one-click n8n installation on their VPS KVM 2 servers that you can use to spin up a new n8n instance with unlimited workflows and unlimited concurrent executions within just a few minutes. It is a solid and reliable setup that even supports queue mode, and because it is running on a regular Linux server, you get full control over the environment. In my opinion, Hostinger is a great provider for this, because they've got high-performance servers with powerful hardware like NVMe SSD storage and AMD EPYC processors across the entire globe, including locations in Germany, France, the UK, North and South America, and Asia, with fast network connections, automated snapshots and backups, and advanced security settings. Right now is probably the best time ever to get started with Hostinger and one of their KVM 2 plans, because until the 15th of December Hostinger has a Black Friday sale where you get an incredible discount on all of their yearly plans. And with my code and link LEMPA, you even get an additional 10% off on top. So definitely check that out right now; of course, I will put a link with my discount code in the description box down below.
All right, guys, so now let's get started with my new n8n automation workflows. As you can see here, I've already created a bunch of them that I'm actively using, and I've already shown some of the basics in my first n8n tutorial. So if you haven't watched that and you're entirely new to n8n, you should definitely check that out first, because there I explained many of the basics of working with this tool: creating the different trigger nodes, how to create and add simple applications, and how to convert and transform data using JavaScript expressions. I'm not explaining every single detail in this one, because in today's video I want to take this a bit further and really use some of the more sophisticated AI agent tools. Now, before we jump straight into the particular automation flow that I want to show you, let me first explain the problem I had: I really have trouble keeping up to date with all of my service and application updates, because honestly, as you can see, guys, I'm currently running over 30 different applications in my entire home lab, which all involve several deployments like Compose stacks, Kubernetes templates, Terraform infrastructure code, and whatnot. So there's really a lot going on in my home lab, which is great on the one hand, but on the other hand it also puts a heavy burden on me to keep things up to date.
What I have already done, in some of my previous videos, especially in my GitLab CI/CD tutorials, is automate some of the application deployments, and I also integrated Renovate, a small application that scans all of my Git repositories for any updated dependencies, like a container image that has a newer version, and then opens so-called merge requests that tell me when a new version is released for a particular application. For example, here on GitLab, you can see there is a new container image going from version 1.24.5 to 1.25.1, and it already makes the necessary changes in my deployment files. Now, if I merged this request, my CI/CD pipeline would trigger, automatically download the new container image, and redeploy the application. But I still have to do one step manually in the process, and that is clicking on merge, because I'm actually not a fan of fully automated updates. For a minor version it might be a simple and easy task, but there are also other upgrades, like a Postgres database update, that would cause a problem if I just merged them, because there are many applications that access this database and depend on it, and a Postgres major update might introduce breaking changes or require manual steps for upgrading or migration. So that's why I came up with this system of merge requests, where I still have to manually check whether something is a patch, minor, or major version, whether there are any problems for my setup, and then click on merge. Honestly, it's just a pain to manage them all, and I can never keep up. As you can see, there are even some merge requests sitting here for more than 12 months, because I didn't have the time to actually look at them.
So I thought this might be a perfect example to automate, because the patch notes often already exist; sometimes they're even pulled down from GitHub and included in the actual merge request. Here, for example, you can see that Renovate already fetches these release notes and adds them to the merge request. So why not use an AI tool that can read all of these changes, evaluate or assess the risk and impact of updating the application, and then send me a notification if there is something I should actually look at for any of the major updates? For many of the minor updates, or things that don't have any impact at all, it could completely automate the process and just merge automatically. And that's what I've tried to build in n8n. Let's take a look at this pipeline; I've called it the GitLab HomeLab Auto Updater. The pipeline is executed when a webhook arrives at my n8n instance. Here in GitLab, when you go to the project settings and then to webhooks, you can see I've created one that sends a webhook request to my n8n instance whenever a merge request event occurs. That means when Renovate creates a merge request, GitLab sends a small package with all of the details to n8n, which triggers this automation pipeline, where n8n can read all of the information coming from that merge request.
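To give you an idea of what n8n receives, here is a trimmed sketch of such a merge request event payload. The field names follow GitLab's webhook format, but the values are made up for illustration:

```json
{
  "object_kind": "merge_request",
  "event_type": "merge_request",
  "project": { "path_with_namespace": "homelab/traefik" },
  "object_attributes": {
    "iid": 42,
    "title": "Update traefik Docker tag to v3.1.2",
    "state": "opened",
    "action": "open",
    "source_branch": "renovate/traefik-3.x",
    "target_branch": "main",
    "url": "https://gitlab.example.com/homelab/traefik/-/merge_requests/42"
  },
  "labels": [{ "title": "update" }]
}
```

The `object_attributes.action` field is what distinguishes a newly opened merge request from an update or a merge.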
It then executes my pipeline, which runs the AI agent. The agent gets the files from my Git repository, so it can read the actual deployments, the configuration, and the Compose files, to find out whether that update would be harmful for me. It then adds a comment with the assessment and some notes, and it either sends me a message on Discord saying something like "hey, this needs a manual review", or, if this is an approved change where there is no problem for my setup when the application automatically gets updated and restarted, it does a squash merge on the merge request, which effectively triggers my CI/CD pipeline afterwards, and then sends me a Discord confirmation message so that I'm aware the automation pipeline has done something. I could then also review it if I wanted to, but most of the time I trust this AI agent not to do anything bad to my infrastructure, which actually works pretty well. One thing I have to emphasize at the beginning: if you're doing these types of automations, especially when you rely on AI agents, it is really important how you create the prompts and how you do proper pre-qualification and structured output. That's why I definitely want to take some more time showing you exactly how I've done that. First of all, the webhook arrives with a merge request event, which, by the way, does not mean that there is a new merge request.
It could also be an update to an existing one, and when I manually click on merge and it closes the request, GitLab will also send a webhook. So, first of all, we need to do a prerequisite check, or pre-qualification, because each prompt to an AI model produces costs depending on how many tokens you need. Therefore, I'm doing a prerequisite check that verifies the label includes "update", because Renovate always uses that particular label; that way I can tell whether this is coming from Renovate's auto-updating or from some other merge request that has nothing to do with this. And I also check whether this is an opening request, so whether a new request is being created rather than updated. If this is false, the workflow just ignores it and doesn't go any further. But if it is a new merge request with an update label, it sends it on to the AI agent. Such a check can be done with a simple IF-node expression; here's a sketch of what that might look like.
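This is illustrative rather than my exact node configuration; the JSON paths depend on how your Webhook node delivers the payload (here assumed under `body`):

```javascript
// n8n IF-node expression (sketch): only continue for newly opened
// merge requests that carry Renovate's "update" label
{{ $json.body.object_attributes.action === 'open'
   && $json.body.labels.some(label => label.title.toLowerCase().includes('update')) }}
```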
Creating AI agents in n8n is super simple: you just add a new node, click on AI, drag the AI Agent in, and it automatically creates a connected chat trigger node. With this, you can use a simple chat message to interact with the AI model, which I've also seen many people use in tutorials to demonstrate things. But I personally think that chat integration isn't actually practical for our use case, because we want to automate something with a specifically defined prompt. Therefore, it is important to switch the source of the prompt from the connected chat trigger node to "Define below", and you should also switch that field to an expression, because that allows you to add data from previously executed nodes into the prompt for the AI. When you click on this button here, it opens up an editor where you can edit it more comfortably. Here, for example, you can see I always define the prompt in Markdown format, which I found any AI model understands perfectly. It is important to give the AI model as much context as you can, because the more context it has, the smarter its decisions and the better your output will get. Here is the input information: the pull request URL, the pull request title, the author, the description, the target branch; and I've also given it some assessment instructions. Step one is to verify the source.
This is like a double check, which the pre-qualification is already doing, but I think it's important to verify it here as well. Step two is to analyze the changes: identify the dependencies being updated, whether this is a patch, minor, or major version update, and identify any breaking changes mentioned in the changelogs or the PR description. Next is the impact evaluation: check the existing deployment files and configs of this project and what impact an automated update would have, and then assess the risk level as low, medium, or high. I've also given it some particular instructions that are relevant for my personal environment, like: breaking changes that don't affect our configuration are acceptable, but if the service application itself is a database, the risk level is always high, because I don't want the model to update any database service like Postgres or MariaDB. For other service applications, automated restarts and minor changes are always accepted; I'm totally fine with this system restarting a simple Nginx web server or Traefik instance, that shouldn't be a problem in my environment. And then step four is making a decision: approve if this is safe to merge automatically, review if it requires manual review, or reject it. To make this concrete, here is a condensed sketch of how such a prompt could be structured.
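This is not my exact prompt, just an illustration of the shape; the expression paths and rules are placeholders you'd adapt to your own webhook data:

```markdown
# Role
You are a DevOps engineer reviewing an automated dependency-update merge request.

## Input
- PR URL: {{ $json.body.object_attributes.url }}
- PR title: {{ $json.body.object_attributes.title }}
- Description: {{ $json.body.object_attributes.description }}
- Target branch: {{ $json.body.object_attributes.target_branch }}

## Assessment steps
1. Verify the source: confirm this PR was created by Renovate.
2. Analyze the changes: patch, minor, or major? Any breaking changes or migrations?
3. Evaluate the impact: read the deployment files via the repository tool.
4. Decide: APPROVED, NEEDS_REVIEW, or REJECTED.

## Environment rules
- Breaking changes that do not affect our configuration are acceptable.
- If the updated service is a database (e.g. Postgres, MariaDB), risk is always HIGH.
- Automated restarts of stateless services (e.g. Nginx, Traefik) are accepted.
```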
The next step that is also super relevant is the output format, because if you've been working with AI models, you know the answers can differ depending on what you define in the prompt, and sometimes the response is something completely different. Even the assessment in the impact evaluation might differ from one day to another; that's true. But what is definitely super important is that we always stick to a specific output format, and what I personally found to be the best way to do this is JSON: JSON objects with a very strict format the AI model always has to follow. Again, the more specific you are in your prompt, the better the result will be. I will show you later what that looks like, but here, the decision must be chosen from approved, needs review, or rejected. It is important that you specifically name these fields; it's best to use upper case, because later on, in the rules that depend on these decisions, you can make if/else queries, and usually when you define it like this, the AI model will stick to these exact names.
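As an illustration (not the exact schema from my workflow), the JSON I ask the model to return looks roughly like this, with `decision` restricted to the three upper-case values APPROVED, NEEDS_REVIEW, or REJECTED:

```json
{
  "decision": "APPROVED",
  "risk_level": "LOW",
  "update_type": "patch",
  "breaking_changes": false,
  "summary": "Updates the container image from 1.24.5 to 1.25.1; no breaking changes found.",
  "recommendation": "Safe to squash-merge automatically."
}
```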
I almost forgot to show you one of the most important things: the chat model. Here you can attach a chat model to the AI agent and choose from all of these different providers, like Anthropic, Azure, DeepSeek, Google Gemini, Groq, Ollama for local integrations, or OpenAI. Personally, as you might have seen in some of my earlier videos, I'm also running a local LLM. But honestly, guys, the more I used it and the more I compared it to the results that cloud models provide, I have to say that cloud models are, in my opinion at least, so much superior to local LLMs, not only in quality but also in speed and context. And it's actually not as expensive as you might think: if you're running this pipeline just a few times a day, you might pay, I don't know, a few cents to a dollar per day. That's not nothing, of course, but compared to running a server with a powerful GPU for local LLMs 24/7, it would actually be more expensive to run a local LLM, because that consumes more power. But that's probably a completely different topic, so let's not dive too deep into it. The next thing is that you can connect tools to an AI agent, because the AI agent isn't just a simple prompt and response: it can use tools to gather the information it needs, use that output, and send another request, until it has all of the information required to produce the final result.
In my case, because this is dependency checking, the agent needs some information about my repository. If there is, for example, a Traefik version change, it should probably look at the files in the repository, like the Compose file, and see where that is used. And the beauty of this is that you don't have to specifically tell the AI which file it should check or what exactly it should do with it; you just provide the tool, and it does its thing on its own. You only have to define some particular input parameters: for example, you can set the file operation to "get", so it fetches the contents of a single file, and the project owner and project name, which I already know from the webhook. Other values, like the file path, can be filled in by the model itself, so the model figures out on its own which files to check. Here's a sketch of such a model-filled parameter.
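In n8n, tool parameters can be handed over to the model with the `$fromAI()` expression; this is just an illustrative usage, with a made-up parameter name and description:

```javascript
// n8n tool-node parameter (sketch): let the agent decide which file to fetch,
// while project owner and name still come from the webhook payload
{{ $fromAI('file_path', 'Path of the deployment file to inspect, e.g. docker-compose.yml') }}
```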
Then it is also important to enable the setting "Require Specific Output Format". This is really necessary because, again, like I said, if you don't specify an output format, the AI will just respond in whatever way it currently thinks is best. But if you enable this option, you can attach an output parser. In my case, I'm always using the Structured Output Parser, generated from a JSON example: you can just paste in the same JSON example you provided to the LLM. Of course, you need to make sure the fields all match. The values themselves aren't that important and don't have to match one to one, but the field names really are. When you define this, n8n will automatically validate whether the response the model gave actually matches our output format; otherwise it will produce an error.
Then I'm doing a simple switch case: I compare the output decision to see whether it is needs review or approved. When it needs review, the workflow sends a simple Discord message with some data parsed from the AI agent's output, like a timestamp and a description, and I also include the URL of the merge request. So when I get the message on Discord, I can just click on that URL, it opens GitLab and shows me the merge request, and I can directly start working or investigating.
Otherwise, if it is approved, I just send a simple squash-and-merge request. I haven't found a predefined action for this: when you look at the GitLab node in n8n, there are predefined actions like delete a file, edit a file, create a release, get a release, but no squash-and-merge. So I used AI to create a simple web request for this, using the PUT method with my GitLab credentials, and the specific parameters below, like squash set to true and should remove source branch set to true. This then just merges the request automatically for me. For reference, this is what the underlying API call looks like.
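This is the GitLab REST endpoint such an HTTP Request node would target; the host, project ID, and merge request IID are placeholders (in the workflow they come from the webhook payload):

```bash
# Accept (merge) a merge request via the GitLab API, squashing the commits
# and deleting the source branch afterwards
curl --request PUT \
  --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/merge_requests/<mr_iid>/merge?squash=true&should_remove_source_branch=true"
```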
So that's the automation pipeline. Of course, you probably want to see some examples of what that looks like, and I've collected a few of them. Some of the interesting cases you can't reproduce on demand, because they occur when they occur, right? But here is a merge request that I got for cert-manager, which is a simple TLS certificate management tool for Kubernetes, and there was a patch release. You can see this is a super simple, no-brainer merge request that in the past I would have had to handle manually, and these things happen all the time. When we scroll down here, you can see the comment the LLM provided. The status is approved: verified Renovate PR, yes; this updates the cert-manager container images from this version to that version; no breaking changes found; it's a simple patch; the service impact is low; the risk is also low, because this is a patch release, which implies bug fixes rather than new features or breaking changes; and therefore the final recommendation is approve and merge. So that is what happened, the application automatically got redeployed, and apparently it didn't cause any problems. But I can also show you a breaking-change event here.
As you can see, I haven't merged this one yet, because I need some more time for checking and testing. Here you can see there are some release notes, and the final assessment is down here with a status of needs review, because breaking changes were found. There was also an API restructuring introduced that changes some of the endpoints, which might be a problem somewhere in some environments. This is something the AI probably cannot decide on its own; it might try, but in such a case I think it is better to involve a human review. Therefore the impact is rated medium, but the workflow is still conservative enough to require a manual review. Of course, if you want to change this, if you want such a medium-impact change to be merged anyway, you can just modify the prompt: for example, if the assessed risk is only medium, merge anyway and just notify me if it is high. So you can still customize and experiment with that. Currently, I'm fine with being a bit more conservative on some of these changes, because I'm still in the testing phase. But as you can see, it apparently works pretty well, and I think this is already such a major improvement for my workflows, saving me so much time by automatically merging these simple patch and minor version changes. I'm still aware of everything, but it doesn't need manual interaction for around 80% of the merge requests.
But yeah, that's it for the updating workflow. I still want to show you the second pipeline, because I think AI isn't just for lazy people like me who want to automate things they would otherwise have to do manually. It's also super helpful for service monitoring and health checks, the things you usually don't have time for. And what I personally found super cool about this use case is that AI helps me capture and investigate issues that I would have overlooked with my human eyes; I will show you a very specific example of this, which is super cool. But first, let me show you this trigger: it is a schedule trigger that runs every single day at 8:00 a.m., before I start my workday. As I've told you before, I have many, many servers and many, many applications, things that I need to monitor, and of course I can't keep an eye on each and every one of them, especially not every single day. So I do have some alerting and notification, service uptime checks, and so on. But what actually happens with the things you usually don't monitor and wouldn't get notified about, like disk utilization, memory utilization, container logs, and so on? You would have to create alert rules and thresholds for each and every metric, which I guess not everybody has, especially if you have hundreds of servers in your infrastructure; you'd just get so much noise that you don't really know what's important and what's not. So I thought it might be useful to use AI here: the workflow runs a simple maintenance script that gathers all of the important container metrics, container logs, server metrics, and health checks (I will show you the script in a second), and then sends this information to an AI agent that checks whether any new issues have occurred.
The agent can also track issues it has seen before, and then it again sends me a Discord message and creates a new issue, a new task, in my Notion database. I've also shown this in a previous video: I manage all of my home lab tasks in Notion. This is, by the way, one of the reasons why I'm using Notion and not a simple text-based system like Obsidian: here I can do things like API requests, I can connect it to n8n, I can collect data, and I can update data. As you can see here, it has already created some issues that I should be aware of. I've also created some manual entries here and there, but some of these issues were created by the AI analysis, and I've already solved some of them with its recommendations. Again, let's jump back to n8n and let me show you what happens here. First of all, we again start with input qualification: we need as much context and as much data as we can get to feed the AI model. First, I query all of the open issues in my database, all of these entries that are still open and that I'm actively working on, so that the AI knows which issues it has found before; otherwise it would just produce the same findings every single day. So you need some way of filtering what it should create and what not.
The aggregate function is important because it merges all of the individual pages into a single list item that I can later send to the LLM as one single variable. And then I'm using one of the new features in n8n: data tables. These are like simple Excel sheets or simple MySQL/Postgres tables where you can store information, and I'm using one to store the names of the servers that I want to query. Currently I've only added two, but I could add more if I wanted to. I need to somehow tell the workflow how many servers I have and where it should connect to, and I don't want to modify the prompt all the time; therefore, I thought it better to just read this from the table and then use JavaScript expressions to send it into the prompt. That makes it somewhat dynamic: if I add a server here, I don't need to modify the prompt. The workflow then loops through all of these items and executes a command on each server.
More specifically, it executes the health-monitoring-script.sh in the working directory. Of course, you probably want to see this, so let me just execute the script now. Here you can see it captures and gathers a lot of information about the system: system information, CPU, memory, load, disk utilization, partitions over 80% full, top processes, network statistics, running containers with their names and values, container resource usage, problem containers, unhealthy containers, error logs from the different containers, and other systems. You can see there are a few things going wrong here and there, and at the end there is a summary for AI analysis. If you want to take a look at the script, you can see it here; it is a simple bash script. Honestly, I've not written any of this myself: I used Anthropic's Claude Sonnet 4.5 to generate it, and I just tested and reviewed it a bit. It took me maybe two or three minutes to create a simple script that gathers all of these important metrics; another great use case for AI. A condensed sketch of what such a script can look like is below.
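This is not the generated script itself, just a short sketch of the kind of checks it performs, assuming Docker hosts:

```bash
#!/usr/bin/env bash
# Sketch of a server health report for AI analysis (illustrative, not the original script)
echo "=== SYSTEM ==="
uptime
free -h
df -h | awk 'NR == 1 || $5+0 > 80'      # header plus partitions over 80% full
echo "=== TOP PROCESSES ==="
ps aux --sort=-%cpu | head -n 6
echo "=== CONTAINERS ==="
docker ps --format '{{.Names}}\t{{.Status}}'
docker stats --no-stream --format '{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}'
echo "=== UNHEALTHY CONTAINERS ==="
docker ps --filter health=unhealthy --format '{{.Names}}'
echo "=== RECENT CONTAINER ERRORS ==="
for c in $(docker ps --format '{{.Names}}'); do
  docker logs --since 24h "$c" 2>&1 | grep -iE 'error|fatal' | tail -n 3 | sed "s/^/[$c] /"
done
```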
This script is executed via n8n, and the output is sent to the LLM. If you take a look at the prompt, you can see the task: "You are an expert Linux system administrator and DevOps engineer analyzing server health reports." That's already pretty impressive. Then: only report issues that are new, so compare this with the existing ones, and analyze the following server health monitoring output and identify issues. Then comes the output format: again, we need to specifically define an output format using JSON, because otherwise we're not able to parse the response from the AI. I've also created some other rules, like severity definitions (critical, high, medium, low; this part is, of course, also AI-generated), issue type definitions (error and so on), and: if no new issues are found, only already-known, persistent ones, respond with an empty array, so that I can later check in the n8n response whether it's empty and then do nothing. Here is a sketch of the kind of issue list the model is asked to return.
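The exact schema in my workflow differs, but an illustrative response might look like this, with an empty array `[]` meaning "no new issues":

```json
[
  {
    "severity": "HIGH",
    "issue_type": "PERFORMANCE",
    "server": "srv-prod-1",
    "service": "prometheus",
    "description": "Container network I/O is over 300 GB, far above its usual baseline.",
    "recommendation": "Review scrape targets and remote-write configuration for runaway traffic."
  }
]
```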
Another interesting part here: the existing open issues. This uses the aggregated item; if I go back for a second, it gets all the database entries for the existing issues from my Notion database, aggregates them into one list, and then just converts that to a JSON string, so that in the end the LLM sees it as a single JSON object. Then the prompt appends the current server health report, meaning whatever came back from the shell script. A small expression takes care of the serialization.
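This is just a sketch; "Aggregate" is a placeholder for whatever your aggregation node is called, and the `data` field assumes the node's default output field name:

```javascript
// n8n expression (sketch): turn the aggregated Notion pages into one JSON string
// that can be embedded into the prompt as "existing open issues"
{{ JSON.stringify($('Aggregate').first().json.data) }}
```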
So then the AI has all the information it needs to make a quick assessment of the server health and of any issues it might find in the output. And then, of course, I've again defined the structured output JSON schema. I'm using a Split Out node here, which is important because the output from the AI model is just one JSON object, and if you want to iterate over that array to create separate database pages, you have to split it out. The Split Out node turns the list into individual elements that are sent to the following nodes, so one element is created for each issue the AI responded with, and for each of these a new page is created in my home lab tasks with the assignee (my name), a creation date, the information from the output, the logs, the recommendation for fixing it, the severity (so I can set a priority based on the AI assessment), the server name, and the service.
You can see what that looks like here, and again, I will show you some very specific examples. Of course, you'll find some major issues and some lower-severity issues, and sometimes you also get false positives. I don't want to claim this is the perfect solution for administrating your servers without you actually having to do anything; I still think you need manual confirmation or manual investigation of these issues. But for example, if I search for "traefik port missing" here, this is something that came from the response of the script. I can show you the particular response: for Uptime Kuma, I think it was, there is this error message, "uptime-kuma port 1 error, port is missing", and it comes from the Traefik reverse proxy container. So this is an error message; maybe I know what it means, maybe not. But if I go back to Notion, you can see that the AI model provided the error message but also a recommendation, for example: define the backend service port for Uptime Kuma in Traefik. And it also tells me which service on which server was affected; you can see it here, the server is prod-1 and the service is Traefik. So it has basically already solved this issue for me: I only have to go into the Git repository, add the label, and redeploy the service, and it's done. This is a simple issue, and therefore it got a low priority; it's nothing dangerous or critical, but it is a problem that I can easily solve without having to investigate it myself.
Another thing that I found super, super interesting is this one: Prometheus excessive egress traffic. When I saw that, I wondered where the heck it came from, because I didn't see it in any error log, no service was degraded, and everything worked perfectly fine; but it was still raised as high priority, and I wondered why. If you take a closer look, what happened here is the net I/O, the network interface input and output, sent over 300 GB. That is super interesting. Once I looked closer, I noticed that it comes from here: the net I/O, which you can see is usually pretty low, because these containers don't send much data to communicate with each other. But this value completely fell out of line: the Prometheus container somehow moved so much traffic that it didn't seem normal to me. And why I'm so excited about this is simply the fact that as a human network administrator or human sysadmin, you probably would not have noticed this, right? Unless you had specifically defined some threshold values, or you had been watching Grafana dashboards all day, the chance is pretty high that you'd overlook something like this: anomalies, or metrics that feel out of place. And this is where AI is just superior; it can notice these differences much better than a human can.
And therefore I think this is super practical and helpful, because just imagine this happening in a production environment; for example, you're being affected by a DNS spoofing attack, or you've accidentally created an open relay, and you wouldn't notice it, right? I think this is where AI fits in perfectly. So these are the workflows I have created, and guys, I can only say that I'm super impressed with this, and I will keep searching for more use cases where I can use n8n, and especially AI automation, in my home lab to help me fix problems and do tasks more efficiently. If you found this useful or interesting, then please let me know in the comments of this video. If you want to follow up with the discussion, or if you have other use cases that I should take a look at, then also connect with me over Discord. And as always, thank you so much for watching, and thanks to all the people supporting me on Patreon or as YouTube members. You guys are amazing. And of course, I will catch you in the next video tutorial. Take care, everybody. Bye-bye.
Use Hostinger and get 10% off using my coupon code LEMPA via https://hostinger.com/lempa

In this video, I show my best n8n HomeLab automation workflows built with AI agents, including automated service updates with patch note analysis and server health monitoring.

References
- n8n Tutorial: https://youtu.be/VUmo6AviDxQ

________________

💜 Support me and become a Fan!
→ https://christianlempa.de/patreon
→ https://www.youtube.com/channel/UCZNhwA1B5YqiY1nLzmM0ZRg/join

💬 Join our Community!
→ https://christianlempa.de/discord

👉 Follow me everywhere
→ https://christianlempa.de

________________

Read my Tech Documentation
https://christianlempa.de/docs

My Gear and Equipment
https://christianlempa.de/kit

________________

Timestamps:
00:00 Introduction
02:25 What is the plan?
03:08 How to automate updates with AI?
08:10 Best Practices for Input Qualification
13:00 Best Practices for Output Validation
15:05 Use Tools
16:35 Structured Output Parser
18:56 Examples for automated updates
21:45 How to use AI for monitoring
24:37 Input Qualification for existing items
25:24 DataTables
26:20 Monitoring Script and Prompt
30:02 Examples for monitoring issues
33:39 Final thoughts

________________

Links can include affiliate links.