If you've always wanted to build your
own MCP server, but were intimidated at
the idea of doing it, this video is for
you. I'm going to show you how to spin up an MCP server that you can generate using Claude Code and import and use in Claude Desktop immediately. In my last video, I walked through, conceptually, the three easiest ways to use MCPs right now.
But in this one, I'll actually give you
what you need to go and start building
your own. The thought of building your
own server might sound intimidating, but
once you see my optimized workflow for
getting something up and running in a
very short amount of time, you'll be
building tons of MCP servers by the end
of this video. If you're curious, let's
dive right in. All right, so this is
Claude Code, and all I've done is create a brand new folder called my-mcp up here.
And you'll notice I have three files. I
have one that's called MCP 101. I have
another one that's called MCP complete
implementation guide. This is something
that I put together. And I have one
existing markdown file that's called
lazy calculator. So let me explain
really quickly what's in each file,
where I got it from, and why we're even
using it. So the very first file here,
if I double click, is called MCP 101. So
this is actually coming from an online
resource, a free resource that basically
acts as a language model cheat sheet of
how to use MCPs and where they can or can't be used. So, if you go into Google and you type in model context protocol llms full text, you'll get something like this,
which is a full text file. If you click
on it, it's literally plain text
explaining everything. And you can zoom
in right here. It has examples of
different clients or different platforms
and exactly what can or can't be done
using MCP servers. So, it goes through
methodology, it goes through logic, it
goes through prompt engineering. You can see right here different features, like how built-in MCP servers can quickly be enabled and disabled. So giving this to a language model gives it a really good primer to go from having a bachelor's degree in
Python to having a master's in using MCP servers. And if we do a Ctrl-F and search for Claude Code, you can see it knows exactly what Claude Code is and where it's sourced. If we zoom in, it tells the model what Claude Code is, and you can find passages specifically referencing what can be generated in Claude Code using that tool.
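If you'd rather script it than copy-paste from the browser, a minimal sketch for pulling that cheat sheet into your project might look like the snippet below. The URL is my assumption of where the llms-full text lives; double-check it against whatever the search result actually points to.

```python
# Sketch: save the MCP "llms full text" cheat sheet locally as project context.
# The URL below is an assumption; verify it against the page you find via search.
import urllib.request

URL = "https://modelcontextprotocol.io/llms-full.txt"  # assumed location

with urllib.request.urlopen(URL) as resp:
    text = resp.read().decode("utf-8")

with open("mcp-101.md", "w", encoding="utf-8") as f:
    f.write(text)

print(f"Saved {len(text):,} characters to mcp-101.md")
```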
So this file is the first piece of the pie. The second one is the complete implementation guide, which I created using an AI agent that went out to
scrape the web for documentation,
YouTube videos, transcripts, and
anything that was highly rated related
to MCP servers. And this goes through
what an MCP server is, three pillars of
MCP, different ways to build it,
different frameworks to use to build it.
So this gives you, if not a cheat sheet, then a full implementation guide. When you get stuck, instead of having to refer to the web and having Claude Code do a web search, you can just tell it to go refer to these two files and try to see what might be happening. And my
last file here is called the lazy
calculator. Now, I showed this as a
small example in my last video at the
very end, but basically it is a
glorified, really bad calculator. And the reason I include it here is that once you've built one MCP server that works, you can use it as an example for the next ones you want to build. So, let's say I had
five MCP servers. I could throw in
another two or three MD files. And what I usually do is, after building a server, ask Claude Code to go and summarize everything that we went through to get to this point, so I can use it as a pointer guide for the next agent that I spin up to build another MCP server. And
using this cheat code will let you go
from hours to minutes because you have a
clear-cut example of what has worked on
your specific device. And I say your
specific device because when you build
an MCP server and you want it to run locally for, let's say, Claude Desktop, you might be on Windows, you might be on Linux. I'm personally on a Mac from 2023. So each system is going to have
small permutations where something does
or doesn't work. Once you finally get
something to work, make sure you reverse
prompt and document it as an artifact so
your system can always refer to it to
avoid making the same mistakes over and
over again.
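To make the lazy-calculator idea concrete: a minimal example server of that kind, written against the official MCP Python SDK's FastMCP helper, can be as small as the sketch below. The server name and the single add tool here are illustrative, not the exact file from the video.

```python
# Minimal sketch of a "lazy calculator" style MCP server using the official
# MCP Python SDK (pip install "mcp[cli]"). Names here are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("lazy-calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Lazily add two numbers and return the result."""
    return a + b

if __name__ == "__main__":
    # stdio is the transport Claude Desktop uses for local servers
    mcp.run(transport="stdio")
```

Once one of these works on your machine, that file plus the summary Claude Code writes about it becomes the pointer guide for the next build.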
So in that case, let me show you how we can create another MCP server, using all this documentation to help us out. I'm going to use my shortcut command: this is my shortcut for running Claude Code dangerously, in YOLO mode, with permission prompts skipped. I
don't recommend this if you're a newbie,
but I'm just doing this for the speed of this tutorial. Then it spins up. And what I'm going to do is run /init.
And what /init does is it will allow Claude Code to become familiar with all the resources I've laid out in this codebase and create what's called a CLAUDE.md file, which will act as the command-center brain for operating this entire chat moving forward. So I'll
click on this and we'll come back to
when it's done. All right. So after a
few minutes it's completed its analysis.
It's created this MD file to walk itself
through all the files we have here and
when it can refer to them. You can see
right now it's referencing the project
structure outlined in the lazy
calculator. And now that we have this, I
can go back up and let's say we want to
make a brand new function. And this function is going to act as a prompt auditor. So we send a request and it goes to a language model. In this case, we can use something like GPT-5 Nano, and it takes that request and makes
it into a proper prompt. Now, could you
do this in a language model itself? Yes.
But for the purpose of showing you the
tutorial, we're going to go with this.
So, what I'll do is I'll go into
planning mode. I'll do Shift+Tab to go to plan mode. I'll tell it exactly what I'm trying to do, and I'm going to provide it with the ability to look at the documentation for GPT-5. So, let me grab that from the OpenAI website: for GPT-5, there's an example of exactly how to call their API. So, I'm just going to copy that over, and then I'll keep it handy for after I dictate.
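For reference, the example I'm copying out of the OpenAI docs looks roughly like the sketch below. Treat the model name and the Responses API call shape as my approximation of what the GPT-5 Nano documentation shows; follow whatever the current docs actually say.

```python
# Approximation of the OpenAI docs example for calling GPT-5 Nano via the
# Responses API. Model name and parameters should be checked against the docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5-nano",
    input="Write a one-sentence bedtime story about a unicorn.",
)
print(response.output_text)
```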
Okay. Okay, so I'm trying to create a
brand new MCP server that I want to call
prompt assistant. And the whole point of
this MCP server is when someone submits
a query and invokes this function, it will take the prompt specified in their query, which should basically be provided with a variable called prompt equals; the prompt that comes after that is the part we want to audit. We want to take that
and make it into a much more robust
version of the prompt using a language
model that you're not aware of because
your training cutoff was in 2023. It's called GPT-5 Nano. I'm going to provide
you the documentation on how we're
planning on using this. Obviously,
you're going to have to spin up
something like an environment file for
me to put my API key. But once it's
there, I want you to use this function
or this way of calling a language model
specifically. You might default to GPT-4 because that's what you know, but you're going to use this no matter what. So,
come up with a plan of creating the
system that sends the initial prompt
from wherever we're using this MCP
server to this language model and then
create a prompt in this language model
to take an incoming prompt and act as a
prompt engineer and make it a way better
prompt and then return that prompt in
Markdown. So, we'll send that over.
There we go.
It'll take a few seconds. Now I'll just paste the documentation right after it. I'll say: here's the GPT-5 Nano way to call the API. Then I'll paste this, and I'll keep it in plan mode so that I can audit what it comes back with in terms of a possible solution. So it comes back with a full structure for the project. I can see it's using GPT-5.
This is the structure of the server: we have the prompt assistant server, we have the run-prompt-assistant script, and we have the test_prompts.py file.
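Before approving, it helps to know roughly what that main server file should end up looking like. Here's a hedged sketch of a prompt-assistant server built on FastMCP that calls GPT-5 Nano; the tool name, system prompt, and the Responses API parameters are my assumptions, not the exact code Claude Code generates in the video.

```python
# Sketch of a prompt-assistant MCP server: takes a raw prompt, asks GPT-5 Nano
# to rewrite it as a stronger prompt, and returns the result in Markdown.
# Tool name, prompt wording, and API parameters are illustrative assumptions.
import os

from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("prompt-assistant")
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # loaded from your .env

SYSTEM = (
    "You are a prompt engineer. Rewrite the user's prompt into a much more "
    "robust, specific version and return it formatted in Markdown."
)

@mcp.tool()
def enhance_prompt(prompt: str) -> str:
    """Send the incoming prompt to GPT-5 Nano and return the improved version."""
    response = client.responses.create(
        model="gpt-5-nano",
        instructions=SYSTEM,
        input=prompt,
    )
    return response.output_text

if __name__ == "__main__":
    mcp.run(transport="stdio")
```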
This looks decent to me, so I'll click on bypass permissions, and
then it will go and put together a draft
of this. And then the next part is the
the lazy part, where instead of going into Claude Desktop and testing it myself, I'm going to go into Claude. So I'll pull this up while this is running. So let me pull up Claude. And you'll see when it pops up, we're going to have a normal box here. What I want to do is click on allow. And then we want to go into settings, then go to Developer, and then we can go to edit config. And in this case, you get routed to the Claude folder. So what you can do here, big cheat code, is you can right-click and then copy the path. In my case, I'd have to hold Option, and when you copy the path to this specific file, you can also copy the path to the logs folder in Claude. And what the logs folder does is store all the errors.
So instead of you manually opening
Claude, closing it, restarting it,
praying it works, you could give it the
two file paths and tell it, you know
what, once you think you're done, go and
open Claude yourself, see if you get
errors in the logs. If you get errors in
the logs, go and fix those errors and
try again until it works. So now you create this feedback loop where Claude Code has its own stimuli to interact with Claude Desktop until it's ready to go. And this is a cheat code in terms of time.
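As a point of reference for what "the two file paths" are: on macOS, the Claude Desktop config typically lives at ~/Library/Application Support/Claude/claude_desktop_config.json and the logs under ~/Library/Logs/Claude; both locations differ on Windows and Linux, so copy the paths from your own machine as shown above. The sketch below shows roughly how the new server gets registered in that config; the config path, server name, and script location are illustrative assumptions.

```python
# Sketch: register the new server in Claude Desktop's config. The path assumes
# the macOS default; the server name and script location are illustrative.
import json
import pathlib

config_path = (pathlib.Path.home() /
               "Library/Application Support/Claude/claude_desktop_config.json")
config = json.loads(config_path.read_text()) if config_path.exists() else {}

config.setdefault("mcpServers", {})["prompt-assistant"] = {
    "command": "python",
    "args": [str(pathlib.Path.cwd() / "prompt_assistant_server.py")],
}

config_path.write_text(json.dumps(config, indent=2))
print("Registered prompt-assistant in", config_path)
```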
So, I'm already going to
queue up my next prompt here, where I tell it, "Cool, you said you're done." Because it's probably going to come back and say it's done and it works, just like it always does. I'm going to paste the two paths and tell it: open Claude Desktop and test it yourself until you see there are no errors whatsoever.
So, it comes back saying it's done. And
naturally, I don't trust it. So, I did
put my API key in this environment file
right here. And then once I did that, I also said: go and open up Claude Desktop and test it yourself using these file paths. If you get errors, find them in the logs folder and fix those mistakes until it opens and runs error-free. Then I give it the links to both of my paths. So I'm going to send this over, and it should autonomously open up Claude from the command line and see whether or not there are errors.
So, while it's doing this, it should be able to open up Claude Desktop on its own just using the terminal, checking whether or not there are any errors in the logs associated with opening it up, and if there are, it should be able to go into that feedback loop until it assesses that at least there aren't any errors. Now, whether or not it will functionally work, we'll have to test ourselves. So you can see here it's gone into my claude_desktop_config.json. It's edited the MCP servers it has, adding the new MCP server, and then it's closing the file and opening the brand new environment. All right. So now
it says the server started without any
errors. Now it's going to verify, restart Claude Desktop, and check the logs again. So it sees the MCP servers are active. It's going to check whether or not the prompt assistant now pops up in the logs, or if it's somehow being ignored completely. All right. And after it found its own errors, it was able to restart Claude Desktop. And when I go into Claude Desktop, I can now see right here
we have a few tools. One of them is our
brand new prompt assistant. So let's
take it for a try. Let me click on this.
Let me see what tools we have. So
enhance prompt and test connection. So
I'm going to say, okay, I want to be
able to adjust this prompt and make it
better. So I'm going to write that. I'm
going to say prompt equals,
can you write me an essay about why AI
is cool? Okay. And I send that over. And
I send this as a request. Fingers
crossed. It should invoke this function. It asks me for permission. And then we'll
be able to click on always allow. There
we go. Allow always. And we'll see what
we get. All right. And it seemed to
work. So first it used the enhance
prompt. It sent the query that we had right here, which was: can you write an essay about why AI is cool. It comes back with the enhanced prompt in Markdown, like we asked. It's using GPT-5 behind the scenes, Nano specifically. Then it transforms it: it uses Claude to basically go through the response, and we come back with a better version of the prompt that we originally asked for. Now, one small thing is that the MCP server did return the response, but Claude didn't actually give it back to me. So I just told it: can you just output what you outputted in that small little dropdown?
And then we have the adjusted version of
that prompt right here. So in, what, under 10 to 15 minutes, we were able to go from a singular prompt, have it go through a feedback loop to check its own work, to having a functional MCP server that you own, where you own the IP and the infrastructure, on your local computer.
And that's pretty much it. This was
meant to be a quick and snappy tutorial
to break a bunch of limiting beliefs and show that you can do this even if you're not technical and even if you're not a
developer. So, what I'll do to make this
as easy as possible for you is I'll provide you with the files here, the MCP 101, the implementation guide, as well as the lazy calculator, in the second link in the description below, to get you started on your journey to
building your own MCP empire. And if you
like the way I break down concepts into
digestible hacks, then you'll love the
infinite amount of hacks that I share
with my early AI adopters community.
Check out the first link in the
description below and maybe I'll see you
inside. I'll see you in the next one.
Join My Community to Level Up ➡ https://www.skool.com/earlyaidopters/about
🚀 Gumroad Link to Assets in the Video: https://bit.ly/46lB8d5
📅 Book a Meeting with Our Team: https://bit.ly/3Ml5AKW
🌐 Visit Our Website: https://bit.ly/4cD9jhG

🎬 Core Video Description
Want to build your first Model Context Protocol (MCP) server without getting lost in docs? In this practical 13-minute build, I show you exactly how to spin up a working MCP server with Claude Code and use it in Claude Desktop immediately. You'll see my optimized workflow end-to-end: seeding Claude with a lightweight "MCP 101" cheat sheet and a full implementation guide, bootstrapping a new project with /init to create a CLAUDE.md "command center," planning in Plan Mode, and wiring a real tool (Prompt Assistant) that calls GPT-5 Nano to upgrade any prompt, then validating it with a log-driven feedback loop so Claude tests and fixes its own setup. By the end, you'll have a reusable template, a reverse-prompting artifact for your specific machine, and the confidence to ship more MCP servers in minutes, not hours.

⏳ TIMESTAMPS:
00:00 – Intro: Build an MCP server even if you're not a dev
00:14 – What we're building: Generate in Claude Code, use in Claude Desktop
00:38 – Project setup: my-mcp folder & the 3 starter files
01:04 – MCP 101 cheat sheet: Why and how to feed Claude context
01:41 – Touring the MCP 101 resource: Clients, limits, and examples
02:21 – Complete Implementation Guide: Docs, videos, transcripts → one source
02:58 – "Lazy Calculator" example: Reverse-prompting artifacts for your machine
03:45 – OS quirks & why to document what actually worked locally
04:32 – Kicking off the build: YOLO shortcut + /init
04:58 – Claude's CLAUDE.md "command center" explained
05:18 – New server plan: Prompt Assistant using GPT-5 Nano API
06:24 – Plan Mode output: Project structure & files to generate
06:44 – Automating tests: Claude Desktop config + logs feedback loop
08:18 – Add server, restart, verify: Reading logs until clean start
09:10 – Live test in Claude Desktop: Enhance Prompt & connection check
09:50 – Result: Markdown-ready upgraded prompt (and small display fix)
10:30 – Recap: From idea → working MCP in ~10-15 minutes
11:30 – Resources: MCP 101, implementation guide, and example server
11:45 – CTA: Join Early AI-dopters + outro

#MCP #ModelContextProtocol #ClaudeCode #ClaudeDesktop #Anthropic #PromptEngineering #AIEngineering #APIDevelopment #Automation #LocalFirstAI #GPT5Nano #OpenAI #DevTools #AIAgents #Python