As you all probably know by now, I kind
of love bullying Anthropic. As great as
their models can be, they have their
weird behaviors and things that I just
don't love about how the company
operates. One of those things in
particular is the way they treat open
source. I've never been super fond of
it. But they recently acquired Bun, the JavaScript runtime that I dearly love, both the runtime itself and the team building it.
And it seems like with this acquisition,
a new focus on open source is happening,
which is why this announcement, as crazy
as it is, is actually quite cool. Anthropic just donated the Model Context Protocol to the new Agentic AI Foundation. If you're curious what this
is, spoiler, it's a new fund that was
just formed under the Linux Foundation.
Yes, really. The Linux Foundation are
now the owners of the model context
protocol. This is a fascinating new
development. both the creation of this
new agentic group within Linux
Foundation, but also MCP becoming a more
open standard that exists in an external
entity instead of just being part of
Anthropic. This is super super exciting
stuff. Projects like this are only possible because well-funded companies like Anthropic, Block, OpenAI, Google, Microsoft, and Amazon support them. But
we need support too. So let's do a quick
word from today's sponsor. Today's
sponsor is different and I want to give
you guys a bit of history on them. They
started their journey as the best
possible API for giving context on your
codebase to different AI agents. So if
you're trying to build your own AI that
deeply understands your codebase, their product is built around, like, a modern graph.
It turns out that's useful for a lot of
other things though. In particular, code
review. If an AI agent knows everything
about your codebase, it can actually
give you really good feedback when
you're trying to change it. The sponsor
is Greptile and I want to show you what
this looks like in practice. This is a
code review on a real repo. It's PhysicsNeMo by NVIDIA, like an actual real, big, legit project. Greptile left a review
here. Gave it a low confidence score, a
three out of five. But god, I love this
feature so much. The way that it will
tell you how it feels overall, it's just
so clear and easy to know like, do I
need to give this another look or not?
Confidence score is a three, but the
reason why is where it gets interesting.
Greptile called out that there is a
precision calculation issue on this
particular line. The piece that's
interesting here is it can only know
this with that codebase context. that
the way that these values are calculated
other places is to put both of these
values underneath when it's doing this
division. One of the maintainers claimed
otherwise saying that the true positives
over predicted positives can be greater
than one, which is correct in their understanding. Greptile responded, which is what makes it so different: Looking at
your concern, I need to clarify the
issue. The formula on line 200 is indeed
wrong but true positives over predicted
positives cannot be greater than one
when calculated correctly. The problem's on line 196: it's calculating the count of all true class-zero labels, which should be called actual positives, not true positives. For precision, true positives is the intersection. You get the idea.
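To make the distinction concrete, here's a tiny sketch (toy data I made up, not from the PhysicsNeMo repo) of why mislabeling actual positives as true positives can push a "precision" above one:

```typescript
// Toy labels, invented for illustration (class 1 = positive).
const actual = [1, 1, 1, 0];
const predicted = [1, 0, 0, 0];

// True positives are the intersection: predicted positive AND actually positive.
const truePositives = actual.filter((a, i) => a === 1 && predicted[i] === 1).length;
// All predicted-positive labels (the denominator of precision).
const predictedPositives = predicted.filter((p) => p === 1).length;
// All actually-positive labels -- what the flagged line was computing
// and mislabeling as "true positives".
const actualPositives = actual.filter((a) => a === 1).length;

const precision = truePositives / predictedPositives;    // 1/1 = 1, can never exceed 1
const buggyRatio = actualPositives / predictedPositives; // 3/1 = 3, clearly not a precision

console.log(precision, buggyRatio); // 1 3
```

Real precision is bounded by 1 because the numerator is a subset count of the denominator; the buggy version isn't, which is exactly what the reviewer caught.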
It's a really good fellow code reviewer
and as we can see here, Kelvin confirmed as much: Turns out I was wrong and overlooked the true positives as the intersection. I fixed this and also flipped the positive class to label one. There
you go. If all of your code reviews are
still being done by humans, you're wasting all of those humans' time. Pull in Greptile today at soydev.link/greptile.
Donating the Model Context Protocol and establishing the Agentic AI Foundation.
Today we're donating the MCP to the
Agentic AI Foundation, AIF, a directed
fund under the Linux Foundation, which
is co-founded by Anthropic, Block, and
OpenAI with support from Google,
Microsoft, AWS, Cloudflare, and
Bloomberg. One of these things is not
like the others. Bloomberg's code
discipline in particular is actually
really good. Their TypeScript stuff is
incredible. I thought MCP was a little more than a year old. Is that really right? November '24. Wow. Yeah, it's one year later. That's crazy. Almost to the dot. One year ago, Anthropic introduced MCP as a universal open standard for
connecting AI applications to external
systems. Notably, "connecting to external systems," implying that the server you're connecting to is expected to be live and maintain a connection during the entire session, which makes it really annoying to deploy and adopt. So, a couple of cool new companies are working on figuring out solutions to that now. But yeah, the
way MCP is implemented is really focused
on making the agent side as easy as
possible, not on making the developer
implementation side on the other end as
easy as possible or really viable at
all. MCP's achieved incredible adoption.
There are over 10,000 active public MCP
servers covering everything from
developer tools to Fortune 500
deployments. Across platforms, MCP has
been adopted by ChatGPT, Cursor, Gemini,
Copilot, Visual Studio Code, and other
popular AI products. Enterprise grade
infra now exists with deployment support
for MCP from providers including AWS, Cloudflare, Google Cloud, and Azure. It
does make sense that the big clouds
would want to be part of this. It's also
nice having them supporting Linux
Foundation more. If you're not already familiar with the Linux Foundation: no, they're not like the people who make Linux. It's a little more complex than that. The Linux Foundation is an attempt to take a lot of, like, open-source work
and orchestrate it while also getting
money in memberships from big companies
to try and make sure the direction of
all of these important big open source
projects continues to make sense for how
everybody is using them. They have a
fascinating set of projects including, of course, the Linux kernel, RISC-V, and PyTorch. So the things underneath them
are really important and fascinating.
You guys remember the chaos around Redis? Remember when they changed the license and suddenly all the other providers that hosted it had to, like, not do that or use an old version? When that happened, both AWS and Microsoft put a lot of money in to fund the development of a fork called Valkey that included a bunch of the original maintainers of Redis as well as people maintaining it at Azure and at AWS. Then, instead of trying to run this independently and separately, they donated it to the Linux Foundation, alongside, I'm sure, more donations. The result being the Linux Foundation now owning a good open standard for Redis-style use cases that is truly open and will stay open. I
don't think there's any example of
something that went to the Linux
Foundation to be a well-maintained open
source project then locking down which
is a big reason why Anthropic would be
interested in this. There's always fear
that an open standard could get closed
and the way you counter that is going
really far alongside it which is what
happened here. Since there was concern about Redis changing its license, and the rug pull that can occur, the alternative isn't another thing that can be rug-pulled; it's to throw it at the Linux Foundation so it will be there, hypothetically speaking, forever. A lot of the things Linux
Foundation maintains are things that
just have existed forever and probably
will as well. It's also worth noting
that the Linux Foundation has other sub-foundations that they help operate as well, like the Cloud Native Computing Foundation, the OpenJS Foundation, the Open Source Security Foundation, and more. There's a lot of these foundations that they help manage and run. Is OpenSearch their attempt to do, uh, Elasticsearch? Because I know they had an Elasticsearch alternative at some point as well. Regardless, a new one has just
been dropped, which is the Agentic AI Foundation, one of the Linux Foundation projects. The AIF provides a
neutral open- source foundation to
ensure the critical capability evolves
transparently, collaboratively, and in
ways that advance the adoption of
leading open source AI projects. It is
kind of funny that the Linux Foundation's Agentic AI Foundation is funded by the companies doing all of the closed-weight models. Like, has Anthropic released any open-weight anything in the last, like, three to four years? I'm pretty sure no. OpenAI just recently did one, but you would expect a Mistral or DeepSeek to be here; they also don't have the funding to justify it. It is
cool to see though. They now have MCP as
well as Goose, which is an open source
accessible AI agent that goes beyond
code suggestions. Interesting. And
AGENTS.md they also own. Very interesting actually. I did not realize that AGENTS.md was big enough to go to a foundation like this, but that does make sense. And I've seen more and more cool uses of AGENTS.md files. So, this is
cool. Oh, interesting. Goose is by
Block. If you're not familiar with
Block, they are a company that owns a
lot of different things. They own
Square and Cash App. They bought Tidal, which, uh, is close to my heart, as well
as a few other things. It was founded by
the original founder of Twitter. Coolish
company. No issues with them. It's cool
seeing them doing more open source stuff
like this. Wait, no way. Yeah, I know
Roselle. She's great. That's so cool. I
had no idea she was at Block now. That
makes a ton of sense. Good for her. Good
stuff. Local AI agent autonomously
engineering tasks seamlessly. This is a
good set of things. It's cool to see
something like this forming. Now, it's a
weird set of projects. You have a
markdown file standard. You have an
actual big open-source AI agent toolkit
and then MCP which is a protocol for
interfacing with agentic stuff. Very
interesting set of things especially
when you look at the set of members that
they have. The Block membership makes much more sense now. Seems like they kind of want to pawn Goose off there, but also the Block founders have always been very, very pro-open-source.
So, it does make a lot of sense there.
Back to the article.
Oh, they have a whole fancy little
timeline of the history of MCP here.
Early adoption from cursor of the AI
tools back in December to February. MCP
was presented at the AI engineering
summit. There were 2,000 public MCP servers at the time. A second spec came out which included enterprise auth, HTTP streaming, and multimodal content, but the enterprise auth solution they picked was lackluster at best. It's my understanding that they used an existing auth standard that had never really been implemented by anyone. That kind of sucks, and as such it was not
something many people did. It's also
funny that a thousand people registered for the MCP night in San Francisco
because that's happening again tonight
and I will be there. That's why I'm
ending stream early. At this point, they
had 4,000 public MCP servers. Then June
to July, the third spec came out with
structured outputs, which is essential
to have the MCP actually output things
into a consistent shape. Interactive
prompts as well as hardened security.
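Structured outputs, roughly, let a tool declare a JSON Schema for its result and return a matching `structuredContent` object alongside the usual text blocks. A sketch of the shapes involved (the `get_weather` tool and its fields are invented for illustration):

```typescript
// A tool definition advertising an output schema (per the structured
// output additions to the spec); the tool itself is hypothetical.
const toolDefinition = {
  name: "get_weather",
  inputSchema: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  outputSchema: {
    type: "object",
    properties: { tempC: { type: "number" }, summary: { type: "string" } },
    required: ["tempC", "summary"],
  },
};

// The tool result carries machine-readable structuredContent that should
// match outputSchema, plus the human-readable content blocks.
const toolResult = {
  structuredContent: { tempC: 18, summary: "overcast" },
  content: [
    { type: "text", text: JSON.stringify({ tempC: 18, summary: "overcast" }) },
  ],
};

console.log(toolResult.structuredContent.summary); // overcast
```

The point is that the consuming agent can rely on a shape instead of parsing freeform text out of the tool's reply.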
They also introduced the MCP directory
and 2,000 people took part in the agents
hackathon. Then enterprise MCP hosting
started happening on Google Cloud, Azure
and Cloudflare. MCP DevSummit sold out
in London. This is also around when we started to see the pushback, when the idea that MCP is just how models will do application development and application-style work started to make less and less sense. And that's
when I did the videos about the
Cloudflare article about turning MCPs
back into code and then letting the
agents write code to execute instead.
Watch those videos if you haven't and
you want to better understand the things
I don't like about MCP as well as how we
can take advantage of the standard more
effectively. This is interesting though.
I also don't think the growth is that
impressive from like the first two
months they got 2,000 servers and then
over the next 10 months they got another
8,000. That's relatively linear growth.
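The linear-growth read checks out if you do the per-month math on the timeline's numbers:

```typescript
// Server counts from the timeline: 2,000 in the first ~2 months,
// 10,000 total roughly 12 months in.
const earlyRate = 2000 / 2;            // ~1,000 new servers/month early on
const laterRate = (10000 - 2000) / 10; // ~800 new servers/month after that

// A roughly constant (even slightly declining) monthly rate is linear
// growth, not the exponential curve a real hype cycle would show.
console.log(earlyRate, laterRate); // 1000 800
```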
I'm once again going to run a poll. Do
you use MCP? Yes, a lot. Greater than
five servers? Yes, a bit less than five
servers. Not really. Nope. I'll change
it to tried but churned. I should have put a "just Context7" option. I'll do another poll after for that. Yeah, I gave up on Context7, which is the only MCP I was trying to use, after the last poll in chat. According to my audience, 56% of
people don't use MCP, 16% tried and
churned, 25% use it a bit, and 4% use it
a lot. It's kind of crazy how close the
a bit of use and tried and churned are,
but this is my experience, too. I just
have not found many use cases for MCP in
my day-to-day usage. Yeah, the use case
I see for it, funny enough, is much
stronger locally. I've yet to find many
MCPs that are useful if you run them
remotely. Like there's the Mac MCP that lets you control your Mac. Of course, Supermemory are the ones doing it. The Mac MCP is pretty cool. The point
of it is that you can control features
on your Mac via MCP. There's a bunch of
TypeScript code written to do things on
your Mac exposed over a local server
that your AI agents can call and do
things like send texts or update your
notes, touch your contacts. And here you
can see in Daria's example, he tells it
to send a message, this is a test to
somebody. And now it's running behind
the scenes on his Mac doing the things
that he requested, which is pretty cool.
But those are the use cases that I think
are the most valid right now due to the
strangeness of how the protocol is
implemented. There's also like the
Ableton MCP. If you're a music nerd,
it's pretty cool. Ableton MCP is another
very fun one. It's for controlling the
Ableton Live digital audio workstation.
It's a very common tool for producing
music. And Ableton MCP lets you modify your project in Ableton using the Model Context Protocol. But once again, since you have to run a full dedicated server for this to work, you have to set up the Smithery CLI, install this MCP, and specify the client as Claude. You probably have to install uv as well, because it's all a Python server, and then spin it up and run it locally in the background, because, like, MCP can't execute things that it doesn't have. It
is effectively an HTTP server that is
exposing specific behaviors to a model
and allowing for it to stay connected to
this server so it can do things and get
data and make changes. Another notable
thing about these examples is none of
them require complex auth stories, because auth is one of the most annoying nightmares across all of these things. I don't even know if you can auth an MCP server in Cursor. Oh, fun. You have to hardcode
the tokens or you can pull them from
environment in the definition that you
paste in for the project. Fascinating.
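For reference, pulling a token from the environment in that pasted definition looks roughly like this (the server name, command, and variable are invented; the general `mcpServers` shape is the one used by Claude Desktop-style configs):

```typescript
// Rough shape of an MCP server entry you'd paste into a client config.
// The token is either hardcoded here or read from the environment when
// the config is assembled -- there's no real auth flow involved.
const mcpConfig = {
  mcpServers: {
    "my-server": {
      command: "bun",
      args: ["run", "server.ts"],
      env: { API_TOKEN: process.env.API_TOKEN ?? "hardcoded-token" },
    },
  },
};

console.log(mcpConfig.mcpServers["my-server"].command); // bun
```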
Yeah, it's cool. They do support it now.
You also have to manually toggle the
features that you do and don't want it
to have access to on and off. Fun. You
can probably tell I don't use MCP in Cursor a whole bunch, because I don't use MCP a
whole bunch. You can also return images
as long as they're base64. Fun, man.
What an interesting standard. Back to
the article, because there are more important pieces here. We're continuing to invest
in MCP's growth. Claude's directory has over 75 connectors, which are powered by MCP, and they recently launched the tool search and programmatic tool calling capabilities in their API. I like that they worded
this carefully so they don't have to
call it the tool search tool as they
have been constantly. They did tool
search and programmatic tool calling
capabilities which helps with production
scale MCP deployments handling thousands
of tools efficiently and reducing latency in complex agent workflows.
Yeah, that's a big step. It feels like
Anthropic recognizes that improving the
standard is not necessarily within their
area of expertise and instead of trying
to patch it again and again, it is much
more efficient for them to hand it off to a proper open standards committee. I know design by committee is terrible, but for standards that are going to be adopted by many, it makes a lot more sense. Make the standard external so it can be iterated on, developed, and owned safely, openly, and externally, and
continue building the new features on
top. It definitely seems like their
recent interest has less been in
changing how MCP works and more in
building things around it. Like if we go
back to the timeline, they got early
adoption, they presented it, they
released the second spec which included
the O stuff. Then in June and July, they
did the third spec with structured
outputs, interactive prompts, and hardened security. And since then, a lot of the
new stuff isn't really part of the spec.
Like discovery isn't really part of the
spec for MCP. Async tasks are, but they're more like an implementation detail on the consuming side. And agentic
sampling, I don't even know what they
mean by that. Interesting. Sampling allows the server to send a request back to the model to do a thing. So if you're using Opus 4.5, and it uses the MCP for, I don't know, doing a query against your database to get some data, and then your code realizes (because you wrote the code this way) that it needs the right SQL generated, it can forward that request back to the model to generate the SQL string it has to run. I've never seen anyone using
this. This is really interesting too.
There's a hints section, which is an array of arbitrary JSON objects by the looks of it, but separately there are these intelligence priority and speed priority things that are top-level keys, which means they're probably part of the spec, man.
Yeah, you can see the chaos that is this diagram. The server initializes the sampling with a sampling create message. A human can review that request; they're presented the request for approval. They can request a modification or just approve it, and then it forwards that request to the LLM, which returns a generation. You can review the response. The response is presented to the human for approval, and if they approve it, it gets sent back to the client, which then sends it back to the server to continue. Man,
when I say it feels like all of this is
chaotically overengineered,
do you understand?
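For what it's worth, the server side of that dance is a single `sampling/createMessage` request sent back to the client; everything else in the diagram is approval and forwarding. A sketch (the prompt text and SQL are invented; the field names follow the sampling section of the spec):

```typescript
// Server -> client: "please have your model generate this for me."
// The client shows it to the human for approval before the LLM sees it.
const samplingRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "sampling/createMessage",
  params: {
    messages: [
      {
        role: "user",
        content: { type: "text", text: "Generate the SQL for this query plan." },
      },
    ],
    maxTokens: 256,
  },
};

// Client -> server: the approved generation, after a second human review.
const samplingResult = {
  role: "assistant",
  content: { type: "text", text: "SELECT id, name FROM users;" },
  model: "claude-3-sonnet", // whichever model the client actually chose
  stopReason: "endTurn",
};

console.log(samplingRequest.method, samplingResult.stopReason);
```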
It is also kind of chaotic that the client in this case is something like the Claude desktop app, or, if you use Cursor, the server is sending a request to the client that the user then approves so the client can then resend it to the LLM. This weird graph of where everything lands is just strange. It can also request images and audio back.
This feels like a standard looking for a
problem still? Like who's going to
implement this? What tools exist right
now where if I wrote my MCP server to
then request an image back to the client
that will handle that properly? I just
don't think there's going to be many.
Oh, and then yeah, these are standard: cost priority, speed priority,
and intelligence priority. Cost
priority. Higher values prefer cheaper
models. Speed higher prefers faster. And
intelligence, higher prefers more
capable. And the values, by the way, are
between zero and one. Great. Then the
hints allow servers to suggest specific
models or model families. Again, they
had claude-3-sonnet and then claude as a
fallback.
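Those knobs live in a `modelPreferences` object on the sampling request. A sketch of the shape as described here (values are 0 to 1; the hint names echo the claude-3-sonnet/claude fallback example):

```typescript
// modelPreferences: servers suggest, clients decide. Hints are ordered
// model name/family suggestions the client is free to ignore.
const modelPreferences = {
  hints: [{ name: "claude-3-sonnet" }, { name: "claude" }],
  costPriority: 0.3,         // higher = prefer cheaper models
  speedPriority: 0.8,        // higher = prefer faster models
  intelligencePriority: 0.5, // higher = prefer more capable models
};

// All three priorities must sit in [0, 1].
const valid = [
  modelPreferences.costPriority,
  modelPreferences.speedPriority,
  modelPreferences.intelligencePriority,
].every((v) => v >= 0 && v <= 1);

console.log(valid); // true
```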
Yeah, interesting. So, yeah, that is a new part of the standard. It's not one I've heard anyone talking about, but they did add something. But it does
generally feel like the standard in the
spec has less been Anthropic's focus and
more the things they're building around
it, which makes the punting of it off as
they build new tools on top of it make
more and more sense. It is also like to
be fair a really good faith play to say
we're not just building MCP for our own
use cases and tricking you all to build
on top of it. They do clearly want MCP
to be a winning open standard. I have no
reason to believe that any of their way
of doing this is malicious. I still have
my skepticism around MCP, but I don't
think they're doing this because they're
dumb or evil. I think they do genuinely
believe this is an important thing to
exist and that it is the best chance we
have at an open standard for AI to do
things on real systems. Still kind of crazy that they're making this so open while Claude Code is still closed source, and they've sent as many DMCAs as they have to GitHub about people who shared the source maps for Claude Code when they leaked them in their own package when they deployed it. Just saying. The Linux
Foundation and the Agentic AI
Foundation. The Linux Foundation is a
nonprofit organization dedicated to
fostering the growth of sustainable open
source ecosystems through neutral
stewardship, community building, and
shared infrastructure. It has decades of
experience stewarding the most critical
and globally significant open source
projects including Linux, Kubernetes,
Node.js and PyTorch. Yeah, Node is
technically part of it too which is
nuts. Importantly, the Linux Foundation
has a proven track record in
facilitating open collaboration and
maintaining vendor neutrality. Yes,
absolutely yes. Node runs just as well
everywhere, which is kind of crazy if
you think about it. Like you all know
how much people complain about Next.js being easiest on Vercel. Node works
everywhere other than Cloudflare kind
of, but you get the point. Node is a
very much supported standard that works
equally well pretty much everywhere.
It's really cool that that's proven.
Same with Linux kernel. I would imagine
a significant portion, if not the
majority of the money that Microsoft
makes on Azure right now is going
through Linux systems. Agentic AI
Foundation is a directed fund under the
Linux Foundation co-founded by Anthropic, Block, and OpenAI, with support from all
the other companies I mentioned before.
The AIF aims to ensure agentic AI
evolves transparently, collaboratively,
and in the public interest through
strategic investment, community
building, and shared development of open
standards. The idea of agentic AI
evolving transparently by making MCP an
open standard is an interesting thing. I
would imagine that transparent evolution
of AI and agentic work would involve
more transparency around how the models
are being trained, which none of the
labs are willing to do. Some deeply
skeptical part of me wants to see this
as an attempt to look more transparent
when they are more than ever closing off
their research and work so people can't
copy their details. But I do really want
to read this in good faith cuz there's
no reason not to at this point at least.
I'll be keeping my eye on it. And now the donating-MCP section. Anthropic is donating the Model Context Protocol to the Linux Foundation's new Agentic AI Foundation, where it will join Goose by Block and AGENTS.md by OpenAI as founding projects. I didn't know AGENTS.md was OpenAI. That does make sense.
Bringing these and future projects under
the AIF will foster innovation across
the agentic AI ecosystem as well as
ensuring that these foundational
technologies remain neutral, open, and
communitydriven. The MCP's governance
model will remain unchanged. The
project's maintainers will continue to
prioritize community input and
transparent decision-making. There is a
lot of work on this. I've seen so many
issues that are hundreds upon hundreds
of replies deep as well as their shared
Discord server.
They are trying to make it better. I've
had a lot of people, like more so than
almost anything I've complained about,
I've had a lot of people who are part of
the MCP spec and standards committee
replying and reaching out trying to
figure things out, especially the auth things. It seems like the auth stuff's a total disaster. The future of MCP: open
source software is essential for
building a secure and innovative
ecosystem for agentic AI. Today's
donation to the Linux Foundation
demonstrates our commitment to ensuring
MCP remains a neutral open standard.
We're excited to continue contributing
to MCP and other agentic AI projects
through the AIF. Yeah, this is cool. If
they want to prove that it's an open
standard, they did it right. But it also kind of seems like Anthropic can only have one open source project at a time. And now that they have Bun as their big open source project, MCP has been moved away.
Openai's article about this foundation
has even more interesting stuff. I'm
going to read this a little too. Similar
blurb here. It is cool that they're
calling out that they're working with
Anthropic and Linux Foundation block all
of them. I've never seen OpenAI call out
Anthropic as a supporter and Anthropic
call out OpenAI in matching articles the
same day. Weird to see this much
alignment and genuine collaboration on
the standards here. Why open standards
matter? Developers are rapidly adopting
AI to build more capable agentic systems
from coding assistance to workflow
automation and customer service agents.
In 2025, these systems have begun to
move from prototypes into tools that
handle real work in business and
consumer settings. We believe the
transition from experimental agents to
real world systems will best work at
scale if there are open standards that
help make them interoperable, like a
standard for a markdown file. Open
standards make agents safer, easier to
build, and more portable across tools
and platforms. And they help prevent the
ecosystem from fragmenting as the new
category matures. As more agents begin
handling real responsibility, the cost
of fragmentation increases. Without
common conventions and neutral
governance, agent development risks
diverging into incompatible silos that
limit portability, safety, and progress.
If you were around for the early web days, with, like, the 15 different attempts to do dynamic web apps, you see why
jumping on this early is a thing they're
all focused on. Agents SDK isn't open
source, is it? What is the license? MIT.
Is it standard MIT? It is. Cool.
OpenAI's history with the agents SDK,
apps SDK, and Agentic commerce protocol
is cited here as proof that they really
want to contribute to these building
blocks and open standards and codec CLI
was a pretty baller move. When they
dropped that as an open- source project,
I was surprised. I did not see that
coming, especially after how strictly Claude Code has stayed closed source.
It's a baller move on their part. And
then they did the GPT-OSS models, which, believe it or not, I'm still using (the 120B one) quite a bit. It runs really well on my laptop, too. Is the apps SDK
open source as well? There's no way.
These are just examples. That's what I
thought. The apps SDK is not open
source. Also, to give Anthropic a little more grace here, the apps SDK and the
idea of apps in chat GPT was so hyped
when they announced it and I haven't
seen anyone use it for anything like not
a single thing. Apparently, the UI
portion is open source, but yeah, no
one's using it. I don't care. They've
been early adopters and core
contributors to MCP incorporating it as
a foundation for connectors and apps in
ChatGPT. Just last week, they announced a collab with Anthropic and MCP UI to
extend the apps SDK to all MCP
developers through MCP apps. Oh, great.
They're combining the two standards I
don't like.
And they donated agents MD.
Agents MD has been adopted by more than
60,000 open source projects and agent
frameworks including Amp, Codex, Cursor, Devin, Factory, Gemini CLI, GitHub Copilot, Jules, and VS Code
among others. Jules mentioned that's
Google's agentic coding tool that nobody
knows about. There's also a bunch of
other things they're not calling out
here. Like, I know that, uh, CodeRabbit and Greptile both use AGENTS.md very well, which is super helpful during
code review steps. It's cool that we
don't have the problem that we have with a JavaScript project, like, uh, you know, the classic config hell. Look at all these dotfiles: .env, .gitignore, .nvmrc, the oxlint config, .prettierignore, .prettierrc, components.json, knip.json, AGENTS.md, all these things. It is pretty cool that AGENTS.md is now just one file that works with everything and we're not going to have as bad of config hell. Like, there was a bit where you had to have the cursor rules file, AGENTS.md, Windsurf's equivalent, and, like, 15 other things. AGENTS.md has been standardized enough that, like, everyone uses it. That
said, different tools benefit from different types of instructions, which varies. Still, overall a very good thing,
especially if more and more of these
tools adjust to follow the agent MD
standard better. This seems cool. I
really like this type of collaboration
and alignment. I think it leads towards
better things happening. I really like
the idea of the Agentic AI Foundation
where if a thing becomes popular enough
that is meant to be open, the Linux
Foundation can now own it as part of the
AIF. This makes a ton of sense. I wish I
had more negative to say cuz it's fun,
but this is a good thing. This is the
best future for MCP. This is a really
good thing for all of these standards
and it makes the future of Agentic AI
tooling much, much safer and potentially
much better. Curious what y'all think
though. Am I way too excited about this
or is this just another weird MCP thing?
Anthropic is giving MCP to the Linux Foundation... Thank you Greptile for sponsoring! Check them out at: https://soydev.link/greptile

SOURCES
https://openai.com/index/agentic-ai-foundation/
https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation

Want to sponsor a video? Learn more here: https://soydev.link/sponsor-me
Check out my Twitch, Twitter, Discord more at https://t3.gg
S/O Ph4se0n3 for the awesome edit 🙏