Hi everyone, Christian from LangChain here. Today I want to show you something that might completely change the way you build real agents. Both OpenAI and Anthropic now ship native provider tools. Not just generic function calls, but capabilities their models were trained and tuned to use extremely well. Things like the Anthropic MCP toolset, tool search, even browser automation and memory tools. All built in, first class, and model optimized.
And with the newest LangChain provider packages, you don't have to handle JSON schemas or wire everything together manually anymore. You just import the tools you want from the provider package and bind them directly to the model. That's it. Type safe and fully integrated.
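As a rough sketch of what that looks like in code: the `tools` helper namespace and the helper names below (`toolSearchRegex_20251119`, `mcpToolset_20251120`) mirror the ones mentioned in this video's description and are assumptions, not a verified API, so double-check the exports in your installed @langchain/anthropic version.

```ts
// Sketch only: the `tools` namespace and helper names are assumed from the
// video description; check the Anthropic integration docs for the exact API.
import { ChatAnthropic, tools } from "@langchain/anthropic";

// Pick a Claude model; the model id here is illustrative.
const model = new ChatAnthropic({ model: "claude-sonnet-4-5" });

// Bind provider-native tools directly to the model. No hand-written JSON
// schemas: the provider package emits the tool definitions Anthropic expects.
const modelWithTools = model.bindTools([
  tools.toolSearchRegex_20251119({}),
  tools.mcpToolset_20251120({ mcpServerName: "cloudflare-docs" }),
]);
```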
To show what this unlocks, I built a small agent that connects Claude to all of Cloudflare's MCP servers. With that, Claude can dynamically discover the tools, load them on demand, and explore and operate on my behalf across the entire Cloudflare platform. It can fetch DNS logs, run queries, or answer any question I have about my account, all inside a single LangChain agent. And the crazy part is that this entire tool stack comes from a few imports from the provider package. So in this video I will walk you through how to build an agent like this from scratch, how to use native tools, and why provider-native tools are such a huge unlock for building real-world multimodal agentic applications. Let's dive in.
Now let's jump back into our LangChat example application. I've added a Cloudflare MCP agent scenario where I implemented a LangChain agent that connects to all of Cloudflare's MCP servers out there and gives this application access to the entire platform through a simple chat interface.
The implementation is fairly simple. The first thing you need to do is define all these Cloudflare MCP servers in a basic list. Cloudflare offers a bunch of useful MCP servers that you can connect to. One just gives you access to their docs. Others let you build Workers applications with storage, get insights, and manage your Cloudflare Workers. One that I found particularly interesting is the one that gives you access to Cloudflare's browsers to fetch websites and convert them to markdown or screenshots. Another one I will connect to is the GraphQL one, so I can access my account information. You probably noticed that I didn't actually connect to all of them. That is just because connecting to all of them takes the model a lot of time on the API side, so the model invocation gets slower the more MCP servers you connect to.
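In code, that server list is just plain data, something like the following sketch. The server names and URLs here are illustrative placeholders; Cloudflare documents the canonical endpoints for its managed MCP servers.

```ts
// Which Cloudflare MCP servers the agent is allowed to use.
// URLs are illustrative placeholders; look up the official endpoints
// in Cloudflare's MCP server documentation.
const CLOUDFLARE_MCP_SERVERS = [
  { name: "cloudflare-docs", url: "https://docs.mcp.cloudflare.com/sse" },
  { name: "cloudflare-browser", url: "https://browser.mcp.cloudflare.com/sse" },
  { name: "cloudflare-graphql", url: "https://graphql.mcp.cloudflare.com/sse" },
];
```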
In addition to the MCP servers, I define an MCP server middleware that is simply there so I can pass this MCP server information on to the model invocation through the model settings. Then I define my agent function, where I create my ChatAnthropic model instance. And then I'm using the first provider tool here: the MCP toolset tool from the Anthropic provider package. This tool lets me define the configuration for every MCP server I want to connect to. So I'm mapping over all of my MCP servers and calling that tool, passing along the server name, some default configuration, some cache control, as well as configuration for specific tools of that particular server.
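A minimal sketch of that mapping, continuing from the server list above. The helper name `mcpToolset_20251120` comes from this video's description, and the option names (mcpServerName, defaultConfig, configuration, cacheControl) are assumptions modeled on the walkthrough rather than a verified signature; see the docs link at the end for the exact API.

```ts
import { tools } from "@langchain/anthropic"; // assumed export, see docs link below

// One MCP toolset entry per Cloudflare server from the list above.
// Option names are assumptions modeled on Anthropic's MCP toolset parameters.
const mcpToolsets = CLOUDFLARE_MCP_SERVERS.map((server) =>
  tools.mcpToolset_20251120({
    mcpServerName: server.name,
    // Defer loading so this server's tools are only expanded into the
    // context window once the model actually asks for them.
    defaultConfig: { deferLoading: true },
    cacheControl: { type: "ephemeral" },
  })
);
```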
Another tool I define in this example is the tool search tool. It lets me defer loading all of the MCP server tools and instead have the model search, on the API side, for the most appropriate tool for any given scenario. This saves a lot of space in my context window and avoids overflowing it with tools that I never need or never use.
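Again as a hedged sketch, with the helper name taken from the description:

```ts
// Regex-based tool search: the model queries the deferred tool catalogue at
// runtime, and only the matching tool definitions get pulled into context.
const toolSearch = tools.toolSearchRegex_20251119({});

// The agent gets the search tool plus the deferred MCP toolsets.
const agentTools = [toolSearch, ...mcpToolsets];
```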
I then define an initial state, passing the message from the front end to the agent, and define my agent by passing along the model, the tools, and the MCP server middleware, where I pass along the server information together with the authorization token that I've put into my environment. And then I just stream the agent result back to the front end.
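Putting it together, here is a rough sketch of the agent wiring, continuing from the earlier sketches (CLOUDFLARE_MCP_SERVERS, agentTools). The `createAgent`/`createMiddleware` usage follows LangChain's 1.0 agent API, but the hook name (`wrapModelCall`), the shape of the injected `mcpServers` settings, and the `CLOUDFLARE_API_TOKEN` variable name are assumptions, not the exact code from the repo.

```ts
import { createAgent, createMiddleware } from "langchain";
import { ChatAnthropic } from "@langchain/anthropic";

// Middleware that injects the Cloudflare MCP server definitions, plus the
// API token from the environment, into every Anthropic model call.
// Hook name and settings shape are assumptions; consult the LangChain
// middleware docs for the exact API.
const mcpServerMiddleware = createMiddleware({
  name: "CloudflareMcpServers",
  wrapModelCall: (request, handler) =>
    handler({
      ...request,
      modelSettings: {
        ...request.modelSettings,
        mcpServers: CLOUDFLARE_MCP_SERVERS.map((server) => ({
          type: "url",
          name: server.name,
          url: server.url,
          authorizationToken: process.env.CLOUDFLARE_API_TOKEN,
        })),
      },
    }),
});

const agent = createAgent({
  model: new ChatAnthropic({ model: "claude-sonnet-4-5" }),
  tools: agentTools,
  middleware: [mcpServerMiddleware],
});

// Stream the agent's answer for a front-end message back to the client.
const stream = await agent.stream(
  { messages: [{ role: "user", content: "What zones are on my account?" }] },
  { streamMode: "messages" }
);
for await (const chunk of stream) {
  // Forward each chunk to the front end (e.g. over SSE or a websocket).
}
```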
Now I have two example prompts here I want to go through. One is just a simple prompt that asks for some details about my account, my account being the one the Cloudflare token connects to. You can then see that it searches for and finds the right tool to fetch the information about the account. It connects to the GraphQL MCP server to fetch the right details about my account.
Another interesting and neat scenario is the Cloudflare browsers that are exposed through the MCP server. You can now let your agent browse the web and analyze any web page out there to tell you what's on it in real time. So in this example, I want to go to Hacker News and find out what's currently the number one post on there. If I do that, it again searches for the right tool to do so. It connects to the browser MCP server, finds the tools needed to open up a browser, fetch the website, and pass the information down, and then it opens Hacker News and finds that the GPT 5.2 announcement post is currently number one with 539 points.
You can see that it's very easy now to connect to all of these MCP servers with a simple provider tool, which is much more efficient than potentially spinning up these MCP servers locally. It's much easier now to just use the MCP toolset and MCP tool search provider tools. All right, that's the demo. And honestly, this is just a glimpse of what's possible now with these native provider tools from companies like OpenAI and Anthropic. I'm very curious and excited to see what other types of tools these model providers will release in the future. With the new tools API from our provider packages, you don't have to handcraft schemas anymore. You can forget about gluing adapters together. You just bind the tools that the model was literally trained to use, and you get real agentic behavior immediately.
If you want to try it out yourself, all the code is linked below. And if you are building agents like me, multi-model, tool heavy, or production grade, this is the direction everyone is heading. Let me know in the comments below what tools or providers you want to see next and what you're building with it. I'd love to see what people do with these kinds of capabilities. All right, thank you for watching and see you in the next one.
In this video, Christian Bromann builds and demos an agent that uses LLM provider-native tools, specifically Anthropic Claude's built-in MCP toolsets, to connect to Cloudflare's managed MCP servers. Instead of manually wiring up a giant tool list, the agent uses:

- MCP toolsets (mcpToolset_20251120) to access Cloudflare MCP servers (Docs, Browser, GraphQL)
- Tool discovery via regex search (toolSearchRegex_20251119) so Claude can find the right tool when it needs it
- Deferred tool loading to keep things fast even when there are hundreds of tools available
- A simple middleware layer that injects the mcp_servers config into model calls (and optionally passes a Cloudflare API token)

I walk through the code, then show the behavior live, like using Cloudflare's Browser MCP server to fetch a page and summarize what's on it, and using Docs/GraphQL tooling when relevant. If you're curious about where "agents" are going beyond function calling, and how MCP changes the way we ship tool-enabled apps, this is a practical, end-to-end example.

🧑💻 Source code / repo: https://github.com/christian-bromann/langchat
📚 Docs: https://docs.langchain.com/oss/javascript/integrations/tools/anthropic#mcp-toolset