Tanstack made an AI SDK. Yep, that's
right. In Tanstack's seemingly
never-ending quest to improve the
JavaScript world, an AI SDK to rival
Vercel's is the next step. Now, it
is currently in alpha, so I wouldn't
recommend using this in production, and
things may change, but I like the goal
that they've stated of an honest
open-source set of libraries across
multiple languages that works with your
existing stack instead of replacing it.
It's also super impressive for an alpha
release. It already has server support
across multiple languages. So far,
that's JavaScript, PHP, and Python. It
has client libraries for React, Solid,
and VanillaJS, with more like Svelte
coming soon. And it's got providers for
OpenAI, Anthropic, Gemini, and Ollama.
So, let me take you through some of its
features that I use to build out a basic
chatbot with web search.
We'll start out nice and simple. This is
all of the code that you need on the
server to start streaming text. You can
see we're using the chat function from
the Tanstack AI package. This is the
core library. Then, we also have the
OpenAI adapter, so we can use OpenAI's
models. You see in chat, you just pass
it through some options. We pass through
the adapter we want to use. We pass
through the messages that we're getting
from our post request, the model that
we're going to use, and of course, since
we're using the OpenAI adapter, this is
now going to be type safe, so it knows
of all of the models that we can use via
OpenAI. Then you also pass it
the conversation ID that you get from
the post request. Once you get that, you
simply get a stream of chunks. And we
can consume that and turn it into
an HTTP response by using the
toStreamResponse function that we also
get from Tanstack AI. So, a lot of this is going
to look very familiar if you've used the
Vercel AI SDK. I guess there aren't too
many ways to reinvent the wheel for the
absolute basics here.
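To make that concrete, here's a rough sketch of that server handler, reconstructed from what's on screen. Treat the import paths, the adapter factory name, and the exact option names as assumptions that may change while the library is in alpha.

```ts
// Sketch only: package names, the adapter factory, and option names are
// assumptions based on the alpha and may differ from the real API.
// Shown as a generic web-standard request handler.
import { chat, toStreamResponse } from "@tanstack/ai";
import { createOpenAI } from "@tanstack/ai-openai";

export async function POST(request: Request) {
  // The client posts the conversation so far plus an ID for the conversation.
  const { messages, conversationId } = await request.json();

  // chat() gives us back a stream of chunks for the assistant's reply.
  const stream = chat({
    adapter: createOpenAI({ apiKey: process.env.OPENAI_API_KEY! }),
    model: "gpt-5", // typed: only models the OpenAI adapter knows about
    messages,
    conversationId,
  });

  // toStreamResponse() turns that chunk stream into a streaming HTTP response.
  return toStreamResponse(stream);
}
```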
Where I'm expecting Tanstack to excel is
in that type safety and just the overall
developer experience. They have a very
good track record with that kind of
thing. A good example of this for me is
in the provider options. Not only does
Tanstack know of all of the available
models like we saw earlier, but also
knows the provider options to go along
with those models. You can see here if I
want reasoning on these models, it's
going to know of all of the options that
we can choose for this. So I'll choose
medium. And it's also going to know if
that's applicable for the model. So if I
change the model to something like GPT-4,
you can see we immediately get a type
error as that doesn't have the same
reasoning option. So I'll change that
back to GPT-5, and everything works. This
is actually something I struggled with
in the Vercel AI SDK. Sometimes it was
unclear what provider options each model
would take, and there were ways to cast
the options to OpenAI's package, but
overall I just found the experience
worse than what we have built in here.
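Continuing the server sketch from earlier, the typed provider options look roughly like this. Again, the providerOptions and reasoning shapes are my best guess from what's shown on screen, not confirmed API.

```ts
// Continuing the earlier server sketch. The providerOptions/reasoning shape
// is an assumption from what's shown and may differ in the alpha.
const stream = chat({
  adapter: createOpenAI({ apiKey: process.env.OPENAI_API_KEY! }),
  model: "gpt-5",
  messages,
  conversationId,
  providerOptions: {
    // Because the adapter knows gpt-5 supports reasoning, the effort values
    // ("low" | "medium" | "high") autocomplete here.
    reasoning: { effort: "medium" },
  },
});

// Swap the model for one without that option, e.g. model: "gpt-4",
// and the same reasoning block becomes a type error.
```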
What I am curious about though is if
this will ever be compatible with a
service like OpenRouter. There's not an
adapter yet, but I would love to see
one. I am curious, though, if this typed
approach would work, as you'd need to know
the provider options for every single
model that's being added to OpenRouter,
and they add loads of them all the time.
So, it's going to be a lot of types.
Regardless of that though, we are now
ready to consume this on our client. For
that, we just need the relevant client
package. You can see in my case, I'm
using React, but there's also one for
Solid and a VanillaJS framework agnostic
one that you can use as well if you're
using anything else. Once we've done
that, all we need to do is use the use
chat hook. Yes, very similar to the Vercel
AI SDK's one. And then here we have our
connection, where we can use the fetch
server-sent events helper. As you can
see, this is simply going to consume the
API endpoint that we set up earlier and
it's going to handle that response for
us. With that, the hook simply returns
all of the messages, a send message
function, and its loading state.
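Here's roughly what that client wiring looks like in code. As before, the React package name and the exact helper names (the use chat hook, the fetch server-sent events connection) are taken from what's on screen and may shift during the alpha.

```tsx
// Sketch only: the React package name and helper names are assumptions
// based on the alpha and may differ.
import { useChat, fetchServerSentEvents } from "@tanstack/react-ai";

export function useChatbot() {
  const { messages, sendMessage, isLoading } = useChat({
    // Points at the streaming endpoint we set up on the server earlier.
    connection: fetchServerSentEvents("/api/chat"),
  });

  return { messages, sendMessage, isLoading };
}
```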
Now, we have everything that we need to set
up a simple chatbot. To do that in React
here, I simply set up a handle submit
for the form that we send. So, this is
simply going to send the message that we
have in the input and then just set the
input to blank. And then you can see
down here we map over the messages that
we get back. Each message here is going
to have a role that can be assistant or
it can be the user. And again, this is
all going to be nice and type safe, so
it's going to flag it if we type the
role wrong. And if we change that
back, you can see down here we can map
over the parts of each message as well.
So we have thinking parts. We also have
text parts. That's just going to be the
normal message. Then there's also tool
call and tool results for when we add in
some tooling. So most of this is just
simple React code mapping over that
array, and I'm not going to focus on it
too much as the rest is mostly styling.
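As a sketch of what that React side looks like (the message fields and part type names here are assumptions from what's shown on screen, not confirmed API):

```tsx
// Sketch of the chat UI described above. Message fields and part type
// names ("thinking", "text", ...) are assumptions and may differ.
import { useState, type FormEvent } from "react";
import { useChat, fetchServerSentEvents } from "@tanstack/react-ai";

export function Chat() {
  const [input, setInput] = useState("");
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });

  function handleSubmit(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    sendMessage(input); // send whatever is in the input...
    setInput("");       // ...then clear it
  }

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((message) => (
        <div key={message.id}>
          {/* role is typed, so comparing against a misspelled role is a compile error */}
          <strong>{message.role}</strong>
          {message.parts.map((part, i) => {
            if (part.type === "thinking") return <em key={i}>{part.content}</em>;
            if (part.type === "text") return <p key={i}>{part.content}</p>;
            return null; // tool-call / tool-result parts come later
          })}
        </div>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button disabled={isLoading}>Send</button>
    </form>
  );
}
```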
So this is what we have so far by using
the use chat hook on the client side and
simply the chat function on the
server side. I can now ask GPT-5
something like who is the current F1
champion. And here we can see it going
through its thinking and then quickly it
gives us the answer that Max Verstappen is
the current F1 Drivers' Champion. Except
that is wrong. That is last year's
information. So now I want to connect
this to web search so we can get
up-to-date live info. To do that the
first thing we need is a tool
definition. This lets you define the
tool schema and also give it a
description of what this tool is going
to do. So here you can see I simply have
an input schema that has our query of
what the AI wants to search and then
also the maximum number of results that
we'll get back and then a description of
what this tool is doing, so the AI knows
when it should pick this tool.
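In code, that definition looks something like this. The tool definition helper name, its option shape, and the use of Zod for the schema are all assumptions based on what's on screen; the alpha may expect a different schema format.

```ts
// Sketch only: the toolDefinition helper, its option shape, and the use of
// zod here are assumptions and may differ in the alpha.
import { toolDefinition } from "@tanstack/ai";
import { z } from "zod";

export const searchInternetDefinition = toolDefinition({
  name: "search_internet",
  description:
    "Search the web for up-to-date information. Use this whenever the " +
    "answer depends on recent or live events.",
  inputSchema: z.object({
    query: z.string().describe("What the AI wants to search for"),
    maxResults: z.number().int().min(1).max(10).describe("Maximum number of results"),
  }),
});
```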
Once we have our definition, which came
from the tool definition function of
Tanstack AI, it can now be used either
client side or server side as it is isomorphic.
So it depends on how you want to
implement this: do you want this to run
on the end user's browser or on the
server. In my case, since this is a
service which I'll be using with an API
key, I'm going to be doing it all on the
server. But imagine if you wanted to do
a UI update or send a notification, you
could set up a client tool as well. All
you need to do to set up what happens
when the tool is actually executed is
take your definition, say either server
or client depending on where it's going
to run, and then provide the function
that will run when the tool is executed.
You see here we have the
parameters which are coming from the
input schema. So they've been validated
for us. And in my case, I simply send
off a fetch request to an API called
Tavily, which is going to handle the
web search for us. Then we can simply
return that as a JSON response.
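Here's a sketch of that server-side implementation. The .server() call and its execute signature are assumptions from what's described, and the Tavily request body is simplified (check Tavily's docs for the real payload).

```ts
// Sketch only: the .server() method and its execute signature are
// assumptions; the Tavily request body is simplified.
export const searchInternet = searchInternetDefinition.server({
  execute: async ({ query, maxResults }) => {
    // query and maxResults have already been validated against the input schema.
    const res = await fetch("https://api.tavily.com/search", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.TAVILY_API_KEY}`,
      },
      body: JSON.stringify({ query, max_results: maxResults }),
    });

    // Hand the raw search results back to the model as JSON.
    return await res.json();
  },
});
```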
Then all we need to do to let the model
use our tool is provide it in the chat
function underneath the tools array.
Here you can see I've added in the
search internet one. Then on the front
end, all I've done is add the tools
array onto the use chat hook as well.
But this time I'm using the definition
as it's not going to be running on the
client. We simply just want to know the
definition so that we have type safety.
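Something like this on the client; whether the hook option is actually called tools and accepts a bare definition like this is an assumption on my part.

```tsx
// Sketch only: the `tools` option on the hook and passing a bare definition
// for type-only registration are assumptions from the alpha.
const { messages, sendMessage, isLoading } = useChat({
  connection: fetchServerSentEvents("/api/chat"),
  // Just the definition, no execute function: the tool runs on the server,
  // the client only needs it for type-safe rendering.
  tools: [searchInternetDefinition],
});
```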
You see down here where we have the map
of our messages, we can now say if the
part type is a tool call, then we can use
part.name to get the tool itself and
check if it's the search internet tool.
This is good if you have multiple tools,
and again you can see this is type safe,
so it knows if we're typing in the wrong
tool name, and it knows all of them.
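Inside the parts loop from the earlier component, that narrowing looks roughly like this. The part type strings, the name and output fields, and the Sources component are all assumptions for illustration.

```tsx
// Sketch only: part type strings and fields are assumptions; <Sources /> is
// a hypothetical component for the sources section mentioned below.
{message.parts.map((part, i) => {
  if (part.type === "tool-call" && part.name === "search_internet") {
    // part.name is typed, so a typo in the tool name is a compile error.
    return <p key={i}>Searching the web...</p>;
  }
  if (part.type === "tool-result" && part.name === "search_internet") {
    return <Sources key={i} results={part.output} />;
  }
  if (part.type === "text") return <p key={i}>{part.content}</p>;
  return null;
})}
```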
With that simple tool set up, if I now
ask it the same question of who the
current F1 champion is, we can see it
goes through its
normal thinking process. Now it's
searching the web using my tool. And
finally, now that it's searched the
internet, it correctly returns that Lando
Norris is the current F1 champion. And
you can see I've also set up a sources
section here. So, that was a flyby of my
simple experiments with Tanstack AI, and
I've only touched the surface of what
Tanstack AI already has in its alpha.
There are more things like client-side
tools, hybrid tools, approval flows for
human-in-the-loop calls, and agentic
cycle management. And there's just much,
much more. Like, apparently, they're
working on headless chatbot UI
components, similar to how Vercel has their
AI elements. One thing I must say though
is in my experiments, I haven't seen too
much that's done differently from Vercel's
AI SDK. So, I am excited to see what
gets added in the future that might
differentiate it. But regardless of
that, competition is always helpful to
push both of the packages to improve.
The only hurdle that I can see Tanstack
might have to get over, and one that I
ran into myself, is the fact that the
Vercel AI SDK is over two years old. So
I found that Cursor tab and all of the AI
models were constantly trying to write
Vercel AI SDK code when using similar
functions like the use chat hook. I am
curious though how you found tools like the Vercel AI
SDK. Personally, I use it all the time
and haven't run into too many issues. So
let me know your thoughts in the
comments down below. While you're there,
subscribe. And as always, see you in the
next one.
A quick walkthrough of TanStack AI, building a simple, type-safe chatbot with streaming, tools, and web search. Early days, but already a solid DX-focused alternative to Vercel AI SDK.

Relevant Links
TanStack AI: https://tanstack.com/ai/latest/docs/getting-started/overview

More about us
Radically better observability stack: https://betterstack.com/
Written tutorials: https://betterstack.com/community/
Example projects: https://github.com/BetterStackHQ

Socials
Twitter: https://twitter.com/betterstackhq
Instagram: https://www.instagram.com/betterstackhq/
TikTok: https://www.tiktok.com/@betterstack
LinkedIn: https://www.linkedin.com/company/betterstack

Chapters:
0:00 Intro
0:45 Server Side
2:37 Client Side
3:46 Chatbot Demo
4:07 Web Search Tool
5:59 Thoughts