If you're still using custom GPTs like it's 2023, or something like TypingMind or Poe, you're going to want to check this out. n8n just released something that replaces all of them and adds one more important feature: the ability to execute your workflows and your agents all in one place. So not only can you chat with things like Claude, Gemini, and OpenAI in the same space, but you can also invoke all of the workflows that you use day-to-day within the same chat window. So in this video, I'm going to walk you through what it is, how it works, what you should look out for, and finally, why you should care. Let's dive in. So, when
you update to the latest and greatest
version of n8n, at the bottom left-hand side, you should see
this chat icon. When you click on this
chat icon, it should bring you to
something akin to this hub where you can
pick from different language models. And
when I say different ones, you could use things from OpenRouter, like open-source models. You could use Ollama. You can also use AWS Bedrock, on top of the normal, well-loved models like OpenAI,
Anthropic, and Google. So, if you pick
something like OpenAI's GPT-4o, this will
bring you to a brand new chat where at
the left-hand side you have tools, and right now the tools you can invoke include web search. You can toggle these on, and I'd imagine that n8n will be adding more and more tools over time. And then at the right-hand side, you have voice control, very similar to text-to-speech. And you can
attach things like images. So, if you
have something to explain, you can
upload something more multimodal and
have the LLM break that down. Now, if
this new feature was just this, it
wouldn't be worth making a video about
it. But where they add more depth is
this custom agents tab where you can
enable different workflows to pop up
here so that you can invoke them on a
whim. So, if I open a new chat, I can
now select a certain agent. So, I go to
let's say my web scraper agent and I can
say something like, can you go and
scrape the entire following website? And
then I will just put my website right
here. www.promptadvisors.com.
And then this will invoke a workflow
that's using Firecrawl as a scraper. It's using the Firecrawl MCP behind the scenes to invoke the different scrape functions and come back with a response.
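To make that concrete, here's a minimal sketch, in TypeScript, of the kind of scrape call the Firecrawl tooling makes under the hood. It assumes Firecrawl's public v1 scrape endpoint; the env var name is a placeholder, and this is not the literal code the MCP server runs.

```typescript
// A minimal sketch, assuming Firecrawl's public v1 scrape endpoint; this is the
// kind of call the Firecrawl tooling makes, not the literal MCP server code.
const FIRECRAWL_API_KEY = process.env.FIRECRAWL_API_KEY ?? ""; // placeholder env var name

async function scrapeSite(url: string) {
  const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${FIRECRAWL_API_KEY}`,
    },
    // Ask for markdown so the agent gets clean text back instead of raw HTML.
    body: JSON.stringify({ url, formats: ["markdown"] }),
  });
  if (!res.ok) throw new Error(`Firecrawl scrape failed: ${res.status}`);
  return res.json();
}

// Roughly what the web scraper agent does with the URL you type into the chat:
scrapeSite("https://www.promptadvisors.com").then((data) => console.log(data));
```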
Now behind the scenes, this response is
coming from a workflow. And if you don't
believe me, if we go back to n8n, we
go to my web scraper agent right here
and we go into executions, you'll see
this just ran right now and this invoked
all the fields that you saw on the left-hand side. And you can see these are all
the responses it came back with. So it
was able to invoke this specific
workflow. Now, was it magic? No. There
is one thing you need to do in order to
make your workflows eligible to appear
in the chat. So all you have to do to
enable this in the chat hub is double-click on this chat trigger, and you'll see you have this toggle here that says 'Make available in any chat.' You want to make sure this is set to on, and then
once you do that you'll be invited to
enter a name for the agent and a
description for the agent. And one last
thing you have to do is make sure that the workflow is published, and then you should be good to go.
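For reference, here's a rough sketch, written as a TypeScript object, of how that chat trigger might look in an exported workflow. The node type string matches n8n's chat trigger, but the option names for the toggle, agent name, and description are assumptions for illustration; check your own export rather than copying this verbatim.

```typescript
// A rough sketch of a chat trigger node from an exported n8n workflow, written
// as a TypeScript object. The option names below (availableInChat, agentName,
// agentDescription) are assumptions for illustration only.
const chatTriggerNode = {
  name: "When chat message received",
  type: "@n8n/n8n-nodes-langchain.chatTrigger",
  typeVersion: 1,
  parameters: {
    options: {
      availableInChat: true, // hypothetical name for the "Make available in any chat" toggle
      agentName: "Web Scraper Agent", // name shown in the chat hub's custom agents list
      agentDescription: "Scrapes a website with Firecrawl and summarizes the result",
    },
  },
};

// Reminder from above: the workflow also has to be published (active), or the
// agent will not show up in the chat hub at all.
console.log(chatTriggerNode);
```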
But there is one thing to look out for. So right now, I'm on the self-hosted version of n8n, and when I
clicked on this chat trigger initially I
couldn't see this toggle. So, in case
you have the exact same thing and you're
importing an old workflow to make it
work with the new chat hub, all you'd
have to do is just X this out and then
add on a brand new chat trigger. Once
you do that, you should be able to see
this brand new toggle right here. And
then you want to double check whether or
not your workflow invokes in general.
Once all that's done and the workflow
runs fine, you can go back to the chat
and double check whether or not you can
see them here. If you do, then you're
all good to go and you can invoke these
within the chat hub. Now, if for
whatever reason you still can't see it,
what I would try doing in terms of
troubleshooting is rebuild the workflow
node by node into a brand new workflow
and try and see if it works better with
that version. Hopefully, you shouldn't
have to run into that at all and it just
works. Now, what if you want to invoke a
workflow that doesn't have a chat
trigger as a part of the initial build?
So something like this where I have a
lead qualification agent that goes
through and you have a webhook as the
primary source of truth. Well, all you
have to do is just add on a chat trigger
onto the workflow as an additional input
layer. So you could just put this here,
plug it into the same input, and this
will make it eligible to be invoked
within the chat hub as well. All you
have to do to make sure this works is
double-check that your workflow can be invoked by a natural-language command, or by whatever input you can actually dump into the chat itself.
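As a sketch of what that means in practice, here are the two entry points side by side for a hypothetical lead-qualification workflow. The instance URL, webhook path, and payload fields are placeholders, not the actual workflow from the video.

```typescript
// A sketch of the two entry points feeding the same lead-qualification flow.
// The instance URL, webhook path, and payload fields are placeholders.

// 1) Existing entry point: other systems POST structured lead data to the webhook.
async function postLeadToWebhook() {
  await fetch("https://your-n8n-instance.example.com/webhook/lead-qualification", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: "Jane Doe",
      email: "jane@example.com",
      message: "Interested in your automation services",
    }),
  });
}
postLeadToWebhook().catch(console.error);

// 2) New entry point: the chat hub sends whatever you type as plain text, so the
// workflow (or a small mapping step right after the chat trigger) needs to turn
// a message like this into the same fields the webhook payload carries.
const chatMessage =
  "Qualify this lead: Jane Doe, jane@example.com, interested in automation services";
```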
So to give you an example, if we go back
to here, I'll leave without saving. If
we go to my meeting notes processor
agent, very straightforward, it has an
instruction to just go through a series
of meeting notes and pull out all the
action items. So in this case, if we go
back into the chat hub and we select
that agent right there, the meeting
notes processor, I will just dump in a
sample transcript and I'll show you how
this would work. Now since I technically
have all the LLMs in one place, I don't even need to leave; I can create the transcript right here. So I can say something like: can
you create a completely fake back and
forth transcript that someone could have
had on Zoom? And I'll send this over.
We'll take the transcript, copy it as
is. Okay, and we'll paste it into our
new chat. So, I'll just take this. We'll
go back to custom agents. I'll go to my
meeting notes processor. I will dump
this in and then let me take out the
little instruction here. And then we'll
send this over. It should be able to
invoke via chat trigger since that
workflow is expecting a raw text input
as the primary input to be processed.
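For illustration, here's a rough TypeScript picture of the item a chat trigger hands to the rest of the workflow. The field names (sessionId, chatInput) are assumptions based on n8n's chat trigger output, so confirm them against a real execution before building on them.

```typescript
// An illustrative guess at the shape of the item a chat trigger passes
// downstream; the field names are assumptions, so confirm them in a real execution.
interface ChatTriggerItem {
  sessionId: string; // identifies the chat session in the hub
  chatInput: string; // the raw text you typed, e.g. the pasted transcript
}

// The meeting notes processor only needs that raw text, so downstream nodes
// would read something like item.chatInput and treat it as the transcript.
const item: ChatTriggerItem = {
  sessionId: "demo-session-001",
  chatInput:
    "Alice: Let's finalize the Q3 report by Friday. Bob: I'll draft it and send it to Carol for review...",
};
console.log(item.chatInput);
```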
And within 10 seconds, we have the
meeting processor agent go take the
input, break it down into this action
items table, create a follow-up email
with those action items to the invitees,
and we're good to go. And if you want to
see the execution ID to be able to
correlate it back to your workflow,
especially if it's running on a
scheduled basis, and you want to be able
to differentiate between the chat-hub-triggered workflows and the normally triggered workflows, you'd be able to trace it down through this ID. And very similar to ChatGPT, you have the ability
to read this aloud, to edit your
original prompt, and to regenerate it.
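Going back to that execution ID: if you want to pull it up programmatically, here's a hedged sketch using n8n's public REST API. The base URL and API key are placeholders, and you should verify the executions endpoint against the API docs for your n8n version.

```typescript
// A hedged sketch using n8n's public REST API to look up a single execution by
// the ID shown in the chat. Base URL and API key are placeholders; verify the
// /api/v1/executions endpoint against the API docs for your n8n version.
const N8N_BASE_URL = "https://your-n8n-instance.example.com";
const N8N_API_KEY = process.env.N8N_API_KEY ?? ""; // created in your n8n settings

async function getExecution(executionId: string) {
  const res = await fetch(`${N8N_BASE_URL}/api/v1/executions/${executionId}`, {
    headers: { "X-N8N-API-KEY": N8N_API_KEY },
  });
  if (!res.ok) throw new Error(`Failed to fetch execution ${executionId}: ${res.status}`);
  // The response tells you which workflow ran and how it finished, which is
  // enough to tell chat-hub runs apart from scheduled runs.
  return res.json();
}

getExecution("12345").then((execution) => console.log(execution)).catch(console.error);
```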
Now that we know what this is and how it
works, I'll quickly walk you through why
you should care. Now, while custom GPTs are still popular to this day, they are outdated and frozen in time, in the sense that no new features and no real big advancements have happened in the last couple of years. And if you actually wanted a custom GPT to do something, meaning to execute an n8n workflow, you'd have to come up with something like an OpenAPI schema that you would put in the custom tooling, and then you'd set up all the endpoints. You'd create the webhooks, you'd send a test request, make sure it works, and then create a prompt within the custom GPT that would know where and when to invoke that specific workflow and how to deal with the payload.
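For anyone who never went through that, here's a minimal sketch of the kind of OpenAPI schema you'd hand-write for a Custom GPT action so it could hit an n8n webhook, written as a TypeScript object for consistency with the other examples. The server URL, path, and fields are placeholders, not a schema from a real project.

```typescript
// A minimal sketch of the kind of OpenAPI schema you used to hand-write for a
// Custom GPT action so it could call an n8n webhook. URL, path, and fields are
// placeholders, not a schema from a real project.
const customGptActionSchema = {
  openapi: "3.1.0",
  info: { title: "Run n8n workflow", version: "1.0.0" },
  servers: [{ url: "https://your-n8n-instance.example.com" }],
  paths: {
    "/webhook/lead-qualification": {
      post: {
        operationId: "runLeadQualification", // the name the GPT uses to pick this action
        summary: "Trigger the lead qualification workflow",
        requestBody: {
          required: true,
          content: {
            "application/json": {
              schema: {
                type: "object",
                properties: {
                  name: { type: "string" },
                  email: { type: "string" },
                  message: { type: "string" },
                },
              },
            },
          },
        },
        responses: { "200": { description: "Workflow result returned to the GPT" } },
      },
    },
  },
};
console.log(JSON.stringify(customGptActionSchema, null, 2));
```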
And having built hundreds of custom GPTs for different clients back when it was cool, I can tell you that you would always just hope it would work. And sometimes, depending on the model and depending on the version of the custom GPT, it wouldn't invoke the same function in the same way. So the TL;DR is that n8n created this to essentially centralize
all the different actions you would do
in one place. If you want to be able to
just go back and forth with an LLM of
your choice, you can do that in the chat
hub. If you want to be able to invoke a
workflow and integrate that into a back
and forth chat, you could do that in the
chat hub. So, the centralization of
speaking to different LLMs and invoking
your automations in the same space is
the aim of this feature. And while this
is version one, I'm sure that as they
get more user feedback, they'll keep
improving it from here. And that's
pretty much all you need to know. So, if
you found this video helpful, please let
me know down in the comments below. It helps the video, it helps the channel, and I'll see you in the next one.
Join My Community to Level Up: https://www.skool.com/earlyaidopters/about
Book a Meeting with Our Team: https://bit.ly/3Ml5AKW
Visit Our Website: https://bit.ly/4cD9jhG

Core Video Description

If you're still using Custom GPTs or LLM aggregators like TypingMind or Poe, this changes everything. n8n just released a Chat Hub that replaces all of them, and adds one killer feature: the ability to execute your workflows and AI agents directly from chat. In this 8-minute walkthrough, I show you exactly how the Chat Hub works, how to make any workflow available as a Custom Agent with one toggle, and why this matters if you've ever struggled with OpenAPI schemas and webhook configs just to make a Custom GPT actually DO something. No more hoping it works. No more version conflicts. Just chat → action. I'll also cover the gotchas for self-hosted users and troubleshooting tips if your workflows don't appear in the hub.

TIMESTAMPS:
00:00 - Intro: Why n8n Chat Hub replaces Custom GPTs, TypingMind & Poe
00:30 - Accessing the Chat Hub: Where to find it in n8n
00:50 - Model Selection: OpenAI, Anthropic, Google, OpenRouter & more
01:15 - Built-in Tools: Web search, voice control, image uploads
01:35 - Custom Agents: The game-changing feature
02:00 - Demo: Web Scraper Agent with Firecrawl MCP
02:45 - Behind the Scenes: How workflows execute via chat
03:10 - Enabling Custom Agents: The one toggle you need
03:40 - Gotcha for Self-Hosted Users: Missing toggle fix
04:15 - Adding Chat Triggers to Existing Workflows
04:50 - Demo: Meeting Notes Processor Agent
05:40 - Creating Transcripts in the Same Chat Hub
06:20 - Why You Should Care: Custom GPTs are frozen in time
06:50 - The Old Way: OpenAPI schemas, webhooks, hoping it works
07:20 - The New Way: Centralized LLMs + workflow execution
07:46 - Outro

#n8n #n8nChatHub #CustomAgents #AIAutomation #CustomGPT #Anthropic #OpenAI #Claude #Gemini #WorkflowAutomation #NoCode #AITools #Firecrawl #MCP #LLM