Most people think that building a workflow platform is about the visual editor: drag, drop, and connect some nodes. But that's the easy part. The hard part is making those workflows actually execute. In this tutorial we're building Nodebase, a workflow automation platform like n8n or Zapier.
In part one, we built the editor, the canvas, authentication, and payments. In part two, we're building everything else: the execution engine, the triggers, the integrations, credential encryption, execution history, and deployment. This is where we finish and ship the entire platform. You will learn how to pass variables between nodes, build a templating system for data transformation, and implement real-time updates so you can see workflows executing live. We're building trigger nodes, Google Forms, Stripe webhooks, the full AI integration layer with OpenAI, Anthropic, and Gemini, Discord and Slack nodes, encrypted credential management, execution history with full error tracking, additional auth providers like Google and GitHub, and finally deployment. By the end of this tutorial, Nodebase will be a production-ready product. And now, without further ado, let's finish this project. Before we dive in, using the link on the screen you can get 3 months of Sentry Team completely for free. We'll be using their AI monitoring to track all our LLM calls throughout this build. If that sounds useful for your project, feel free to grab the deal. And now, let's build. In this chapter, we're going to
focus on executing our nodes. In the
previous few chapters, we focused on
building the UI for each node and for
the editor itself, but we never actually
executed or used any data from those
nodes. So that's what we are going to be
focusing on today. Let's go ahead and
first improve the props for our HTTP
request node so it's easier to work with
its data. So inside of features/executions/components/http-request, open both dialog.tsx and node.tsx, and let's see the problem.
So the problem is that we are passing
these default form values in three
separate props instead of just one. So
let's go ahead and fix this. The first thing I want to do is modify HttpRequestNodeData and remove this last part. We actually never even used it; I just added it here for flexibility.
Once we remove this, our HTTP request
node data matches exactly what our form
provides. An input for the endpoint, a
select drop-down for the method, and an
optional body. Now that we've fixed that, let's also fix something in handleSubmit. There's no point in handling these values one by one; we can just spread values. It's much simpler.
Now let's rename the form type that we are exporting from here. Go inside of the dialog, find the form type, and rename it to HttpRequestFormValues. Once we do that, go back inside of node.tsx, import it, and make sure you use it for the values.
Now let's modify the props for the dialog component. Remove these three props and instead add an optional defaultValues of type Partial&lt;HttpRequestFormValues&gt;. Make sure we use them here as well, and modify the default values accordingly. Let's give defaultValues an empty object as the default; this way we don't have to use optional chaining everywhere. Basically, we're just providing a fallback for a better user experience. We have to do exactly the same for the form reset, so let's do it there as well. Perfect. And for the dependency array, we can now just use defaultValues.
Once we've done that, we can go back inside of node.tsx and simplify this a lot: we can now just pass defaultValues={nodeData}. As simple as that.
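The prop consolidation can be sketched roughly like this. The type and function names are assumptions based on the transcript (the real dialog uses react-hook-form, which is omitted here to keep the sketch self-contained):

```typescript
// Hypothetical shapes standing in for the real ones in
// features/executions/components/http-request/dialog.tsx.
type HttpRequestFormValues = {
  endpoint: string;
  method: "GET" | "POST" | "PUT" | "DELETE";
  body?: string;
};

// Instead of three separate props, the dialog now takes one optional prop:
type HttpRequestDialogProps = {
  defaultValues?: Partial<HttpRequestFormValues>;
};

// Merging the partial over base defaults gives the form its initial
// values without optional chaining at every call site.
function resolveFormValues(
  defaultValues: Partial<HttpRequestFormValues> = {},
): HttpRequestFormValues {
  return {
    endpoint: "",
    method: "GET",
    ...defaultValues,
  };
}
```

The same merged object is reused for the form reset, so both paths stay consistent.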
That's the first task finished, so let's mark it as done. There we go. The second thing we have to do is display the execute button. If you take a look at any of your workflows, there is currently no way of executing them. For example, I have this super simple manual trigger and then an HTTP request — GET, POST, whatever, it doesn't matter. But how do I even execute this? Even if I save it, nothing really happens. So the first thing we have to do is show the execute button, but we should only show it if we have a manual trigger.
Luckily for us, we can do that quite easily. Go inside of src/features/editor/components, and in here I want to add a new component called execute-workflow-button.tsx.
I'm going to import Button from components/ui/button and the flask icon from lucide-react. Then I will simply export const ExecuteWorkflowButton. I'm going to give it very simple props: a workflowId, with a capital I. Let's extract it, and then simply render the button with the text "Execute workflow" and the icon we imported above. Let's give the icon a className of size-4, give the button a size of "lg", give it an onClick of an empty arrow function, and set disabled explicitly to false so we remember to change it later to something dynamic.
Now that we have this, we can go inside of editor.tsx. In order to render this dynamically, we first have to create a constant hasManualTrigger. Let's use useMemo here so it isn't recomputed on every render — make sure to import it from React — and fill the dependency array with nodes, because that's what we're going to be using. Then return nodes.some(node => node.type === NodeType.MANUAL_TRIGGER), using the NodeType enum from our Prisma schema, which we can import from generated Prisma. So just make sure you've imported NodeType; let me just show you, from generated Prisma. And now that you have hasManualTrigger, you can go ahead and duplicate this panel
and use the hasManualTrigger boolean to render our new ExecuteWorkflowButton in a position of "bottom-center", passing in the workflowId prop. You should have the workflow ID here, so this should work. Let's fix this by using "center". There we go. And now you will see that every time I have a manual trigger, I also have the execute workflow button; if I delete the trigger, the button disappears as well. Since I didn't click the save button, even if I refresh, the trigger is still here, and the execute workflow button is here.
Perfect. So that's another thing that we
can check off. Let's go ahead and just
do that. And now we actually have to create the execute Inngest function, because right now clicking on execute workflow does not do anything. Let's start by defining the background job. We're going to revisit src/inngest/functions — my apologies, not features. Let's remove all of this here, because we're no longer going to need it: all of the generate-AI and Sentry code, and every single thing within the actual step, even the return. Now let's rename this from execute to executeWorkflow, change the ID to "execute-workflow", and for the event let's follow the structure "workflows/execute.workflow". For now, this doesn't have to do anything; we can just do await step.sleep, call it "test", and sleep for 5 seconds. Perfect. Now that we have executeWorkflow, we also have to revisit app/api/inngest/route.ts. Import executeWorkflow here, copy it, and simply paste it into the functions array. That should resolve the error that just appeared.
Now let's revisit our tRPC procedures, more specifically the procedures for the workflows. So inside of the workflows feature, in server/routers.ts, go all the way to the top and create execute. It's going to be a protectedProcedure with a simple input that receives an id of type string, and it's going to be a mutation. The mutation itself will have an asynchronous function, so let's prepare it like that, along with the input and the context. And what we do here is super simple: first fetch the workflow using await prisma.workflow.findUniqueOrThrow. I know I just said we wouldn't use the input — my apologies, I just realized we actually do have to fetch the workflow, so let's do it while we're here. id will be input.id, and userId will be the authenticated user's ID from the context. Perfect. And let's return the workflow.
And now, in between those two, we're actually going to await inngest.send — make sure you import inngest from inngest/client. Let's quickly remind ourselves: the ID is actually the event name, so let's copy that and paste it here. I thought we had to define the data — maybe I'm forgetting something; just a second, let me remind myself how I'm executing. So it is inngest.send, and this field should be name. There we go — yes, that makes more sense, it's the event name. And we don't even need the data. So this is a super simple execute procedure which just calls a background job; we actually did this already when we explained background jobs. Now let's go ahead
inside of features/workflows/hooks/use-workflows. Let's copy useUpdateWorkflow, paste it, and rename this new hook to useExecuteWorkflow. You don't need the query client, so you can remove it, and you don't need any invalidation. The success message will be "executed", the error will be "failed to execute", and this will use workflows.execute. Make sure to modify the onSuccess so it reads data.name, because in our workflows router we return the fetched workflow. If you don't return the fetched workflow, you will see that this fails because it won't have any data — so make sure you return the workflow from our new execute procedure. Perfect. We now have useExecuteWorkflow. Now let's go ahead
inside of our execute workflow button, which we started developing earlier. Let's define the hook: const executeWorkflow = useExecuteWorkflow(), which you can import from features/workflows/hooks. Let's create a handleExecute — or handleSubmit, however you want to call it — that just does executeWorkflow.mutate({ id: workflowId }). Then modify the onClick to call handleExecute, and the disabled to be executeWorkflow.isPending.
There we go. Before you start this, make sure that you have both Inngest and Next.js running. I am doing this with a single npm script that runs both, because I've set that up, but in your case, I think we also defined some package.json scripts — let me quickly check — yes, perhaps you can use the inngest dev script, or if you didn't set that up, you can always run the Inngest CLI with npx. All of them will work. Perfect. So just make sure you have all of them running. I will now refresh my Next.js app and my Inngest development server, and let's check it out. I have no runs yet, and when I click execute workflow, I get a success message, I have something running, and it's just a sleep test for 5 seconds. Which means that we now officially have something happening when we click on the execute button. Perfect.
But right now what we should be focusing
on is this function.
So it needs to somehow fetch the
workflow that's been executed. It needs
to fetch all of its nodes. It needs to
sort them topologically and then it
needs to run a specific type of request
depending on the type of the node. So
those are our next goals.
Let's start by first checking whether we have enough information to even fetch something. So inside of executeWorkflow, do const workflowId = event.data.workflowId, and if workflowId is missing, throw new NonRetriableError — make sure to import this from inngest. When you throw this error, Inngest will not retry, because there is nothing to retry if the workflow ID is missing. The message can just be "workflow ID is missing". As simple as that: we can't proceed further, we have no idea what to execute, so let's not waste any resources. If you try now, this should fail. If I click execute workflow here — there we go — it immediately fails, and you can see there is just a single attempt and no retry: workflow ID is missing. If you didn't use NonRetriableError and just threw a normal Error, you would see different behavior. Let me click this: you can see it's running, it's failing, and it will now continue attempting for the next three retries for no reason at all. We know that if the ID is missing once, it's going to be missing for the next three attempts as well. Perfect.
Now that we have that, we can finally get rid of the sleep and instead do const ... = await step.run("prepare-workflow", ...) with an asynchronous function. First things first, let's fetch the workflow with await prisma.workflow — of course, import prisma from lib/database — prisma.workflow.findUnique, where id is workflowId, and let's also add include: { nodes: true, connections: true }. If there is no workflow, throw new NonRetriableError("workflow not found"). That's why I didn't use findUniqueOrThrow: that would just restart the query. Although, you could decide this for yourself — technically the query could also fail because the database is unreachable, so maybe we shouldn't make that case non-retriable. Yeah, let's do findUniqueOrThrow here after all; then if the database connection is bad, it will just retry, and maybe that's actually a good thing. Let's keep it like that. And now let's just return workflow.nodes — we just want the nodes — and return those from the step. Perfect, a super simple step. Now let's modify our execute procedure to actually pass that: data.workflowId will be input.id. Make sure you don't misspell workflowId; it's used exactly like this.
Let's try it now. I'm going to refresh here, click execute workflow, and see what's going on. So: prepare-workflow, then finalization. Let's open this — can I close the sidebar? I can. Perfect. And here we go: we should have two nodes, the first one the HTTP request and the second one the manual trigger. That seems to be working just fine. What we should do now is somehow sort these nodes. Why do I say we have to sort them? Well, look at it. This example is super simple, right? We could just sort by date of creation in a linear case. But what if we branch out? What if we do this, and this? That's why we need a topological sort: it can handle this type of branching. For now, please keep your workflow simple like this, so you can get similar results to mine.
Let's work on the topological sort now. In order to do that, we need one helper package installed, called toposort. There's a bunch of packages that help with this, but I found this one the simplest to use. Once you have it installed, go inside of the inngest folder and create utils.ts. Import toposort from "toposort". It looks like we also need the types for it, so let's install those too: npm install --save-dev @types/toposort, and let's wait a second.
Perfect. Let's export const topologicalSort, accepting a first parameter nodes of type Node[] (Node from generated Prisma) and a second parameter connections of type Connection[] (Connection from generated Prisma), returning Node[]. First things first: if there are no connections, return the nodes as they are, meaning they're all independent.
So let's check if connections.length is equal to zero and return nodes. What is this case? Well, if this connection didn't exist — can I remove a connection? I can — these two nodes are not connected. If we try executing them, you can decide for yourself what should happen: maybe nothing should happen in this case, maybe you should just throw an error. But we're simply handling that case for now: if there are no connections, return the nodes back, because we have no idea what the actual relationship between them is.
Otherwise, let's create the edges array for toposort: const edges will be an array of [string, string] pairs — connections.map over each individual connection, returning an array where the first element is fromNodeId and the second is toNodeId. Now let's add nodes with no connections as self-edges, to ensure they're included. const connectedNodeIds will be a new Set&lt;string&gt;, and then a simple for loop: for each connection of connections, connectedNodeIds.add the connection's fromNodeId and then its toNodeId. Now loop over our nodes: for each node, if connectedNodeIds does not have node.id, push [node.id, node.id] to our edges array — these are the self-edges. And now let's finally perform the topological sort: let sortedNodeIds: string[]. Open a try block; sortedNodeIds will be toposort(edges). Let's also remove the duplicates introduced by the self-edges — that's this part here.
We do that by wrapping the result in a Set: sortedNodeIds = [...new Set(sortedNodeIds)]. Otherwise, catch the error: if the error is an instance of Error and error.message.includes("cyclic") — make sure you don't misspell this like me — it means that the node array we received and its connections are cyclic, so we cannot produce a linear order from them. Because of that, throw new Error("Workflow contains a cycle"): something is wrong, this is not linear, we can't sort it. Otherwise, just rethrow the error.
And now finally, let's map the sorted IDs back to node objects: const nodeMap = new Map(nodes.map((n) => [n.id, n])), and finally return sortedNodeIds.map((id) => nodeMap.get(id)!).filter(Boolean) — the exclamation mark is a non-null assertion. If you have Biome turned on, this will most likely give you a warning; that's fine, we're not going to have too many of these cases, and in this one it just helps simplify the code. So now our nodes and their edges should be sorted. The only exception is a cycle, which shouldn't be able to happen, because our UX does not allow us to connect nodes like that: on the triggers, we removed the target handle, so a cycle cannot be created. But still, if someone somehow breaks that, we're going to take care of it here by throwing an error. So I think this is okay now. Let's go ahead and try it
now, inside of routers. I'm not sure this is a good example, and maybe I'm not reading it correctly, but you can see that the first node returned here was actually the HTTP request and then the manual trigger, when it should be the opposite: first the manual trigger, then the HTTP request. Then again, perhaps it just depends on how we read this array. Let's just try it so we can actually see. Inside of functions, let's rename this entire constant to sortedNodes and return sortedNodes here. And now, instead of returning workflow.nodes, we can return topologicalSort — which you can import from ./utils — passing nodes as the first argument and connections as the second argument.
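Put together, the utility looks roughly like this. To keep the sketch dependency-free and runnable, a small Kahn's-algorithm routine stands in for the toposort package that the project actually uses, and self-edges are handled explicitly (they only register the node, rather than counting as a cycle); the field names mirror the Prisma models from the transcript:

```typescript
type WorkflowNode = { id: string };
type Connection = { fromNodeId: string; toNodeId: string };

// Kahn's algorithm over [from, to] edges; throws on a cycle, mirroring
// the "Cyclic dependency" error the toposort package raises.
function toposortEdges(edges: [string, string][]): string[] {
  const outgoing = new Map<string, string[]>();
  const indegree = new Map<string, number>();
  for (const [from, to] of edges) {
    if (!indegree.has(from)) indegree.set(from, 0);
    if (!indegree.has(to)) indegree.set(to, 0);
    if (from === to) continue; // self-edge: only registers the node
    outgoing.set(from, [...(outgoing.get(from) ?? []), to]);
    indegree.set(to, (indegree.get(to) ?? 0) + 1);
  }
  const queue = Array.from(indegree).filter(([, d]) => d === 0).map(([id]) => id);
  const sorted: string[] = [];
  while (queue.length) {
    const id = queue.shift()!;
    sorted.push(id);
    for (const next of outgoing.get(id) ?? []) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  // If some node never reached indegree 0, the graph has a cycle.
  if (sorted.length !== indegree.size) throw new Error("Workflow contains a cycle");
  return sorted;
}

function topologicalSort(nodes: WorkflowNode[], connections: Connection[]): WorkflowNode[] {
  if (connections.length === 0) return nodes; // all independent
  const edges: [string, string][] = connections.map((c) => [c.fromNodeId, c.toNodeId]);
  // Nodes with no connections become self-edges so they stay included.
  const connected = new Set(connections.flatMap((c) => [c.fromNodeId, c.toNodeId]));
  for (const node of nodes) {
    if (!connected.has(node.id)) edges.push([node.id, node.id]);
  }
  const sortedIds = Array.from(new Set(toposortEdges(edges)));
  const nodeMap = new Map(nodes.map((n) => [n.id, n]));
  return sortedIds.map((id) => nodeMap.get(id)!).filter(Boolean);
}
```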
There we go. Let's run this so we can see if there are any differences. Make sure you have a connection and click execute workflow. Perfect, no errors, and I can already see the first node is the manual trigger and the second is the HTTP request. Amazing job. Obviously we don't have enough nodes right now to create complex scenarios, but you can try something like this. It will be hard to tell whether the order is actually correct, because even if I branch out it's going to look the same: a manual trigger, and after that everything will be an HTTP request, so we don't really know what the order was. Let me look here — yeah, as I expected, the manual trigger and then just a bunch of HTTP requests. But at least we can count the HTTP requests: one, two, three. I counted three, meaning all of them are now arranged in a linear sequence, so we can now map over those sorted nodes and execute each of them. Perfect. So even though we branched out, we made sure to have all three in our new array. At least that works. And now that we have topologically sorted our nodes, what we have to do is execute each node depending on its type. We're going to wrap this chapter up by preparing the registry of executors that each node will have for itself — basically, the way each node executes within its background job.
So, before we return the sorted nodes, let's initialize the context with any initial data from the trigger. Now, this doesn't make too much sense yet, because I'm about to do let context = event.data.initialData, falling back to an empty object, and we never actually pass this. You can see where we call it: we execute it right here, await inngest.send. So what would we really pass as the initial data? Well, in this specific example, where we have a manual execution, absolutely nothing; that's why it's optional. But to give you a better idea of when this will be populated, imagine a webhook trigger or a Google Form submission. Those are the cases where we are also going to trigger this job, but since it happens inside a webhook, we're going to have some payload there, and then we'll be able to pass it as initialData (or initial values, however I named it) — just the payload data. Then we'll be able to run our executors with that initial data. I'm trying to explain how this will be used in the future; if it's confusing, don't worry, it will make more sense once we actually implement the Google Form submission or something like that. So
now let's execute each node: for (const node of sortedNodes). For each node we first need to get its executor: getExecutor — a function which does not exist yet — passing in node.type as NodeType, from generated Prisma.
And now we have to develop the executor registry. I'm going to develop that inside of features/executions: create a new folder called lib, and in there executor-registry.ts. Let's export const executorRegistry as an object, and give it a specific type: Record, where the first type argument is NodeType (imported from generated Prisma) and, for now, the second one is unknown. Now you'll add a key per NodeType — for example MANUAL_TRIGGER, mapping to its executor; then NodeType.INITIAL with its own; then NodeType.HTTP_REQUEST with its own. That's the point: we're going to go through each of our node types and register an executor for each. Now that we've exported const executorRegistry, let's also export const getExecutor, taking a type of NodeType and returning unknown for now: const executor = executorRegistry[type]; if no executor is found in that object, throw new Error — and let's be specific, with a template string: `No executor found for node type: ${type}` — and otherwise return the executor.
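A simplified sketch of that registry file, with stand-ins for the generated Prisma enum and the NodeExecutor type that gets defined shortly afterwards (the two registered executors here are placeholders, not the real implementations):

```typescript
// Stand-ins for the generated Prisma enum and the executor signature.
type NodeType = "MANUAL_TRIGGER" | "HTTP_REQUEST";
type WorkflowContext = Record<string, unknown>;
type NodeExecutor = (ctx: WorkflowContext) => Promise<WorkflowContext>;

// One executor per node type; the real entries are imported from each
// node's feature folder.
const executorRegistry: Record<NodeType, NodeExecutor> = {
  MANUAL_TRIGGER: async (ctx) => ctx, // pass-through placeholder
  HTTP_REQUEST: async (ctx) => ({ ...ctx, httpRequest: "stub" }),
};

function getExecutor(type: NodeType): NodeExecutor {
  const executor = executorRegistry[type];
  // A missing entry is a programming error, so fail loudly.
  if (!executor) throw new Error(`No executor found for node type: ${type}`);
  return executor;
}
```

The lookup replaces what would otherwise be a big switch statement inside the Inngest function.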
There we go. And now we can use getExecutor here, importing it from features/executions/lib/executor-registry. I'm not sure that's the best place to put it, but it kind of makes sense — executions, executor registry, I guess. And now that we have the executor, let's do context = await executor(...), passing an object with data as node.data (cast as Record&lt;string, unknown&gt;), nodeId as node.id, the context, and the step.
But now we have a problem: this executor is typed as unknown, because that's what I typed in the registry. So let's give it a proper type instead. Stay inside of features/executions and add types.ts. Import GetStepTools and Inngest from inngest; we can limit these to type-only imports. Export type WorkflowContext as a simple Record&lt;string, unknown&gt;. Then export type StepTools using GetStepTools with the Inngest any type. Now export interface NodeExecutorParams with a generic TData extending Record&lt;string, unknown&gt;, because the data can truly be anything that these nodes carry. (Fifth try spelling "unknown" and I still got it wrong — there we go.) Give it a data: TData field, which is just anything: for example, in the HTTP request node this will be an object with endpoint, body, and method; in the OpenAI node it's going to be a system prompt, a user prompt, and a model; in Anthropic it's going to be similar (Stripe is a trigger, so it's a bad example); and any other node you create will have its own type. Basically, the data — this dynamic thing — represents whatever we have in the node's dialog, which can be anything depending on the node. That's why it makes no sense to define it strictly.
Then let's add nodeId, so we know exactly which node we are working with, and context: WorkflowContext, which again can be anything, because the context will simply expand as each node progresses — each node can use the context produced by the previous node. We can't really define that either: we have no idea what nodes will return. What will this HTTP request return? We don't know. Maybe JSON, maybe a string, maybe an error. That's why it's defined this way.
And let's add step: StepTools. Later we are also going to have publish here, but I'm going to comment it out and just add a to-do, "add realtime later", because we don't have that yet. And finally, export type NodeExecutor, generic over TData extending Record&lt;string, unknown&gt;, as a function taking params: NodeExecutorParams&lt;TData&gt; and returning a Promise of WorkflowContext.
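Assembled, types.ts comes out roughly like this. In the real file, StepTools is derived from GetStepTools in the inngest package; a minimal local stand-in is used here so the snippet stays self-contained:

```typescript
// Stand-in for GetStepTools<...> from the inngest package.
type StepTools = {
  run: <T>(name: string, fn: () => Promise<T>) => Promise<T>;
};

// The context grows as each node executes, so it stays loosely typed.
type WorkflowContext = Record<string, unknown>;

interface NodeExecutorParams<TData extends Record<string, unknown>> {
  data: TData;              // node-specific config (endpoint, prompts, ...)
  nodeId: string;           // which node is executing
  context: WorkflowContext; // accumulated output of previous nodes
  step: StepTools;          // step tools for durable sub-steps
  // publish: ... (to-do: add realtime later)
}

type NodeExecutor<TData extends Record<string, unknown> = Record<string, unknown>> =
  (params: NodeExecutorParams<TData>) => Promise<WorkflowContext>;

// A trivial executor that type-checks against the contract.
const passThrough: NodeExecutor = async ({ context }) => context;
```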
All right, very complicated, but that's
all the types we need. Perfect. We can
now head back inside of the executor registry and change this unknown to NodeExecutor from ../types — which obviously means all of these entries will now fail type-checking — and also change the unknown in getExecutor to NodeExecutor. Now we have to develop proper executors. For now I'm going to focus on the manual trigger; let's find where it is: inside of features/triggers/manual-trigger, right here. This one will be super simple. Inside of manual-trigger, create a new file, executor.ts.
Let's import type NodeExecutor from features/executions/types and export const manualTriggerExecutor with a type of NodeExecutor. For its data we just need an empty object, so this can for example be type ManualTriggerData = Record&lt;string, unknown&gt;, and we pass that in as the generic. Open an asynchronous function. The params we get are data, nodeId, context, and step — exactly the ones we just defined. For the manual trigger, the data will not actually exist, so you can remove it; I just wanted to show you the type safety works here. Let me add a to-do, "publish loading state for manual trigger", because we don't have realtime yet, but that will be the first thing we do here later. Otherwise, let's just do const result = await step.run("manual-trigger", ...) with a very simple asynchronous function which simply returns the context. Basically, this is a pass-through: there is nothing to do here, just go to the next node. Add another to-do, "publish success state for manual trigger", and return the result. There we go — we now have our first executor, manualTriggerExecutor. Let's go ahead and use it in the registry.
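The pass-through executor just described can be sketched like this; the step object really comes from Inngest, so a minimal fake stands in for it here:

```typescript
type WorkflowContext = Record<string, unknown>;
// Minimal stand-in for Inngest's step tools.
type Step = { run: <T>(name: string, fn: () => Promise<T>) => Promise<T> };

// The manual trigger has no data and no work to do: it hands the
// context straight through to the next node.
const manualTriggerExecutor = async ({
  context,
  step,
}: {
  context: WorkflowContext;
  step: Step;
}): Promise<WorkflowContext> => {
  // TODO: publish loading state for manual trigger (needs realtime)
  const result = await step.run("manual-trigger", async () => context);
  // TODO: publish success state for manual trigger
  return result;
};
```

Wrapping even a no-op in step.run is deliberate: it makes the trigger show up as its own step in the Inngest run view.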
There we go. Now obviously we have some problems here: we didn't add INITIAL or HTTP_REQUEST. Can I maybe just use Partial? I'd like a way of not having to add every single one of them, but for now just add the NodeType.INITIAL and NodeType.HTTP_REQUEST entries to get rid of the type errors. For each node type we have, we're going to develop its executor. This way we end up with something like a big switch case inside of our inngest functions folder: each of those topologically sorted nodes gets its executor, we execute it, and for each of them we extend the context a bit more. So if the first HTTP request node returns some JSON, the second HTTP request node will be able to access that context, and that's what users will be able to reference using variables — something like httpRequest.users or todos. That's how it's going to work.
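The execution loop in the Inngest function can be sketched as a fold of the context through each executor. This is a simplified stand-in (the real loop also resolves each node's executor from the registry and passes data, nodeId, and step):

```typescript
type WorkflowContext = Record<string, unknown>;
type Executor = (params: { context: WorkflowContext }) => Promise<WorkflowContext>;

// Context starts from the trigger's initial data and is threaded
// through the topologically sorted executors in order.
async function runNodes(
  sortedExecutors: Executor[],
  initialData: WorkflowContext = {},
): Promise<WorkflowContext> {
  let context: WorkflowContext = initialData;
  for (const executor of sortedExecutors) {
    // Each node sees everything produced so far and returns the
    // extended context for the next one.
    context = await executor({ context });
  }
  return context;
}
```

Because each executor returns a superset of what it received, later nodes can read values produced by earlier ones.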
Perfect. So now we have this. I'm not sure we're quite ready to try it, but here's what I want you to do: copy the manual trigger executor, go inside of executions/components/http-request, and paste the executor there. Change ManualTriggerData to HttpRequestData, rename the export to httpRequestExecutor, change the other instances to say HTTP request, and use step.run("http-request"). Let me quickly compare this with my source code — okay, yes, I think this is okay. We can now go inside of the executor registry and change the HTTP_REQUEST entry to be httpRequestExecutor. And, okay, yeah, we're kind of mixing features here: I don't love that the executor registry lives in the executions folder while the manual trigger's executor lives in features/triggers. It's getting a bit spaghetti, but let's leave it like this for now; just make sure you can import them. And yes, INITIAL will actually never execute, but we have to add something there just to satisfy the type errors. Now if we try this, I think it should work just fine. The only thing we ought to modify is what we return from the Inngest function: we should return the workflowId and the result as the context, because the context will be fully modified by the end of this for loop — even though right now nothing really happens, because each executor just returns the context back and goes to the next node. That's the only thing we do right now.
So let's go ahead and try it now. The
only thing we should see now is I think
one, two, three, four. We should see
four steps now happen when we execute.
So right now we had one step. This one
doesn't count. This is finalization. Now
we should see four steps, one for each
node. So make sure to click save here.
And then let's go ahead and try and
execute. So let me just see. Okay, fully
saved. execute workflow and let's see
prepare manual trigger HTTP request HTTP
request HTTP request perfect amazing
that is exactly what we wanted and if
you remove one and save, now it should have three steps. So let's try this again: manual trigger, HTTP request, HTTP request. So yes, not counting the preparation one, just these ones. Perfect. You can now see that our workflow background job has exactly the same number of steps, in the exact order, as the graphical interface here.
And now to end the chapter, let's
actually make an HTTP request node fail
or succeed. So keep it simple for now.
Just do a very simple connection between
a manual trigger and an HTTP request
node. And for the first example, don't
configure it at all. As in this should
say not configured. Don't pass any
endpoint URL. Don't do anything at all.
And let's focus on the HTTP request
executor. So the first thing I want to
do is I want to uh define proper HTTP
request data. So I'm going to go ahead
and define endpoint to be an optional
string. I'm going to define the method
to be an optional string. Body.
And that's it. So basically, the exact thing that's inside of... let me go ahead and try and find node.tsx, the HTTP request node. There we go. This, basically. So yeah, perhaps the method should be this.
Perfect. So we are kind of passing now
to the back end what are the possible
options for this HTTP request. And now
that we have that, we can actually bring
back data from here
because once we have the data, uh we can
actually do something with it. For
example, before we do the result, let's go ahead and check if there is no data.endpoint. As you can see, we now have autocompletion here. If data.endpoint is missing, add a to-do: publish error state for HTTP request. What we can do is throw a new NonRetriableError here: HTTP request node, no endpoint configured.
So just throw that error. And I think
that already if you try this now this
should fail. So just make sure to save
this super simple example. Make sure
this is not configured. And once it is
saved let's go ahead and execute it. And
now we should see this fail. Okay, it's
running. And there we go. So what
happened? Let's see: HTTP request node, no endpoint configured. So exactly what
we expected. The only thing that's
missing is visual feedback which we are
going to be working on later.
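The guard we just wrote can be sketched like this, with a stand-in class for Inngest's `NonRetriableError` so the snippet stays self-contained; the data shape matches what we defined above:

```typescript
// Stand-in for Inngest's NonRetriableError, so this sketch runs on its own.
class NonRetriableError extends Error {}

interface HttpRequestData {
  endpoint?: string;
  method?: string;
  body?: string;
}

// Assertion function: after it returns, TypeScript knows endpoint is a string.
function assertConfigured(
  data: HttpRequestData,
): asserts data is HttpRequestData & { endpoint: string } {
  // TODO (as noted in the video): publish an error state for the node first
  if (!data.endpoint) {
    throw new NonRetriableError("HTTP request node: no endpoint configured");
  }
}

// An unconfigured node should fail immediately, with no retries.
let failed = false;
try {
  assertConfigured({});
} catch (e) {
  failed = e instanceof NonRetriableError;
}
```

Throwing a non-retriable error matters here: a missing endpoint is a configuration problem, so retrying the step would never succeed.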
Now how could we uh do a request? Well
we could do a request by using await
step.fetch and using data endpoint here.
That is one way of doing it. And then we
could get const result like this.
Let me go ahead and do this and just
return result. You can actually remove
this. There we go. That's one way of
doing it. So if I go ahead and try and change this to https://codewithantonio.com or Google or something. Maybe this is a bad example. Maybe this will fail now because this will return text instead of JSON. But let's just see if we will at least see step.fetch. Here it is. step.fetch is now happening. So
you can of course use Inngest's built-in step.fetch. And you can see the output here, the body, right? So you can use that. But I kind of found it easier to use something a bit more... I'm not sure how to describe it. Not exactly advanced, because here's the thing: step.fetch is a wrapper around normal fetch, which we all know and love. But we also know there are certain limitations with it, right? It takes a lot of code to do a super simple POST request with it. So for that reason, I recommend that you actually do npm install ky, which is like a lightweight alternative to axios. So let's import ky from ky, obviously. If you prefer axios, you can do this with axios. If you prefer step.fetch, you can just use step.fetch, right? So this is what we're going to do now. I'm going to go ahead and do result. And instead of step.fetch, this will be step.run, http request. This is also why I prefer using my own kind of fetch execution, so that I can have a step.run independently like this.
And inside of this, I'm going to go
ahead and do const method to be data do
method or I will fall back to get
method.
Maybe I can even stop doing this all the
time. It's getting a little bit annoying
and I can just throw an error if method
is not defined because it it is getting
a little bit annoying right now. And
then what I'm going to do, let me just define const endpoint to be data.endpoint, like this. Because at this point, I think I can... yeah, I'm just going to do this. This will also throw you a warning if you're using Biome or other linters, but just leave it like this for now. We're doing a non-null assertion here because we know that at this point, data.endpoint will exist.
And now I will define options to be method, like this. Options will be a type of ky Options. Ky Options, where can I import that? Okay, so from ky: import type Options as KyOptions. Perfect.
And now that we have this, let's see if
we should also attach the body property.
So if an array of post, put, patch includes the method, and if we have data.body, then options.body is going to be our data.body.
The reason I'm doing this inside of an if clause is because... it's a bit hard to explain, but the way we will be able to write this is also using variables. So if, instead of the post method here, you do this, HTTP response data id, you have to parse that here. We are not going to be doing that now, simply because it's unnecessarily complicated. But let's just leave it like this for now.
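The option-building logic described here can be condensed into a small pure function. Field names are assumptions based on the node's data, and the Content-Type header line anticipates the fix we make later in the tutorial after CodeRabbit flags it:

```typescript
// Sketch of building a ky/fetch-style options object from the node's data.
interface HttpRequestData {
  endpoint?: string;
  method?: string;
  body?: string;
}

function buildOptions(data: HttpRequestData) {
  const method = data.method ?? "GET"; // fall back to GET
  const options: {
    method: string;
    body?: string;
    headers?: Record<string, string>;
  } = { method };

  // Only mutating methods get a body, and with it a Content-Type header
  // (the header is the fix we add in the next chapter).
  if (["POST", "PUT", "PATCH"].includes(method) && data.body) {
    options.body = data.body;
    options.headers = { "Content-Type": "application/json" };
  }
  return options;
}

const postOptions = buildOptions({ method: "POST", body: '{"title":"hi"}' });
const getOptions = buildOptions({ method: "GET" });
```

Keeping the body behind the if clause means a GET request never carries a stray body, and it leaves room to run variable templating on the body string later.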
Yeah, let's just do this. Okay, simple as that. And then let's do const response here: await ky with the endpoint and options. And let's go ahead and do const responseData to be await response.json, catch, response.text.
And finally, let's go ahead and return: spread the context and do HTTP response here, with status: response.status, statusText: response.statusText, and data: responseData. And
well, I think this might be enough for now. And this will actually fail if you try to fetch something that's not returning JSON. So instead, what you can do is check the content type. Okay: const contentType here is equal to response.headers.get, content-type. And then in here, check if contentType, question mark, includes application/json, then do await response.json; otherwise do await response.text.
Perfect. So now we can use the response
data as the actual data here.
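The content-type branch boils down to a small decision, sketched here against plain strings instead of a real `Response` object so it can run standalone:

```typescript
// Mimics what the executor does with a real Response: parse JSON only when
// the server declares it, otherwise keep the raw text.
function parseBody(contentType: string | null, rawBody: string): unknown {
  if (contentType?.includes("application/json")) {
    return JSON.parse(rawBody);
  }
  return rawBody;
}

// A JSON API response gets parsed into an object...
const jsonResult = parseBody("application/json; charset=utf-8", '{"id":1}');
// ...while an HTML page stays a plain string instead of throwing.
const textResult = parseBody("text/html", "<h1>Hello</h1>");
```

Using `includes` rather than strict equality matters because real servers append parameters like `charset=utf-8` to the header value.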
All right. And I think
that should work just fine.
So let's try it out now. So I'm going to
go ahead and just make sure this is a
get request pointing to my website here.
I will click save and then I'm going to
execute the workflow. And basically I'm
not expecting much to change except in
the finalization step in the result I am
expecting this right the body the
headers I'm expecting to see that in the
finalization step here. So let me click
execute workflow
and let's see did I develop this
correctly or not. Perhaps I made some
mistake in the KY implementation.
There we go. Finalization now has the result from the previous HTTP request node, which has the data for my... I mean, this is useless in the sense that we are not fetching any API, we are just fetching
HTML. So let me try and find you a nicer example to make this make more sense. There is this public API that you can use: jsonplaceholder.typicode.com/todos/1. So let's just use get, and save here.
And after you've saved up here, let's go
ahead and execute workflow. And now we
should have a nicer response here.
It should be in a form of JSON in the
finalization. Here we go. The HTTP
response now has data. And in here we just have some mock to-do: completed false, ID one, title something, user ID something. We have status, we have status text, basically exactly what happened here. But here's the thing. I'm not sure how this will now behave, but if you try and map, like, two HTTP requests now... let's go ahead and just copy this. Oh wow, I think I just did the circular thing. Let's remove that. Okay, that would definitely fail. So, go ahead and put number two here. Get request. Save. Click save here. And now, technically, we should have two objects with the name of HTTP response. So that could technically fail. I think this might actually cause an error. I'm very interested to see what will happen. Because... okay, it completed. So two HTTP requests happened. You can see one with an ID of two, one with an ID of one. Oh, it just got overridden. Yeah, we just get the HTTP response of the second one. So it overrides the first one. That's not good. We should
think of a solution to allow the final
step to have the data from multiple HTTP
requests. We can do that quite easily by maybe introducing a third field
here called variable name and then you
will have the exact variable name that
will be used here in the result and then
the user will be responsible for making
sure that they don't override
themselves. But I think that's why I wanted you to just use this super simple example, so you don't run into those kinds of issues. So you can see that
there is still some work to do here, but I think you kind of get the idea now, right? Each of our nodes now has a topological order and its own way of executing. In this specific HTTP request, it's quite simple, right? But later, when we create the OpenAI node, instead of doing this we're going to check: do we have a user prompt? If we don't, throw an error: user prompt is required. And instead of doing a fetch request, we're just going to be doing an OpenAI request and returning back something.
So that will be the way we're going to move forward. So let me go ahead and check. We created the execute Inngest function. We did the topological sort
and we created the executor registry. Amazing. And yeah, inside of our executor registry, we have one which just kind of passes through; this is the one inside of triggers, manual trigger,
do absolutely anything besides have its
loading state and its success state
which is purely used for user
satisfaction. So they can see that
something is happening right? Even
though this will immediately show
loading and then immediately show
success like there is absolutely nothing
happening here we just kind of make sure
that the context gets passed further. So
we can also even remove node ID. I think we don't even need node... actually, we will need node ID later for the loading and success states. But I think
that this is enough for this chapter. In
the next chapter we are going to solve
two problems. We're going to solve the
problem of our HTTP request nodes
overriding themselves and we're going to
start to do this. So instead of doing it like this, we will be able to do, I don't know, previousNode.id, right? You will basically be able to use the context of your previous nodes. So think of it like this; let me show you inside of the Inngest development server. So this http response now returns user ID 1 or
ID2 or title or something. What if you
wanted to use that? So you would do HTTP
response dot user ID, right? That's kind
of the goal that you are able to use the
data from one node into another node.
That's what we're going to be focusing on in the next chapter. So for now, I
think this is enough for us to get
introduced into this execution thingy.
So 18 node execution.
Let's go ahead and commit all of these
files here. So I have 15. You might have
16 again if you have that MROS log file
here. Otherwise, this should be it.
Uh staged all changes. Before I commit
I'm just going to go ahead and click on
main here. Create a new branch. 18 node
execution.
There we go. So now I'm in this new
branch. I have staged my changes. 18
node execution. I will click commit and
then I will click publish branch.
Now that this branch has been published
I'm just going to go ahead and open the
pull request.
So I'm going to click compare and pull
request and create pull request 18 node
execution. And now let's go ahead and
review our changes.
And here we have the summary by code
rabbit. New features. We added an
execute workflow button in the editor
when a manual trigger node is present.
We introduced workflow execution from
the app via a new action and hook. We
enabled the HTTP request nodes to
perform real requests and return
response data. And we of course refactor
the workflow runs now follow dependency
order meaning the topological sort with
improved orchestration and a unified
context result enhancing reliability and
clarity of outcomes. As always, a file-by-file walkthrough here. But here is the
sequence diagram that uh interests us
the most. So let's go ahead and try and
follow it. So when the user clicks on
execute workflow, we call the TRPC
mutation with a workflow ID param.
We then call workflows... okay, so this is the actual tRPC execution. And the only thing this does is it sends the event, workflows execute workflow.
Immediately we return back the workflow
with a success message to the user. Okay. And what actually happens here is the background job. So let's see: we trigger the function with the event. We load the workflow, its nodes, and its connections. We return that data and map it to the topological sort function. Topological sort then gives us sorted nodes, and we can do the for loop for each of those nodes, find the appropriate executor for that node, execute it, and pass the updated context along. Great. So that's
exactly what we are doing. We do have
some comments here. So let's go ahead.
First one is in the HTTP request
dialogue here. So I changed from three
individual props to just one. And in
here it says destructure and depend on
specific form methods instead of the
full form object. Uh okay I will look
into that.
Now in here it is telling me to do
appropriate error handling for KY
package. We will work on that in the
next chapter where we improve the entire
HTTP request executor itself.
Same here. So yes, we completely forgot to pass the options headers, which will lead some servers to reject the request. Again, this will be in the next chapter, where we improve the HTTP request executor altogether.
In here, it is telling me to improve the way I handle errors in case I cannot find an executor for a certain node. Uh
yes, so I could look into doing this. I
think it's fine as it is, but yeah, it
wouldn't hurt to have even more strict
checks here. I will look into that as
well. Same thing for this. So I think the error message says cyclic, but in here it's telling me cycle. So I will just
look at the source code of toposort or
documentation and see which one of those
it is. And in here, yes. So I told you
that we can use the non-null assertion here with the exclamation point. So in
here it suggests not doing that and
instead just quickly checking if a node
is missing and throw an error. Perhaps
that is safer. Yes, we could do that. Uh
okay, amazing suggestions from code
rabbit. I will take a look at them and
for the next chapter maybe prepare a few
that I think are important so we can
proceed. But for now, let's go ahead and
merge this pull request. Amazing job.
This was a complicated chapter. Let's go
back inside of the main branch and let's
make sure to click on synchronize
changes and okay. And let's go ahead
inside of our source control. Open the
graph. And in here we should now see 18
node execution. Amazing. That means
everything here is merged, which I think
means we are ready to go ahead and wrap
this chapter up. So, we pushed to
GitHub, created a new branch, created a
new PR, and reviewed. Amazing, amazing
job, and see you in the next chapter.
In this chapter, we're going to continue
our work on executing nodes by fixing
some issues we discovered in the
previous chapter.
Let's start by fixing the CodeRabbit-reported issue about our missing content type header, and let's discuss the cyclic error message that we decided to look for whenever toposort is happening. So I'm going to go ahead and
open the previous pull request right
here. And this is the first suggestion
and it is completely valid. We had forgotten to add headers to our post, put, patch request. So let's go ahead and do that to ensure that our HTTP request node can properly make those requests. So what we have to do is we
have to find executor.ts
inside of features executions components
HTTP request folder. So let me show you
how that looks right here. Features
executions components HTTP request
executor.ts.
And in here, when we decide that this will be a post, put, or patch request, besides filling the body we also have to do options.headers. Let's do this properly: options.headers, and we have to add Content-Type, and the type will be the exact one that we are querying for later. There we go. This will ensure that our post, put, or patch request doesn't get rejected, because headers are an important part of an HTTP request, of course. Great. So, that's
one thing resolved from our previous
pull request here. There are some other
things here such as validate node types
and data at runtime. Now, depending on
your personal preference, you can of
course do this. This is a good advice
but let me show you exactly where this
is happening. So this is happening
inside of our source folder, inside of inngest, functions.ts.
So basically, what it's telling us to do is the following. You can see that when we pass this node.type, by default it doesn't have a proper node type. So we have to cast it as such, right? Or maybe we don't even... yeah, okay, it looks like it's working, but still, CodeRabbit is telling us to check at runtime if it's actually a part of node type using this, and then throw an error if it's not.
Here's why we don't have to do that. First things first, I had no idea that we don't have to cast it as node type, so I'll have to retrace my steps to fully confirm that. But just for now, I don't want to change the code I've written, because I don't want you to have any errors, right? So if you have this, leave it, just in case. And instead,
if you go inside of get executor here
you can see what happens. So if that
type, which is a node type, does not
match to whatever we define in our
executor registry, we're going to throw
an error. So you don't really have to
worry about node type not being
compatible. So that's why I chose not to
do this suggestion right here. But of
course depending on your preference, you
might think this is a great runtime
validation. So you might do it. Same
goes from data which is later uh
retrieved from the executor constant.
For now I'm going to leave this as is.
And now let's go ahead and discuss the
other problem that we have and that is
key collision. So what is key collision?
In our previous example, we had this
type of
schema, right? Very simple. When I click
execute, I'm going to get this and then
I'm going to get that in my results. So
if I go ahead and visit my... let me go ahead and just find my Inngest server here, localhost:8288. I suggest you go here too. Having some trouble clicking on it. 8288. Here we go. Completed. And if you go
inside of finalization here, you will
see that the result simply says HTTP
response with data of ID 1 inside.
Uh that's pretty straightforward, right?
Because we have an HTTP request which
requests to-dos by an ID of one. If I
change this to to do two and if I click
save here and then if I click execute
workflow again I'm going to have another
request here finished, and this time this request's finalization will be different: it will include the ID of two. Perfect.
but what if I do this? What if I add
another HTTP request right after this
one? And if I change this
let's make this to be to-do one. Click
save. And change this to be todos two.
And click save. Save the entire
workflow. And then click execute again.
So what's going to happen now? So I will
have, as you can see, two HTTP requests
fired. But in the finalization block
you will see that I only have one HTTP response, only ID 2, which means that this one was completely overridden. You can see and prove that by going inside of these individual ones. You can see that the first HTTP request topologically called the to-do with ID of one, but the second one called with to-do two. But both of them have the same key here, http response, which means that in the finalization block only the last one gets written.
That's because we have a key collision.
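A minimal demonstration of the collision, and of the namespacing fix we're about to build (the variable names here are examples):

```typescript
// Each node extends the context by spreading it and adding its own key.
type Ctx = Record<string, unknown>;

// The bug: both nodes write the same "httpResponse" key, so the last one wins.
let collided: Ctx = {};
collided = { ...collided, httpResponse: { data: { id: 1 } } }; // first node
collided = { ...collided, httpResponse: { data: { id: 2 } } }; // second node overwrites

// The fix: namespace each node's result under a user-chosen variable name.
let namespaced: Ctx = {};
namespaced = { ...namespaced, myApiCall1: { data: { id: 1 } } };
namespaced = { ...namespaced, myApiCall2: { data: { id: 2 } } };
```

With namespacing, the user owns the keys, which is also why they become responsible for not reusing the same variable name twice.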
So now let's go ahead and fix that. And yes, there is this thing right here, the cyclic error message. CodeRabbit told us to double check if that's the correct one. So inside of my utils in the source inngest folder: yes, this is correct. I went into the source code and I can confirm that this toposort library throws an error message which includes the word cyclic, not cycle. So this is correct. Just wanted to resolve that, because I think I promised that I would resolve it. And
now we have to fix this bug that we
have. Right. Basically, it's not allowing us to chain our HTTP requests. In order to do that, we're going to have to introduce a new field to the UI. So each of our nodes, besides having a method and endpoint URL, right, those specific things for itself, is also going to need a variable name. And I
suggest we add that as the first field
in every node that we have because it's
going to be a very important one and
it's always going to be required. So
let's start from the UI side and work
our way into the executor and finally
assigning it to the context.
The first component I want to go in is
node.tsx.
So let's find it together. So we refresh
our knowledge. Uh it is inside of source
features executions components http
request node.tsx.
So besides having an endpoint, method
and body, we are also now going to have
variable name
like this. The reason we are making this optional is not because it's actually optional. It's because it will not be added right away, right? So we cannot always rely on this existing. It will be required for executing the node, but it will not be required on the initial node, if that makes sense. The user will have to add this later.
And once we have this variable name, what we can do with it is... well, we don't have to worry about it too much,
because you can see it's automatically
going to be spread here. It will
automatically be passed here in the
default values because we refactored our
code previously. But maybe we can
improve the description so the user can
visually see for example right here in
this HTTP request node. Perhaps we can
add some kind of variable name here.
That's like one idea I have. I'm not
sure. Maybe later. Let's Let's leave it
like this for now and then we'll see.
Now that we have the variable name, we
can go ahead and safely go inside of
dialogue.tsx.
And now we have to modify our form
schema. So our form schema now needs a
new field called variable name.
Let me fix it. Variable name is going to be a type of string. Now, we could just leave it to be any type of string, but it needs to be a string that will be compatible with a JavaScript object key, right? So it should not be able to be something like this. I think this might not be valid, or maybe it is, because whitespace is valid. But basically, we want to make sure that whatever the user writes inside will not throw any runtime errors when assigned to the key of an object. So because of that, we're going to add some regex to validate it. So z.string, like this, and after string, let's go ahead and make it required.
ahead and make it required.
Let's give it a message variable name is
required.
And then let's add a regex. I'm just going to copy the regex so you don't have to see me type it here. And let's go ahead and make sure that we have an error message if it's not matched.
Variable name must start with a letter
or underscore and contain only letters
numbers, and underscores.
like this. So, this isn't an unknown regex; you can find it quite easily on Google. Basically, it does exactly what the message says. It allows uppercase and lowercase letters as well as numbers from 0 to 9, plus the dollar sign, because that's also valid inside a JavaScript object key.
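As a sketch, the pattern likely looks something like this; the exact regex in the source may differ slightly, so treat this as an assumption that matches the description:

```typescript
// Starts with a letter, underscore, or dollar sign, then any mix of
// letters, digits, underscores, or dollar signs (a JS-identifier-style key).
const VARIABLE_NAME_REGEX = /^[A-Za-z_$][A-Za-z0-9_$]*$/;

const valid = ["myApiCall", "_private", "$data", "call2"].every((name) =>
  VARIABLE_NAME_REGEX.test(name),
);

// Leading digits, spaces, hyphens, and empty strings should all be rejected.
const invalid = ["2cool", "my api call", "my-call", ""].some((name) =>
  VARIABLE_NAME_REGEX.test(name),
);
```

Anchoring with `^` and `$` is what makes the whole string validate, not just a fragment of it.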
So I think this is good enough. You can
of course you know for yourself research
if you have a better reg x but I think
this will work just as well. Now let's
go ahead and let's add it here in the
default values here. Variable name will
be default values dot variable name or
an empty string.
Same thing in the form reset default
values variable name or an empty string.
And now what we have to do is we have to
add a form field for it. So let me try
and find the simplest one. I think
that's this one, the endpoint URL. I'm
just going to copy it because it's very
simple. And I'm going to add it above
the first form field because we said we
want this to be the first one. I'm going
to change the name to be variable name.
And I'm going to change this to be
variable name.
And the placeholder can be whatever we
want. For example, my API call.
Basically allowing the user to know
hey, you can name this whatever you
want. And in the form description
um, let's go ahead and describe
something like this.
Use this name to reference the result in
other nodes. Then I'm using this to
create a space. And then I open an
object so that I can do double curly
brackets because that's how we're going
to do templating. For example, my API
call HTTP response data and the
placeholder here is the same. So if user
names this test, later in other nodes they will be able to do test.httpResponse.data. Perfect. So now once we do that,
if you go ahead and open an HTTP
request, you should see a variable name.
And I just got a really cool idea. If we
change the variable name, maybe we could
also change the description to make it
even clearer to your users what this
will be used for. Let's see if I can do
it very quickly. So just as I did this, let me go ahead and do const watchVariableName = form.watch, variable name, like this. And then in here, in the form description, let's see if I change this to backticks. Whoops. Change this to backticks. And if I specifically do this, watchVariableName, and maybe make this fall back to my API call, so it looks better.
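Stripped of the React plumbing, the watched-value-to-description logic boils down to something like this; the fallback name and the hint format are taken from the placeholder we used, so treat the exact wording as illustrative:

```typescript
// Builds the form-description hint from the live (watched) variable name,
// falling back to the placeholder when the field is still empty.
function variableHint(watchedName?: string): string {
  const name = watchedName || "myApiCall";
  return `Use this name to reference the result in other nodes: {{${name}.httpResponse.data}}`;
}

const withName = variableHint("test");
const withFallback = variableHint("");
```

In the actual component, `form.watch("variableName")` supplies `watchedName`, so the hint re-renders as the user types.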
There we go. Exactly what the variable
name is. That's what the user will see.
So it will be easier for them to
understand how they're going to use this
later in other nodes, right? So test or
if nothing is written, it's just going
to use the placeholder one. I think it's
cool. And if I save, it should give me
an error because variable name is
required. So let's go ahead and call
this uh my API call one like this. And
let's click save. And let's do this one
uh my API call 2. And let's click save.
There we go. So now we have different
variable names for two identical uh
nodes with of course different endpoint
URLs. So what we have to do now is we
have to use that variable name inside of
the actual executor.
So, first things first, and well, last things last, we have to go to the executor. So features, executions, components, http request, executor.ts.
And now let's get inside of the HTTP
request data and let's go ahead and
let's add the variable name.
Again, it's going to be an optional
string.
And what we're going to do is well, you
can define how strict you want to do be
with this. I would personally be as
strict as if we were missing an
endpoint. Right? If there is no variable
name, it's a nonretable error. Right? So
let's remove this to-do right here.
Actually, no. Yes, we also have to publish the error state for this. So, NonRetriableError: variable name not configured, like this.
And now let's go all the way down to the
return method and we have to slightly
modify it. So let's go ahead and do
const response payload.
And let's go ahead and copy the HTTP
response from here. So basically our old
payload exactly as it is. And now we're
just going to slightly modify this by
spreading the context and instead of
storing this under HTTP response, let's
store it under data variable name and
the response payload inside.
There we go. So let's see: a computed property name must be of type string, number, symbol, or any. Data.variableName.
I was expecting this to take care of
that. Maybe I should also check for
string. Let me just, for my curiosity, do if data.variableName right here. Does that fix the issue? That seems to fix the issue. Okay.
Hm. Let me just pause the video a little bit and see what is the best way to handle this
here. Here's a potential solution. We could... we can do both, actually. Let's do if data.variableName, like this, and then return like that. And we can also do another return here, which can serve as a fallback to the direct HTTP response for backwards compatibility.
And let's just go ahead and do what we
used to do. So context
and then append the response payload
like this.
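One way to satisfy the compiler here, sketched as a standalone function: narrow the optional field into a local string constant before using it as a computed key. This is an assumption about a cleaner fix, not necessarily what the final source does:

```typescript
interface HttpRequestData {
  variableName?: string;
}

function buildResult(
  data: HttpRequestData,
  context: Record<string, unknown>,
  payload: unknown,
): Record<string, unknown> {
  // Narrow the optional field into a const; after the throw, TypeScript
  // knows `key` is a plain string, so the computed property type-checks.
  const key = data.variableName;
  if (!key) throw new Error("variable name not configured");
  return { ...context, [key]: payload };
}

const result = buildResult(
  { variableName: "myApiCall1" },
  { prev: true },
  { id: 1 },
);
```

The reason the inline `data.variableName!` version fights the compiler is that the property access is re-evaluated at the computed-key position; binding it to a local const gives the narrowing somewhere to stick.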
And let's just try it out for now. I think this should work now. So let's see: everything we changed in executor.ts is, we checked if we don't have the variable name, and we throw an error because it's now required. And we added another check here, if we have it, simply because this type error is giving us a hard time, basically. So I made sure to wrap that inside of another if clause here, so I can safely return with that variable name. But if somehow all of this validation fails or something, let's just fall back to what we have right now, even though it doesn't work as well as we wanted it to.
But I think that now this should work.
Okay. Um, I think I should be able to
save this because we can extend
individual node data without extending
the schema. That's another cool thing
about React Flow and the fact that we
store this in a JSON object. So I think
that now the situation should be
different. Let's click execute workflow
and let's follow it again. So it is running, and there we go. You can see that now we have a my API call 1 with its own HTTP response, and then... oh, we have another my API call 1? But here it is, thankfully, my API call 2. Perfect. It's just that this one was there first, so I thought something had happened. And here is the final result.
The result object now has my API call
one with HTTP response and data of ID 1.
And down here my API call 2 with HTTP
response data and ID of two. And this
way we resolved our problem of key
collision. So we have added the variable
name to the UI and we added the variable
name inside of the context. I'm not too happy about this part right here, that I have to check, and this fallback, because this fallback really doesn't make any sense; it's never going to get to that, right? If we somehow remove the variable name, it will just throw an error, and honestly, since we are still in the development phase, we don't really need to offer any fallback. But this currently is a solution to not have that type error here. For the next chapter, I might research a bit more into how I can give you a prettier solution, so that we can simply validate our variable name here. Perhaps I also have to add: if typeof data.variableName is equal to string. Maybe that's what I had to do. And then, if I remove this...
Yeah. No. Okay. Maybe I'm missing
something very obvious. So I will make
sure to take a deeper look into this in
the next chapter and tell you if there
is a prettier solution to this. But as
you can see, as I just tested, we
officially fixed our overwriting
problem. So let's go ahead and quickly
go over these three files that we have
modified. So I think we actually started
with the executor, right? And the only
thing we did here first was we added
options.headers. Make sure you have
that. Make sure you didn't misspell any
uh key and value there. Then we went
into node.tsx
where we very simply extended with a new
variable name. And I don't think we
modified anything else. If you want to, you can add it into the description so that users can visually see what variable that is. You can add it right before the get method, whatever you prefer.
And then we went into the dialogue. We added a variable name with some regex, which basically allows a to z letters, lowercase and uppercase, the dollar sign, and numeric characters. Perfect. And we added that new form field and a cool little watch util that directly shows the user how they can use it in other nodes. Perfect. So, what I want to do
now is open a pull request and merge
those changes. Let's go ahead and open a
new branch. Let's call it 19 node
variables.
Let's go ahead and click on stage all
changes.
19 node variables.
Let's go ahead and click commit. And
let's click publish branch. Now, let's
go ahead and open a pull request.
And since this was a fairly simple pull
request, I don't think we have to go
through the entire uh review simply
because it's just three files. Some
quick fixes I want to resolve before
moving on to other chapters. I'm going
to immediately merge this pull request
and let's focus on uh adding more
interesting nodes finally. So, we pushed
to GitHub. We created a new branch, a
new PR, and we reviewed locally right
here all the files. Of course, you can
go through Code Rabbit's review for this
one. I just think it's a very short pull
request. So, now let's go ahead and
change this branch back to main. And
let's go ahead and click on synchronize
changes right here. And let's click
okay. And after that, as always, I like
to confirm by going inside of my source
control tab, clicking inside of graph
and I want to make sure that 19 is the
newest thing I just merged, and it's
just those three files. Amazing. That
means everything is right. And see you
in the next chapter.
In this chapter, we're going to continue
working on node execution by
implementing templating language. This
will be a particularly powerful feature
of our project and I'm so happy that we
are finally in this chapter because I
think it's very very interesting how we
are going to achieve this. So the first
thing we have to do is we have to
refactor one implementation from the
previous chapter. In the previous
chapter we added variable name which
basically fixes the key collision issue
which we discovered two chapters before
this. So let's go ahead and quickly fix
that before we discuss exactly how we're
going to implement this.
So we're going to go ahead inside of
source features executions components
HTTP request executor.ts.
So what problem do we have? Even though we have an if check for the data variable name missing, where we throw an error, if I remove this if clause right here, we still get a type error. And because of that, we have this weird solution where we've wrapped this inside of an if clause and then fall back to this one. Since we made a decision that variable name will be required, we can safely remove this fallback. Now, in
order to fix this, we can very simply
define inside of HTTP request data which
ones are actually expected to be
required. Keyword expected. So, let's
remove the question mark for variable
name and let's also remove it for
endpoint. This way, these two are
expected to always be required. And we
can also do the same for the method
actually.
But even though we expect them to be required, the only way we currently validate that is on the front end using React Hook Form, which is fine but not good enough. We need to be secure. That's why we are doing these things right here: we throw an error if any of our required fields are missing. So let's just do the same thing for the method here. Very simply: method not configured. As simple as that. If you want to,
you can continue adding these types of errors, simply so they're easier to find in the logs. Great. So now we make these required in the types, but we also do runtime validation just in case they are not passed, because users can easily bypass the front end. We want to make sure these are actually passed so that we can properly make the request. And
now once you've done this you can
actually remove the question mark here
because this will now always be
required.
And let's see the method. Same thing you
don't have to fall back to get because
method will now always be required.
Perfect. So that's how we resolved that
first issue. Our code is much cleaner
looking now. So let me quickly mark that
as completed.
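The pattern we just applied, required in the type plus a runtime guard, can be sketched like this (the field names follow the video; the exact interface shape in the repo may differ):

```typescript
interface HttpRequestData {
  variableName: string; // no "?": required in the type now
  endpoint: string;
  method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE";
  body?: string; // body stays optional
}

// TypeScript types are erased at runtime, and users can bypass the React
// Hook Form validation, so we re-check the required fields before running.
function validateHttpRequestData(
  data: Partial<HttpRequestData>
): HttpRequestData {
  if (!data.variableName) throw new Error("Variable name not configured");
  if (!data.endpoint) throw new Error("Endpoint not configured");
  if (!data.method) throw new Error("Method not configured");
  return data as HttpRequestData;
}
```

With the guard in place, the executor below it can use `data.endpoint` and `data.method` without question marks or fallbacks.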
So what is template syntax? How are we
going to implement that? And why do we
even need that? Well, take a look at
this scenario that I've prepared right
here. So very similar to our previous
project. I have one HTTP request with a
variable name my API call 1 with a get
method and a simple API placeholder
calling to-dos with an ID of one. And I
have slightly modified the second one.
So this one is called my API call 2
which instead of calling to-dos calls
users. So let me rename this. This will
be my users.
Let's call it user
and this will be my to-do.
Perfect. So make sure you click the big
save button and go ahead and execute
your workflow. Just make sure you have
both Inngest and Next.js running and let's
take a look at what happens. So, I'm
going to expand this as far as possible.
Let's go inside of finalization. In the
result, we now have to-do object with
HTTP response data. And you can see it
has properties of the to-do, but we also
have a user object with all of the fake
mock user information. So, great, we can
successfully fetch one hard-coded to-do
and one hard-coded user. But what if I
specifically wanted to do this? What if
I first wanted to fetch the to-do and
then I wanted to use the to-do user ID
in the next node. So what if in here I
wanted to do this? I wanted to pass. So
this would be to-do
http response data dot user ID. Quite
long. We could maybe shorten that but
this is how it is accessible for now I
believe. Let me just confirm to do HTTP
response data user ID with capital ID.
There we go. So if I click save here and
if I just try running this it's going to
fail I believe. Let's see. There we go.
You can see that the second HTTP request
is failing with 404 not found. So I'm
going to cancel it now so we don't waste
any time. So now we're going to imple uh
fix that by implementing the template
syntax. For that we're going to be using
handlebars. So let's go ahead and
install handlebars.
So I'm going to go ahead and open a new tab: npm install handlebars. And then I'm going to show you the exact version that I'm using.
Handlebars 4.7.8.
And let me just quickly try importing
that. So import handlebars
from handlebars. And looks like type
safety is here as well. Perfect.
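Before wiring it in, it helps to see what `Handlebars.compile(template)(context)` actually does. Here's a toy stand-in I wrote for intuition only — not the real library, which also handles helpers, escaping, and block syntax — that resolves simple `{{dotted.paths}}` against a context object:

```typescript
// Toy stand-in for Handlebars.compile: returns a function that replaces
// every {{dotted.path}} in the template with the matching context value.
function compileSimple(template: string) {
  return (context: Record<string, unknown>): string =>
    template.replace(/\{\{\s*([\w.$]+)\s*\}\}/g, (_match, path: string) => {
      let value: unknown = context;
      for (const key of path.split(".")) {
        value = value == null ? undefined : (value as Record<string, unknown>)[key];
      }
      return value === undefined ? "" : String(value);
    });
}

// The substitution we're about to build with the real library:
const url = compileSimple(
  "https://jsonplaceholder.typicode.com/users/{{todo.httpResponse.data.userId}}"
)({ todo: { httpResponse: { data: { userId: 1 } } } });
// url === "https://jsonplaceholder.typicode.com/users/1"
```

This is exactly why we install the real package instead: handlebars gives us the same substitution plus helpers and escaping for free.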
So now that we have handlebars, let's go
ahead and let's try and fix this. So
what I'm going to do is I'm going to go to this endpoint right here and I'm going to call handlebars.compile. I'm going to pass in data.endpoint, and then I'm going to pass it the context, like this. So what's going to happen now is handlebars.compile will read data.endpoint, which is going to look something like, let me try and mock this, some URL containing to-do dot HTTP response dot data dot user ID. Handlebars.compile will read this and then use the context to populate it. So what is the context? Each of our executors has the context, which is basically the previous node's data, and it gets updated with each node in the sequence. So if you tried to use a variable in the first HTTP request, it would always fail, because there is no context before the first node. But the second one will have context: it will have the to-do context, because we named that variable to-do. So I think just by doing this, it should already work. Let me just try a console.log of "ENDPOINT" and the endpoint.
I'm using capital letters so it's easier
to find it. And if we've done this
correctly, we should see the endpoint
logged out. Well, if it works, we will actually be able to inspect the URL itself. Right? Make sure you save that file. I think I've done it correctly, maybe I'm missing something, but it should work. And, there we go: if I use to-do HTTP response data user ID here in the users endpoint, this should compile to users/1, because the to-do has a user ID of one. So let me go
ahead and click save here one more time.
Let's go ahead and click execute
workflow
and let's see if anything will change
now or maybe we'll need to fix
something. And given that it completed
I think we just fixed this. Let's
quickly take a look. So this is the
first HTTP request and it's stored into
a to-do variable HTTP response data with
user ID of one. And then in the second
one, let's go ahead and scroll down. We
now have a user. And you can see how we
have the context of the previous HTTP
response which allowed us to populate
the user ID. And I think this works as
well. Let's just confirm by finding the
ID here. And it is one. Amazing. Can I directly see the URL somewhere? Let me try here, maybe. There we go: the endpoint is jsonplaceholder.typicode.com/todos/1, and you can see this one is users/1. So it successfully compiled to exactly what we expected.
Amazing. Amazing job. It really is that
simple to do that. Now the only problem
is you currently can't do that with JSON
objects, right? And I mean right now you
don't really need to do it. We use the
super simple example, but what if you
change this from get to post? Well, now
things get a bit more complicated
right? Because sure this is as simple as
before, but what if you wanted to use
the JSON variable object? Well, for that
we need to do some slight modifications
here. We need to register a helper to
stringify objects. So, I'm going to do
handlebars.registerHelper. Let's go ahead and name it "json", grab the context, and immediately return JSON.stringify with the context, null, and two. As simple as that. There
we go. Now, once you've registered that
we can go ahead and go inside of our
HTTP request step here. Go inside of
post, put, and patch. And now for the
data body, what we can do is we can
first try and do the following. So
let's do const resolved equals handlebars.compile of data.body, pass in the context like that, and then let's do JSON.parse of resolved. This will basically protect us from any invalid JSON being passed. That's why we are doing that. And then instead of options.body directly being that, we can just do this.
Great. So yes, while we're on post, put, and patch: I wanted to add the body to be required, but I don't think it's a good idea to make body required. Maybe we should just treat it like an empty object if it's not passed at all. I'm not sure.
Let me just quickly test: can I do JSON.parse on an empty string? I cannot. Okay. So yeah, maybe we should fall back data.body to an empty object. Actually, it depends on whether handlebars can compile an empty string. You know what, let's just try it so I stop guessing things. I'm just going to check what is the proper endpoint URL to test this on.
Let's check quickly. Uh I think we can
just do to-dos
like that. Change the method to post and
just pass something like this. I don't think indentation matters that much, but what does matter is that you don't have a trailing comma at the end. So only use commas if another entry follows. And make sure that your keys are wrapped in double quotes. Otherwise, that's not valid JSON.
So we have foo, we have bar, and user ID. JSON does allow numbers, so this should work. Basically, this should now create a to-do, I think. So, let's go ahead and call this created-todo, like so. Oh, great, our validation works. Created-todo.
Perfect. Let's click save here. And now
we should have one get request and then
one post request. Right now, not using
any variables at all. So, let's just
execute workflow. Let's see. Does that
work? Uh, looks like it's working. There
we go. Created-todo has an HTTP response with a body of bar, an ID of 2011, and a title of foo. Amazing.
And now let's go ahead and let's try
reusing something here.
So, okay, I can use this. How about we try user ID dash, and then let's use to-do dot HTTP response dot ID. So, in the title of the newly created to-do, we're going to use the variable that we previously used here to fetch the user. It doesn't make much sense semantically; we should technically use it for the user ID field, but I think it will be more visible if we do this.
So, let me go ahead and I'm just going
to zoom out a bit so I can click save.
We should probably improve that. Let me
go ahead and save the project overall.
You always have to save it. And then
click execute workflow. And let's see if
that will maybe be more visible.
Created to-do. There we go. Title user
ID-1.
So it is officially working. We can
successfully template both in the
endpoint object and in the data.body
object. Now let me just check what
happens if I pass an empty request body
and if I click save. So I'm just trying
to see what bugs can we encounter here.
Now this is completely fine to fail but
I just want to see why it fails. Okay, I
see. Because of unexpected end of JSON
input. I see. Um I'm not sure if we
should handle this or should the user
handle this? So should the user know
that the request body always needs to be
something even just an empty JSON
object? because I think that now when
you save this and execute it, it's still
going to fail, but it should fail for a
different reason.
Uh yeah, okay, it didn't fail. It
managed to create it. So yeah, uh you
can make a decision. Do you want your
users to know that they should always
add something to the request body when
they select a post method even if it's
just an empty object or will you detect
when body is empty and then maybe change
it to this?
Perhaps if you do this
then you kind of save your users the
hassle.
Let's see if I remove this now and click
save here. Click save here.
And if I execute it, yeah, I think it
will like save your users some headache.
Let's see. Yeah, now it works. It works
exactly the same as before. Perfect. But
still we haven't tested one more thing
and that is the variables, right? So
what if I wanted to, let me see if I can find a better example, this one: what if I wanted to add the entire data object?
Like I literally wanted to create
another to-do with identical HTTP
response here. Now, right now, what we
can do is
let's see what actually would be the
proper way of doing it. I'm not even
sure myself. So
I think that what we should be able to
do now is pass JSON
to do HTTP response
data.
I think this should work now. I'm not
100% sure. Let's see. We registered the
JSON helper. So basically, whatever you wrote here, you can name it anything you want; it's the name of the helper, so it's like a reserved keyword here. If I named this json2, I would also have to register json2 as the helper. That's why we called it json here. And basically, it should be able to compile that entire object and just create two identical to-dos. So the flow
is now fetch the to-do under ID1 and
then create a new to-do with the JSON
entire JSON object that we fetched from
the previous node.
That is a pretty reasonable request
actually. Let's go ahead and see if that
works or do we have to do something
else? It looks like it has failed. Let's see why. Mhm: expected property name or '}' in JSON at position 4.
All right, I think that we are making
some mistake here. I'm just not entirely
sure how. Do we maybe need to wrap this
inside of curly brackets like this?
Maybe. I'm not sure.
Let me try and add some simple indentation, and let me click save, because I'm trying to think about how this compiling happens. Does it happen like parsing? Looks like not; again, a syntax error. Okay, the way I would debug this now is of course by logging, right? I want to see what exactly is going on here. So let me go ahead and console.log the resolved body, and this way at least I will be able to see what's happening, because that's always the problematic part. And I will remove the curly brackets; I want to see it as a string, right? I want to see what's going wrong. So I'm going to remove the curly brackets again, and I'm going to remove all whitespace too, so I only have this. Let me click save here, and save up there.
And let me click execute workflow.
Again, I am expecting this to fail.
Perfect. We can actually cancel all of
the running ones because we know they're
not going to succeed. So we don't waste
any time. And now I'm going to go here
and I'm going to take a look at them
now.
Oh, that's interesting. Body. Mhm. So, it managed to do it, but it's parsing this completely incorrectly. That is interesting. Hm, I would have assumed that we fixed that right here.
I'm going to have to pause the video
just a little bit and research and I
will do my best to tell you exactly how
I debug this. Uh, of course you can do
it yourself. You can pause the screen
too if you like a challenge. Basically
the issue that's happening right now is
we do manage to get the exact body from
the variable. But you can see that the
quotes are broken, right? These quotes
should be actual quotes. So, I'm going to go ahead and try and resolve this. All right, I think I might have a solution. Unfortunately, not a cool way of discovering it: I just asked ChatGPT.
So, yeah, not exactly too cool. But
let's go ahead and modify this a little
bit. I never liked these direct returns, so I'm going to open it up like this. Then let's do a const jsonString equal to JSON.stringify, passing in the context, null, and two, and then I'm going to return new Handlebars.SafeString, passing the JSON string inside. I always found these to be more readable. So: jsonString and SafeString, and finally return the safe string. I described the problem to Claude and it told me to try this. So I'm going to go ahead and
execute this again with no code changes.
I mean with no changes. And looks like
that actually fixed it. Amazing. That's
really cool. So let's see.
Created-todo has completed of false, an ID of 2011, this title, and this user ID, and our previously fetched to-do is identical except for the ID. That makes sense; we cannot force the ID, since this is a public API, so it's fine that the ID wasn't honored. Amazing. I think we successfully completed templating, and it wasn't even that complicated. We had a little hiccup here, but thanks to Claude it was easily fixed. This is the type of thing where I really like having the help of ChatGPT and Claude, because it's really the same as googling around. I could have just read the documentation, obviously, but it literally took like 5 seconds with Claude. Excellent. Now that we have this, you can decide for yourself how deep you want to go with it, because technically you could allow your users to do the same for the variable name, right? You could change this to be to-do and then use, what would it be, to-do dot HTTP response data user ID, or like user; change this to get, and this will be users, and then the exact same hard-coded value as up here, right?
If you want to allow your users to have
dynamic variable names
but yeah, you can see that this directly conflicts with our regex rules. Obviously you could modify that if you really wanted to: you would just have to make a looser regex, and then, before you assign data.variableName, you would create some compiledVariableName with handlebars.compile, pass in data.variableName and the context, and the compiled variable name would be used here. That's if you want to do that. I personally will keep the variables as safe as possible. The same goes for whether you want to fall back to an empty object or not.
Uh, great. So, we are halfway. I mean
more than halfway. Just like one more
thing left for us to be able to go to
other nodes. The reason I'm not building
any other nodes now is because I want to
make this HTTP request an example where
we have everything. And then when we
need to create new nodes, I can just
refer to HTTP request node where
everything works. Everything is
finished. So I don't have many bugs nor
any other things. So that's why it's
taking me so long to complete this node
is because I want to make sure it
literally has everything and because
it's a simple example, but again complex
enough that it allows us to do
templating and all those other things.
And I think this templating thing is
actually the most crucial part of a
useful workflow project, right? Because
if this wasn't possible, it really
wouldn't be that cool. I think you will
agree with me on that. And I believe
that is all we wanted to do. We now
allow dynamic body and we allow dynamic
endpoint. We can do individual strings
or we can do JSON objects. I will, of
course, test this a few more times just
in case I've missed out something big
and maybe you've noticed it and you're
wondering if I'm going to fix it. I
promise I will test this out until the
next chapter. So, in the next chapter
if I notice something, I will uh show
you how to fix it, if I find it. But I
think it's working pretty well. I think
it might be perfect as it is. Awesome.
Let's go ahead and push this to GitHub.
And let's go ahead and just quickly hear
what uh code rabbit thinks about this
because I am quite interested. So I'm
going to be 20 uh node templating I
believe is the branch name. I'm going to
go ahead and just stage all of these
files. Not too many changes. 20 node
templating. I'm going to commit and I'm
going to publish the branch.
Once the branch has been published
let's go ahead and open a pull request
here.
And let's wait for the review.
And here we have the summary by code
rabbit. New features. We added template
support for HTTP request endpoints and
request body content, enabling dynamic
value substitution. Improvements. We
enhanced HTTP request validation with
more descriptive error messages. We
improved response handling consistency
for HTTP request execution.
In here we have a sequence diagram. But
I think it is pretty clear how it works.
So we execute HTTP request with context.
The context is basically what is
assigned from the previous node. That's
why I explained that for the first HTTP
request there will be no context but for
the second one there will be context
which is the result of the previous node
stored in the variable that we defined
and basically we then use that context
to compile the endpoint template or to compile the body. So, one
of those or both and then we send the
HTTP request with those newly generated
body and endpoint variables and we
return some response and then that
response again gets attached to the
context. So if we had a third node
right, if I go ahead and add a third
HTTP request, whoops, and connect it
right here, then this one would have
access to both the response of created
to-do variable and of the initial to-do
and it could do anything it wants with that.
Awesome. So let's go ahead and take a
look at the comments here. So the
comments are mostly to safely handle the
JSON parsing and to throw new errors if
JSON serialization
fails. So yes, we could do that inside
of the register helper: instead of just doing it like this, we wrap it inside of a try-catch, so we control exactly what error is being thrown. It will still behave exactly the same right now; it's just a little bit out of our control where the error is thrown. Same thing goes for
compiling the endpoint. Right? Actually
this is something different. So in here
once we template the endpoint it
suggests actually checking if that
endpoint still is a valid endpoint after
we compiled it through handlebars
because yes probably some very smart
user could maybe abuse the handlebars
compile method if they know how it works
and if they watch this video and they
know that we use it. So yes, this is
some penetration protection that you
could be aware of, right? Basically, after you compile your endpoint with the variable, you should still check: does it exist? Is it a string? And is it still an endpoint you can call? Very good catch by Code Rabbit here. But I am satisfied with this as is for tutorial purposes. So, I will merge this branch. I'm going to go back to the
main branch and I will hit synchronize
changes. Once I hit synchronize changes
I always like to open my source control
click on the graph, and just convince myself that it has been merged right here. Amazing. I believe that
marks the end of this chapter. And in
the next chapter, we're going to be
doing the last thing in regards to this
HTTP request node, which will be real
time feedback. And once we do that, we
will be able to reuse that code and
create a bunch of other nodes like OpenAI request, Anthropic request, Gemini request, and then some triggers like webhook trigger, Google Form trigger, Stripe trigger and
similar. And very soon you will realize
there is no limit to how many nodes we
can create. It will all depend on the
ones that you want. Amazing amazing job.
And see you in the next chapter.
In this chapter, we're going to make our
nodes real time. Basically, what that
means is that we're going to emit a
proper status for each of our nodes so
that the end user sees exactly what's
happening with the workflow. Those will
include loading error and success
states. We're going to achieve this by
using the Inngest Realtime package. I would highly suggest that you find the Inngest Realtime documentation page. The reason
for that is real time at the time of me
making this tutorial is currently in
developer preview. So what does that
mean? It basically means that the
feature is widely available for all
Inngest accounts but depending on the
user feedback some APIs and SDKs might
change in the future. So that is why I
suggest that you also visit this website
simply so you see which version you will
be working with uh and so you see if any
instructions here have changed since me
making this tutorial. But as always, I
will show you exactly the version I am
working with. So let's go ahead and do
npm install @inngest/realtime. I'm going to go here, the same place I installed handlebars, and let's add @inngest/realtime. I
am immediately going to go inside of my
package JSON so that those of you who
want to follow the exact same thing will
be able to do so. As you can see, I'm
using a version 0.4.4.
Just as a reminder, let me show you the
rest of my versions. My Inngest is 3.44.1. My Inngest CLI is 1.12.1.
We are yet to see if these will work
compatibly. Sometimes a newer version of
Realtime can cause some problems and
vice versa. So, you have to match the
versions. But we're going to go through this step by step, and we will make sure to fix any instances like that if they even happen. So just make sure you have installed @inngest/realtime. Now what we're going to do is go back inside of our source, inside of our inngest folder, client.ts.
And now what we have to do here is
besides defining the ID which is
nodebase, let's also add a middleware. This middleware will be an array, and it will accept realtimeMiddleware, like this. You can import realtimeMiddleware from our newly added package; just make sure the import path ends in /middleware right here.
Perfect.
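So client.ts should now look roughly like this (the nodebase id comes from the video; treat the rest of the file layout as an assumption):

```typescript
import { Inngest } from "inngest";
import { realtimeMiddleware } from "@inngest/realtime/middleware";

// Adding the realtime middleware here is what later makes the `publish`
// argument available inside createFunction handlers.
export const inngest = new Inngest({
  id: "nodebase",
  middleware: [realtimeMiddleware()],
});
```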
Now that we have added that, let's go
ahead and let's define our first
channel. So inside of the ingest folder
I'm going to create a new folder called
channels. And inside of here, I'm going
to create a new file, http request.ts.
Then I'm going to go ahead and import
channel and topic from inest real time.
And finally, I'm going to export const
HTTP request channel.
We're going to call our channel import.
And we're going to name this HTTP
request execution.
And then I'm going to chain add topic.
Let me go ahead and expand this just a
bit more. Or maybe not. But this is how
it looks like. You can also chain it
down like this, however you prefer.
Instead of add topic, let's go ahead and
call topic like this. And the topic will
be called status. Let's go ahead and
define the type of that topic. So each
status will have a node ID which it is
referring to which will be a type of
string and status which can be loading
success
or error like this.
And let's go ahead and execute this and
add a comma. There we go. That is our
first request channel. Now, of course
there are a lot of magic strings going
on around here, and we can surely reuse
some of the eniums and node types that
we have all over. For example, let me go
ahead and find some status. Node status
indicator has a type node status
loading, success, error, and initial.
So, yes, not exactly the same. Uh, I
will see if there is a way to reuse some
of these later, but for now, just be
careful that she didn't misspell any of
these.
Now that we have the HTTP request
channel, we have to go back inside of our inngest folder, functions.ts. Inside of here, go to the top, where you call createFunction.
And after you name your event, go ahead
and add channels.
And in the channels, go ahead and add
HTTP request channel and execute it like
this. Make sure to import it from our
newly created folder.
And now once you add the channels here
and if you've correctly added the
middleware here another prop should
appear besides event and step and that
is publish. So you can see that right
now there is no error if I try to do
this. But if I go ahead and comment this
out you can see that I immediately get
an error here. So make sure that you
have the middleware here. Save the file
and then you should be able to
destructure publish from here. If it is
still not working, you can always
restart Visual Studio Code or you can
restart TypeScript server individually
which should make it work. Now that we have the publish method, we can pass it to each of our executors here. So: context = await executor, which accepts the data, node ID, context, and step, and which will now also have publish. Obviously we have an error for this, because executors are currently not made for that. So I'm going to go inside of getExecutor here, and then inside of the HTTP request executor right here. And besides having step, we will now also have publish.
Let's go ahead and fix this error by going inside of the node executor right here. Here we have our publish with the to-do to add realtime later, so we can now finally do that. I'm going to comment this out. publish will be of type Realtime.PublishFn, which you can import from @inngest/realtime, importing Realtime as a type.
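Just to make the shape concrete, here is a rough, self-contained sketch of what the params interface ends up looking like. Realtime.PublishFn is the real type from @inngest/realtime; the stand-in below only mirrors its rough shape so this snippet runs on its own, and the other field types are assumptions based on how we've been describing them.

```typescript
// Structural stand-in for Realtime.PublishFn from @inngest/realtime
// (assumption: the real type is more precise about the message shape).
type PublishFn = (message: unknown) => Promise<unknown>;

// Hedged sketch of the executor params after this change; field names
// besides `publish` are assumptions, not the project's exact definitions.
interface NodeExecutorParams<TData = Record<string, unknown>> {
  data: TData;
  nodeId: string;
  context: Record<string, unknown>;
  step: unknown; // Inngest's step tooling, left untyped in this sketch
  publish: PublishFn; // previously commented out as "TODO: add realtime later"
}

// A dummy conforming object, just to show the shape compiles.
const exampleParams: NodeExecutorParams = {
  data: {},
  nodeId: "node-1",
  context: {},
  step: null,
  publish: async (message) => message,
};
```

The only change that matters for this chapter is the publish field; everything else was already there.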
Now in here we have an error, and I think I know exactly why. Yes, it is because Record<string, unknown> is missing the following properties. In the previous chapter we modified these to be required, because that's how we expect them to be. But our node executor here doesn't match that requirement. So, okay, at least we are aware of it. I'm going to go ahead and add a to-do: fix types. I will see how to improve that, because even though it makes sense for these to be required, perhaps we should make all of them optional and then simply do a runtime validation here. I'm going to see what the best solution for that is. For now, yes: to-do, fix types, because we do have a problem here. But now, inside of here, the HTTP request executor, you should have access to the publish function, because we defined it right here. Let me show you where this is; I don't think I showed you that. Inside of source/features/executions/types.ts you should have NodeExecutorParams, which now has publish. Previously this was commented out. So let me quickly show you everything we've added so far. We added the @inngest/realtime package. After that, we went into client.ts, imported the realtime middleware, and added it to our Inngest instance. After we did that, we created the HTTP request channel with nodeId and the statuses loading, success, and error. After we added that, we went into functions.ts in source/inngest and simply added that new channel in the channels array, in the same object where the event is defined. After that, we were able to extract a new field, publish, and we simply pass it along to our executor function.
The way we fixed the error for the executor function is by going inside of types.ts in source/features/executions. In there we found the NodeExecutorParams interface, we commented out our to-do to add realtime later, because we are doing it right now, and we gave it a type of Realtime.PublishFn. Perfect. And finally, in the executor itself, we were able to destructure publish, because now it is properly typed and available in here. So let's remove this to-do right here and let's actually do it: await publish. Let me switch to the actual code. There we go: await publish. So we are calling this method right here. Make sure to await it, and call httpRequestChannel, like this. Make sure to import it from inngest/channels/http-request, execute it, and call .status with nodeId and a status of loading.
Let's go ahead and just fix this again. I think I did something wrong with my imports. Yes, this happens often, actually. Let me try again: httpRequestChannel. Make sure that you imported it from here. And then let's call .status. My apologies: execute the channel, then .status with nodeId. This is basically doing this, right? But since the prop is named the same as the key, we can use the shorthand syntax and pass in the status of loading.
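To make that shape concrete, here is a minimal, self-contained stand-in for what the executed channel's status call produces. The real httpRequestChannel comes from @inngest/realtime's channel and topic builders; the literal channel name and field names below are assumptions for illustration only.

```typescript
type NodeStatus = "loading" | "success" | "error";

interface StatusMessage {
  channel: string;
  topic: "status";
  data: { nodeId: string; status: NodeStatus };
}

// Stand-in mirroring httpRequestChannel().status({ nodeId, status }).
// The literal channel name here is an assumption, not the tutorial's exact string.
const httpRequestChannel = () => ({
  status: (data: { nodeId: string; status: NodeStatus }): StatusMessage => ({
    channel: "http-request-execution",
    topic: "status",
    data,
  }),
});

const message = httpRequestChannel().status({ nodeId: "node-1", status: "loading" });
```

In the executor, `await publish(httpRequestChannel().status({ nodeId, status: "loading" }))` then delivers a message of roughly this shape to subscribers.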
There we go. So now we are successfully
emitting that this node is loading. And
now we have to throw errors when it
fails. So for example, right here, let's
go ahead and give it an error like this.
Then we can copy this and do the same
thing here.
Same thing here. Perfect. So when should we emit the success? Right here, before we return the result. So let's go ahead and simply emit the status success. Whoops: success.
There is potentially room to reduce the amount of code that we are writing, maybe by wrapping this entire thing inside of a try and catch; then, if these errors get thrown, we could simply emit this, which is pretty much the exact same line of code everywhere. But I do have to verify that that's what will happen, because this is an Inngest-specific error, so I'm not sure exactly what happens here. If you've noticed that we could probably reuse this code, you're probably right; there probably is a way to make this better. But let's just be super explicit with our states right now, simply so we know exactly what we're doing and exactly what's happening with our code. So far you shouldn't have errors anywhere except in the executor registry, and that error isn't happening because of our new realtime code at all. It's because variableName, endpoint, and method are no longer optional, so our executor registry is confused: that does not match the TData type we gave it right here. One fix could be giving each of these their own type. We will see; there is a way to fix this in a nice way, and I will
try my hardest to do that. Perfect. Now that these statuses are being emitted, I think we can even try it out immediately. Make sure you have all of your apps running. I would recommend restarting your Inngest server and maybe even your Next.js app, simply because you just added a new package, so some caching might be happening. Make sure you have your Inngest development server running and one of your workflows open, and go ahead and click execute workflow. So there we go; this is what you should be seeing. First, we prepare the workflow; this is basically where we do a topological sort. After that, we call the manual trigger. Nothing is happening there yet, but it will be once we add the channel for the manual trigger, so that we can also emit the loading and success states for it. But you can see that we have this. Let me try to expand it. I'm not sure how I can expand this. Maybe if I hover over it. There we go.
Publish HTTP request execution. So we are successfully publishing some events, right? You should be seeing these success events and these loading events right here. I mean, you can't really see which one is which, but you should now be seeing these publish entries. Now here's an important note. If for whatever reason you found this complicated, or maybe the realtime package has changed significantly for you, just know that this is simply a cool addition to our n8n clone; it doesn't change the functionality itself. So even if you can't get it working, you will be able to continue the entire tutorial without realtime. I just wanted to let you know that because this is a developer preview, so in case it drastically changes and you just can't work your way around it, don't worry: you can just go to the next chapter. But I would suggest still going through this chapter, just in case I do
some other things here. Perfect. So now we have to find a way to emit that status to our editor right here, and the way we are going to do that is by implementing a hook. So let's go inside of source/features/executions. In here I'm going to create a hooks folder, and I'm going to add use-node-status.ts.
I'm going to import type Realtime from @inngest/realtime. I'm going to import useInngestSubscription from @inngest/realtime/hooks. I'm going to import useEffect and useState. And I'm going to borrow NodeStatus from components; I think we call this component react-flow-node-status-indicator. You should have this component: it's basically the one where we added NodeStatus with loading, success, error, or initial.
Perfect. Now let's create the interface for this hook. So interface UseNodeStatusOptions will accept a required nodeId, channel, and topic, as well as a refreshToken method, which basically returns a promise resolving with a Realtime.Subscribe.Token. Let's go ahead and do export function useNodeStatus, and let me open it properly. Let me fix the typo. Let's grab nodeId, channel, topic, and refreshToken, and bind the UseNodeStatusOptions type. Perfect. Now in here, let's start by defining the status state. So status, setStatus is a useState; we are using the type NodeStatus with the initial value of initial. Perfect. Let's get the data by using useInngestSubscription, passing in the refreshToken and enabled set to true.
Now let's create a useEffect that is going to listen to messages coming from our Inngest subscription; we're specifically going to be looking for the newest one matching our channel, our topic, our nodeId, and our status. So I'm going to first check if there is no data.length, or in other words, if there is no data coming from that useInngestSubscription hook above; in that case, just return from the effect, since there is nothing for us to do. Otherwise, we have to find the latest message for this node. Now, I'm sure there are a bunch of ways you can do that, but this is the way I managed to do it very consistently and safely for my project. You are of course free to tinker with this if you feel like it can be done in a simpler way.
So, latestMessage is going to be data.filter; let's get that message. In here, I'm first going to check if message.kind is equal to "data", if message.channel is exactly the same as our channel, which we define when we call this hook, if message.topic is exactly the same as our topic, and if message.data.nodeId is the same as our nodeId. This way we know exactly what this event is referring to, and for exactly which node. Once we find those, we have to sort by latest, so let's sort by the a and b values here. Let's quickly check if a.kind is equal to "data" and b.kind is equal to "data", and if so, return new Date(b.createdAt).getTime() minus new Date(a.createdAt).getTime(), like this. And let me just see: we should not have a comma here, I believe. And return... my apologies, outside of the if clause, just return zero, like this. This will then basically be an array which should only have one item inside, and we can immediately access it: the first index in this latestMessage array.
Perfect. And now let's do a final check here: if latestMessage?.kind is equal to "data", call setStatus(latestMessage.data.status as NodeStatus). Yes, I'm not too happy about having to cast this, but data can literally be anything. That's why it's important to make sure you don't misspell it: you can mistype this and it won't give you any errors. So just be careful when typing these things. The same goes for data.nodeId here; make sure you are not misspelling that.
And then let's add the dependency array: data, nodeId, channel, and topic. And perhaps it would be better... let me see, do I ever use data? Well, yes, I use data throughout this effect, so I have to pass it right here. And let's return the final status here. That's it. That is our useNodeStatus hook.
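The filtering and sorting inside that effect boils down to a pure function, which is easier to reason about in isolation. Here is a sketch of it; the message shape is an assumption modeled on the fields the hook reads, not the exact type useInngestSubscription returns.

```typescript
type NodeStatus = "loading" | "success" | "error" | "initial";

// Assumed message shape, modeled on the fields the hook reads.
interface RealtimeMessage {
  kind: string; // we only care about "data" messages
  channel: string;
  topic: string;
  createdAt: string | Date;
  data: { nodeId: string; status: NodeStatus };
}

// Pure version of the effect body: keep only "data" messages matching this
// channel/topic/nodeId, sort newest first, and return the newest status.
function pickLatestStatus(
  messages: RealtimeMessage[],
  opts: { channel: string; topic: string; nodeId: string },
): NodeStatus | undefined {
  const matching = messages
    .filter(
      (m) =>
        m.kind === "data" &&
        m.channel === opts.channel &&
        m.topic === opts.topic &&
        m.data.nodeId === opts.nodeId,
    )
    .sort(
      (a, b) =>
        new Date(b.createdAt).getTime() - new Date(a.createdAt).getTime(),
    );
  return matching[0]?.data.status;
}
```

In the hook, the effect would then just call setStatus with this result when it is defined; filtering before sorting also means the comparator no longer needs its own kind check.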
Once we have defined this hook we have
to go ahead and use it. But just before
we can use it we have to create our
refresh token method. There are many
ways we can do this but the quickest way
of doing this is by using server
actions.
So inside of executions components http
request I'm going to go ahead and create
the actions.ts
file.
Inside of it I'm going to go ahead and
mark this as use server.
Then I'm going to add a couple of imports. I'm going to import getSubscriptionToken and type Realtime from @inngest/realtime. I'm going to import our HTTP request channel from inngest/channels/http-request. And finally, I'm going to import inngest from our inngest folder's client, where we recently added the middleware. So make sure you have all three. Then let's define the type: type HttpRequestToken will be Realtime.Token, open angle brackets, and inside define two things: the type of the HTTP request channel, and an array with the string "status" inside.
Let's export async function fetchHttpRequestRealtimeToken and return a Promise of HttpRequestToken. Define the token: await getSubscriptionToken, passing in inngest. Define the channel to be the HTTP request channel (make sure it's an executed function), and the topics to listen to will be status. And return the token.
And if you've done it properly, there
shouldn't be any type errors here. Let
me go ahead and just quickly zoom out so
you can see how it looks like without
any collapsing lines.
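In case it helps, here is a stubbed sketch of what the action's contract amounts to. The real token comes from getSubscriptionToken(inngest, ...) in @inngest/realtime and its Realtime.Token type, so the interface, fields, and values below are only illustrative assumptions.

```typescript
// Illustrative shape of a realtime subscription token; the real
// Realtime.Token type from @inngest/realtime differs in detail.
interface SubscriptionTokenSketch {
  channel: string;
  topics: string[];
}

// Hypothetical stand-in for the server action: scoped to one channel and
// the single "status" topic, like the real fetchHttpRequestRealtimeToken.
async function fetchHttpRequestRealtimeTokenSketch(): Promise<SubscriptionTokenSketch> {
  // Real version (sketch): return getSubscriptionToken(inngest, {
  //   channel: httpRequestChannel(),
  //   topics: ["status"],
  // });
  return { channel: "http-request-execution", topics: ["status"] };
}
```

The point of the design is that the client never signs anything itself; it calls this server action, and the returned token is what authorizes the subscription on the front end.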
Perfect. Now that we have that action defined, let's go ahead inside of the HTTP request node.tsx, and in here I'm going to finally change this node status. I don't know if you remember, but if you manually change this node status to loading, you will see that both nodes magically become loading; if you change it to success, they become success. So now we're going to actually make it listen to the event. Let's call useNodeStatus here, from ../../hooks/use-node-status. Okay, yes: it's a reusable hook across all executions; that's why it is all the way up there. Perfect. And once we have it here, let's give it a nodeId of props.id, and a channel. And now this
is where it gets kind of tricky. This is the part I don't like: "HTTP request execution". You need to be super careful that you didn't accidentally (let me find the channel) misspell it, so it would be a better idea to copy it from the HTTP request channel and just paste it here. I think there is potential to fix this: technically we could call httpRequestChannel itself, execute it, and then read its name. I think this should work; I'm not 100% sure. Let's try it like this. I actually haven't done this in my initial source code, but it looks like a very interesting solution. I'm not sure what really happens when you execute it, and whether you can just execute it like that. But let's focus on the topic, status, and let's add the refreshToken: fetchHttpRequestRealtimeToken. Do not execute it; we pass the function itself so the hook can call it.
So let's try it out. Oh yes, I think this should be enough. Let's refresh for good luck, let me collapse the sidebar, and let's click execute workflow. And now: loading, success; loading, success. Absolutely amazing job. Again, if for whatever reason you were not able to complete this, do not worry; it is not crucial to completing this tutorial. It's obviously a super cool effect, but it does not really change whether or not you will be able to finish this tutorial. Looks like this is working as well, which is honestly a better solution than just copying this and pasting it here, where you always have to be super careful that you did it correctly. Another alternative might be to simply use a constant we define here and then import it here, because I'm just not sure about the implications of executing it like that. I'm not sure what it does.
Not too sure. Okay. Inside of HttpRequestToken, for example, we executed it here; that's why I have a feeling it can fail. Maybe. Yeah, I'm not too confident with this. I think I will resort to using a string, simply because that is what I did in my initial source code, right? I just want to stay consistent with what I know works 100%. Later we can change this by fixing all the weird magic strings that
we have around. So now what I'm going to do is purposely make this an invalid JSON, like this, and click save. Save this entire thing and let's see a node fail. So: execute workflow; loading, success; loading... and yes, it will actually take a while for this to fail. It's obvious that it's failing, right? We know that, but this will make, I think, three attempts before it reaches its actual failure status. If you want to speed that up, you can go inside of inngest functions.ts, find this inngest.createFunction call, and in the first object, where you define the ID, I think you can also add retries: zero.
Now, I would highly suggest that you add a little comment here, "to-do: change for production" or maybe "remove in production", simply because it's a shame to fail immediately; fetch requests can fail, and that's normal. Since I'm already at two requests, I'm just going to wait it out. Now
we could actually have a bug here. I can see the finalization happened, but this never actually throws the error. I think that's because we forgot to do it. So let's go back inside of features/executions/components/http-request executor.ts. Yes, there is definitely a bug here. This entire step.run should somehow be within a try and catch, or we should at least look at the response and what happens inside of it. So let me see what the best way of doing that is.
I think what we can do is simply wrap this entire await step inside of a try and catch. So I'm going to try to do that here, like this: catch. And then inside of this catch, I'm just going to copy this await publish, like this, and publish the error status. And let's make sure to continue throwing that error, like this. I'm not 100% sure this is the best solution, but it's the first thing that came to my mind right now: basically skipping the entire return result and all of those other things. So let's see that now.
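The wrap-and-rethrow pattern we just applied can be sketched as a small generic helper. The names here are mine, not the project's, and the publishStatus callback is a stand-in for Inngest's publish; the structure is the point.

```typescript
type NodeStatus = "loading" | "success" | "error";

// Sketch of the pattern: run the step, and if it throws, publish an "error"
// status before rethrowing so the workflow engine still sees the failure.
async function withErrorPublish<T>(
  run: () => Promise<T>,
  publishStatus: (status: NodeStatus) => Promise<void>,
): Promise<T> {
  try {
    return await run();
  } catch (err) {
    await publishStatus("error"); // tell subscribers the node failed...
    throw err; // ...then keep throwing so the step is still marked failed
  }
}
```

In the executor, this is what wrapping the `await step.run(...)` call in try/catch amounts to, with publishStatus forwarding to the channel's status topic.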
Make sure you've set retries to zero and make sure you wrap that. Let me try and refresh this now. Perfect. And let me try executing it again. Loading, success; loading, error. Perfect. Absolutely amazing: exactly what we wanted to do. And you can see how this time there were no retries. Perfect. And if I'm correct... wait, where am I? In the executor. I think that now we might not even need to throw these right here, because once we throw this error, it's going to go... oh, actually, no. It will not go inside of the catch, because the try/catch is only around the request right here. Yes, so we still need them right here. We'll see; maybe Code Rabbit will have some interesting solutions for this. Maybe we should just wrap the entire thing inside of a try catch. I'm not sure.
One thing that I do want to do before we move on is go inside of the HTTP request channel and export const httpRequestChannelName, and just make it this. There we go. And then let's use it here: httpRequestChannelName. I'm going to search through my code to see exactly where else I'm using it; it's only in its equivalent node. So I'm going to change this here: httpRequestChannelName. There we go. And we can remove this then; we don't need it. And can we just import this as a type? We cannot. Okay, there we go. So that is now working. No more magic strings here. Perfect.
And let's just try it one more time. And yes, if you execute your workflow two times in a row, it's simply going to reset all the statuses. Or at least it should; let's try. There we go. So it will run through each status again. Perfect. Amazing job. Now, let's do the same thing for the manual trigger. That's actually kind of the only thing that will be happening in these super simple triggers, which don't require anything to load: they can only emit the loading state and then the success state. I don't think there's any way an error can even happen in those triggers.
So let's start by creating the manual trigger channel. So, inside of features... um, no, where is it? Inside of inngest. Yes, they're kind of everywhere; we should improve that, too. So, HTTP request: let's change this to manual trigger, like this. And let's change this to be manualTriggerChannelName. This will be called manual trigger execution, and this will be called manualTriggerChannel. So even though they are all exactly the same, I would highly suggest having them separate; I think you have to have them separate. You could create some magical abstraction that would generate all of them, but sometimes I think abstractions are not that good. Okay, so make sure you have an identical thing, but for the manual trigger. Now let's go inside of inngest functions.ts and add the manual trigger channel. There we go: channels, manual trigger. Perfect.
Now that we have that, I think we have to go inside of the manual trigger executor. So, inside of the triggers folder, components, manual trigger executor.ts. I agree that the folder structure is a bit complex as of now; I'll have to see if maybe I should rethink how my executors and triggers work. But yes, let's go inside of manual trigger executor.ts right here. And now we have publish. We shouldn't have any errors here, because we are using the node executor here and we already defined the publish function inside of it. And now what we can do is very simply the same thing we did before: await publish, import manualTriggerChannel from inngest/channels/manual-trigger, and use the loading status. Then down here, the moment we run this completely unfailable workflow step, change this to success.
There we go. Now, in order to make this actually work, we have to create the action to refresh the token. So I'm going to go inside of features/executions/components/http-request, copy the actions file, and then paste it into the triggers/manual-trigger folder right here. Inside of here, I'm going to rename the instance of HttpRequestToken to ManualTriggerToken. It's going to be using a type of, and an instance of, the manual trigger channel, which means I have to fix this import to be manual trigger. There we go. Everything else should stay exactly the same: we are just using the new channel, and of course we are renaming the type, which also means we should rename this. So: not fetchHttpRequestRealtimeToken, but fetchManualTriggerRealtimeToken. Perfect.
Once we have that working, let's open the node from the HTTP request so that we can copy the node status. Then let's go inside of triggers/manual-trigger node.tsx and change the hard-coded node status to now use our hook, useNodeStatus. Make sure to import useNodeStatus from features/executions/hooks/use-node-status. And by now I fully agree it's weird that we are in a folder called triggers working on a manual trigger, which is technically the node execution of that trigger; it's like I'm confusing the node execution with the node type. So yes, I fully agree it's a bit confusing how things are everywhere right now. I will try to think of a better folder structure, but just bear with me, at least in this chapter, and import useNodeStatus from where we created it: features/executions/hooks/use-node-status. Change the channel to be manualTriggerChannelName from inngest/channels/manual-trigger. And finally, use fetchManualTriggerRealtimeToken from ./actions.
And I'm trying to think if I forgot to do something. I think this should work; let me refresh. Make sure you have saved all of your files. Let's click execute workflow right here. There we go: loading, success; loading, success; loading, fail. Amazing. Now everything has its own channel. Perfect. Amazing, amazing job.
Now that we have this working, let's merge it. So: 21 node realtime. Let's see: we added Inngest realtime, we created the channels, we are publishing events, and we are capturing events using our useNodeStatus. So, 21 node realtime. I'm going to create that branch: create new branch, 21 node real time. And then I'm going to commit these 15 files right here. So: stage all changes, "21 node real time", commit, and publish the branch.
Now once this has been published, as
always,
let's go ahead and
open a pull request. And since this is a
pretty significant pull request, I want
to make sure Code Rabbit reviews this
one. So let's see that in a second.
And here we have the summary by Code
Rabbit. New features. We added real time
status tracking for HTTP request
executions, displaying dynamic updates
loading success, and error states. And
we added real-time status tracking for
the manual trigger. We did all of this
by introducing realtime capabilities to
replace the static status indicator
across execution nodes. So how did we do
that? As always, here we have a file by
file and cohort summary which basically
goes over all the files that we added.
But here we have the sequence diagram explaining exactly what's going on. Every node component of ours now has a hook called useNodeStatus with all the fields it needs, and after that it subscribes using useInngestSubscription and the actions.ts file we created. Once the connection is
established with the real-time channel
that we define per node, we go ahead and
emit events. So during execution we
publish the loading event and then after
completion we publish the success event
and finally uh that state updates on the
front end and it rerenders with the new
status.
So here is what Code Rabbit suggests: in the useNodeStatus hook logic, verify the data filtering, sorting, and state updates; for the executor publish integration, confirm status events are published at the correct lifecycle points in both the HTTP request and manual trigger flows. This was one of the questions I did have for the Inngest realtime team. I confirmed myself that this works: I pretty consistently managed to get sequential states, so I can only conclude that if you await publish, they come at the right time. So let's actually take a look at
the requested changes here. In our useNodeStatus hook, it tells us to address a race condition with status initialization. I think the problem is that we could technically miss the loading state if the success or error comes too fast. In our specific case, I think this is okay. It gives us an option to do optimistic updates by setting the status to loading, but I think this is fine as it is right now.
As always, Code Rabbit is not a big fan, my apologies, of the NodeStatus type casting, as it should be; it is our reviewer, after all. So, if you want to be as strict, you can implement an isValidStatus check, which will basically allow you to verify at runtime that what you received from the useInngestSubscription data is what you intend to show to your user.
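Code Rabbit's suggestion amounts to a runtime type guard. A minimal sketch, assuming the status list is exactly the ones this chapter emits plus initial:

```typescript
type NodeStatus = "loading" | "success" | "error" | "initial";

const NODE_STATUSES: readonly NodeStatus[] = [
  "loading",
  "success",
  "error",
  "initial",
];

// Type guard: narrows an unknown realtime payload field to NodeStatus,
// replacing the bare `as NodeStatus` cast with a runtime check.
function isValidStatus(value: unknown): value is NodeStatus {
  return (
    typeof value === "string" &&
    (NODE_STATUSES as readonly string[]).includes(value)
  );
}
```

In the hook you would then write `if (isValidStatus(latestMessage.data.status)) setStatus(latestMessage.data.status)` instead of casting, and misspelled statuses would simply be ignored rather than rendered.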
And in here, it tells us that we are not handling the error case for the manual trigger, which is a good point. I just don't see how it can fail. But yes, step.run could technically fail for some reason, right? So we could be consistent, wrap that inside of a try catch, and just publish the error on catch.
And in here, it's basically telling us to use this to handle that, and not a to-do. What it doesn't know is that this is a YouTube tutorial; that's why I'm showing it inside of a comment here. But yes, these are completely valid comments. Let's go ahead and merge for now, since we got exactly the result we wanted at this stage of the tutorial. Once you've
merged it, go back inside of your main
branch right here. And as always, make
sure that you synchronize your changes.
So click okay right here. And then what
I like to do is I like to click on my
graph here and just double check that 21
is the latest one which I have just
merged. I believe that that marks the
end of this chapter. So what we've done
is we implemented real time, we pushed
to GitHub, created a new branch, created
a new PR and reviewed and merged. And
now we should be ready to start
developing some other nodes because we
can easily copy and paste from these two
nodes which we have which are completely
finished and have all the important
features in them. Otherwise, it would
have been very hard to update a bunch of
nodes which we created. Amazing job and
see you in the next chapter.
In this chapter, we're going to add a new trigger to our project: the Google form trigger. We have already developed some useful nodes, which means we will be able to reuse most of the code, but this is the first trigger which uses a completely external service to activate a workflow within our application. So it will be quite challenging and interesting to develop. Nevertheless, let's get started.
The first thing I want to do is I want
to enable the user to add an option of a
Google form trigger. So because of that
I want to start with the node and the
dialogue for the Google form trigger. So
basically the exact same thing that we
have here for the manual trigger. I want
this but for Google form trigger.
Let's go ahead and get started by first
downloading an asset. So using the link
on the screen, you can visit my assets
folder and go inside of images here and
find Google form.svg
and go ahead and add it in your app.
So I'm going to add it inside of public
logos right here. Google form.svg.
Perfect.
Once we have that added, let's go inside of our source/features/triggers/components, and just as we have the manual trigger, I think we can copy and paste it and rename it Google form trigger. This way we can reuse most of the files inside. Let's start with node.tsx.
Let's go ahead and change the export to
be Google form trigger like this.
And let's go ahead and change the node
status here to just be initial just for
now. So yes, that will make all of these
unused. That's fine. We're going to
bring them back later.
And then let's go ahead and just modify
some things here. So we have a manual
trigger dialogue which we are going to
change later to Google form trigger
dialogue. But let's leave it like this
for now. And let's go ahead and modify
the props of the base trigger node
instead. So for the icon: it looks like it accepts both a string and a Lucide icon, which should mean that we can now try /logos/googleform.svg, because that is the exact file we just added, googleform.svg within the logos folder. Let's change the name here to be "when form is submitted", like that. And status will be status. Yes, everything else will be the same.
Okay. Now that we have the Google form trigger, it's not enough for it just to exist here; we also need to go inside of node components. Node components is a file we maintain inside of the source config folder, and in order for the React Flow editor to render this, we need to add it here. But one thing we're noticing is that it's missing within our node type. So let's just prepare this Google form trigger, and let's add Google form trigger; you can import it from features/triggers/components/google-form-trigger/node.
Now, in order to add this to our node type, we have to revisit our Prisma schema. So, inside of prisma/schema.prisma, let's add our new node type: below HTTP request, let's add Google form trigger. Once we do that, let's go inside of our terminal and run npx prisma migrate dev. Let's give it a name of Google form trigger node.
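For reference, the schema change is roughly this; the enum name and the existing member spellings are assumptions, so match whatever your schema already uses:

```prisma
enum NodeType {
  MANUAL_TRIGGER
  HTTP_REQUEST
  GOOGLE_FORM_TRIGGER // new node type for this chapter
}
```

Running `npx prisma migrate dev` with a migration name like "google form trigger node" then regenerates the client so the new enum member is available in TypeScript.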
Once we submit that, it should synchronize our database with our schema. What I suggest you do now is restart both your Inngest and Next processes. If you're using mprocs like I am, you can highlight the process you need and press the letter R on your keyboard; that's going to reset the process. Or you can just rerun npm run dev. Just make sure you've done that, and refresh your app to make sure everything is still working. Now you should be able to go back to node components, and the node type should no longer give you an error; instead it should properly load the Google form trigger. Great. But that's not all we have to do. We now have to go inside
we have to do. We now have to go inside
of the node selector. If you don't
remember, node selector is maintained
inside of source components
node selector. Node selector is the
sidebar that opens up when we click on
the plus button to add a new node. So
we have to add some new nodes here.
Let's go up here inside of the trigger nodes, duplicate the code for the manual trigger, and change it to Google form trigger. Let's change the label to be Google Form, and quickly change the description: this will be "Runs the flow when a Google form is submitted". And for the icon, let's change it to /logos/google-form.svg.
And I think that already we should be
able to see it here.
There we go. Google form.
Now, I'm not sure if... yes, I think just giving it the type should be enough. Let's see if I click this. There we go. But it looks like something is wrong here. Or maybe it's not. So: "When form is submitted". I'm not sure if that's what the title here should be. It depends; if you like it, you can leave this as the title,
or you can go inside of your newly created trigger at features/triggers/google-form-trigger/node.tsx. And in here you can either give it a name, or use that as a description and give it a name of Google Form.
So if you prefer that, you can do it like this. For some of you, I'm sure this might be a cleaner solution, especially if you later expect a Google form to have multiple ways of triggering the flow, like maybe when a Google form is deleted, or updated, things like that. So maybe this is a better option for you. The reason we are using the title for this type of trigger is that it's kind of the only thing that can happen. So you can choose: do you want to be consistent and just have the name here, "When form is submitted", or do you want to plan ahead and change this to be a description and then give it a name of Google Form? Whichever one you prefer.
Perfect. So one thing that's missing now is that when I open this, it opens the dialog of the manual trigger, when it should open the dialog of the Google form trigger. So let's fix that now. Inside of our google-form-trigger folder, let's go inside of dialog.tsx and slowly modify it. The props will stay exactly the same, but the name will now be GoogleFormTriggerDialog, and let's immediately change this here and add that. There we go. So nothing much has changed; we just renamed the component internally. But
now let's actually change what we need here. The user won't actually type anything in here; they will only see the information about what they need to do inside of their Google form to trigger this. So this will be "Google Form Trigger Configuration", and the description will basically tell the user what they have to do: "Use this webhook URL in your Google Forms Apps Script to trigger this workflow when a form is submitted". So this is how your title and description should look right now.
And now we're basically going to give the user some way of doing this. So I'm going to go ahead inside of... params, my apologies. I'm going to define the params constant using useParams from next/navigation, so make sure you add this import. And then I'm going to define const workflowId to be params.workflowId as string. So this basically tells me which workflow I'm currently editing.
Now let's construct the webhook URL. So const baseUrl will be process.env.NEXT_PUBLIC_APP_URL, or let's fall back to the HTTP version of localhost:3000. Now, NEXT_PUBLIC_APP_URL actually should exist here, but it looks like it doesn't, so I think we can just add it; I'm going to add this under "other", and define http://localhost:3000 here. Yes, this doesn't make much sense right now, but think of production instances: in production, we're going to change this to be the actual URL of our app when it's deployed, right? Our .com domain. So I would rather always have that available than somehow use it in a different way. All right. Now that we have the base URL, we can also do const webhookUrl. The webhook URL is constructed as follows: it uses the base URL (so either our .com domain or localhost, depending on whether we're developing or not), then /api/workflows/google-form, with one single query param, workflowId, and the workflow ID appended here. There we go.
So now that we have that, let's define a simple copy-to-clipboard method. This will be a super simple asynchronous method. Here it is: an async method that opens a try/catch; in the try, await navigator.clipboard.writeText(webhookUrl), the constant we defined above, and then toast.success; in the catch, toast.error. And make sure to import toast from sonner. The reason I copied and pasted this is that it's super simple; it's just a basic copy-to-clipboard method.
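As a sketch, the handler boils down to this (the clipboard writer is injected here so the snippet stands alone; in the dialog it is navigator.clipboard.writeText, and the toasts come from sonner):

```typescript
// Minimal copy-to-clipboard handler. Returns a boolean instead of calling
// toast.success / toast.error so the logic is visible on its own.
async function copyToClipboard(
  text: string,
  write: (t: string) => Promise<void>
): Promise<boolean> {
  try {
    await write(text); // navigator.clipboard.writeText(text) in the browser
    return true;       // toast.success("Copied to clipboard")
  } catch {
    return false;      // toast.error("Failed to copy")
  }
}
```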
All right. Now let's use this information and actually display it here in the dialog. Inside of the dialog header, let's clear things up. Change this to space-y-4, add another div with a class name of space-y-2, and add a Label, which we can import from components/ui/label. Give it the text "Webhook URL" and an htmlFor of webhook-url. Then in here, create a div with a class name of flex and gap-2, add an Input from components/ui/input and a Button from components/ui/button. The button will have a CopyIcon from lucide-react; give the copy icon a class name of size-4. The button itself will have a type of "button" so it doesn't accidentally trigger a form submit; the size will be icon, the variant will be outline, and onClick will be copyToClipboard. For the input itself, set the id to webhook-url (the same one we used in htmlFor above), the value to webhookUrl, readOnly to true, and the class name to font-mono and text-sm.
So far, when the user opens the Google form trigger, they should see the webhook URL that they can copy. There we go, you can see it's right here, and they will be able to paste it inside of their Google Form Apps Script. Now, the problem is most users won't be able to do this on their own. So we're going to make it a little bit easier for them by adding some instructions here.
So, after these two divs end, after the last button ends, open a new div here. Give this div rounded-lg, a background color of muted, padding of four, and space-y-2. Give it an h4 element, "Setup instructions", with a class name of font-medium and text-sm. Now open a list with a class name of text-sm, text-muted-foreground (my apologies), space-y-1, list-decimal, list-inside. And now let's add all the steps we need. The first step will be "Open your Google Form". The next step will be "Click the three dots menu → Script editor". I have no idea how to generate this arrow Unicode character; you can just use this one, or try to Google "arrow Unicode" and copy it. The third step will be "Copy and paste the script below". Then "Replace the webhook URL with your webhook URL above", which we are actually going to do for the user. That's the first, second, third, fourth; the fifth step will be "Save and click Triggers, then Add Trigger". And lastly, "Choose From form → On form submit → Save".
Again, you don't have to actually use this; these are just instructions for your users. You will, of course, know very well how to do this; this is so your users know how to do it too. And if you're wondering, "well, Zapier has this one-click implementation", you also have to be aware that companies like Zapier and n8n most likely have deals with Google and other external services to make this much easier for everyone. We are trying to do this completely ourselves, so we have to use these methods. But I'm 99% sure that in the background of these one-click setups, it's actually the exact same thing happening; it's just all automated for those services, mostly because they have a contact at Google and they all want to make it smoother. That's what's actually happening in the background. Don't take my word for it, of course; that's what I think is happening, and this is the best solution that I found. If you know a better solution, feel free to write it down in the comments; I will be very happy to read and learn about it. Now
let's give the user an option to copy the Google Apps Script. So open a new div with a class name of rounded-lg, background color muted, padding four, and space-y-3. Let's create an h4, "Google Apps Script", with a class name of font-medium and text-sm. And in here, let's create a button. The button will have a type of "button", a variant of outline, and, for now, an empty onClick function. Add a CopyIcon here (we already have it imported) with a class name of size-4 and mr-2, and the text "Copy Google Apps Script". Outside of the button, create a paragraph: "This script includes your webhook URL and handles form submissions", with a class name of text-xs and text-muted-foreground.
All right. So now this doesn't make too much sense, because we didn't actually wire up the onClick. So let's quickly do that so you can see what this Google Apps Script will be. This is a scripting language that I personally had no idea how to write; I mostly did it with AI. So if you go back inside of my assets folder, you can find the Google form trigger script.ts, and this is basically the script, right? I just wrote it in this TypeScript form so that you can easily add it to your code. Um, I think this actually might just be JavaScript. I told you it's some scripting language I didn't know; it looks like it's just normal JavaScript. What I meant to say is, for example, I had no idea what the name of the function should be, or what the event object has inside, so that's why I used AI to do that for me. Things like getItem, getTitle: it's basically reading from the form response and building a webhook payload, and then it makes a fetch request to our webhook URL.
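To make the shape of that payload concrete, here is a hedged TypeScript rendering of what the script does (the real thing runs as Google Apps Script against the form-submit event's response object; the field names below are assumptions, not the exact ones from the assets file):

```typescript
// What the Apps Script conceptually does: walk the item responses,
// key each answer by its question title, and build the webhook body.
type ItemResponse = {
  getItem: () => { getTitle: () => string };
  getResponse: () => unknown;
};

export function buildWebhookPayload(
  itemResponses: ItemResponse[],
  respondentEmail: string
) {
  const responses: Record<string, unknown> = {};
  for (const r of itemResponses) {
    responses[r.getItem().getTitle()] = r.getResponse();
  }
  // The real script then POSTs an object like this to the webhook URL.
  return { respondentEmail, responses };
}
```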
So let's go ahead and copy this entire script.
And now we're going to create that. So inside of the google-form-trigger folder, create a new file utils.ts, and let's just paste the entire thing inside. So: export const generateGoogleFormScript. It will basically produce the script for the user's clipboard, replacing the webhook URL placeholder with the webhookUrl prop. Let's add it to our dialog. I'm just going to make the onClick an asynchronous method: const script will be generateGoogleFormScript, passing in the webhookUrl constant which we generated above (make sure you've imported generateGoogleFormScript from ./utils), then open a try and do await navigator.clipboard.writeText(script), and toast.success("Script copied to clipboard"). And in the catch, let's just throw a toast.error: "Failed to copy script to clipboard".
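The utils.ts helper is essentially a template substitution. Here is a trimmed sketch (the placeholder name and template body are stand-ins for the full script shipped in the assets folder):

```typescript
// Trimmed stand-in for the real Apps Script template from the assets folder.
const SCRIPT_TEMPLATE = `function onFormSubmit(e) {
  const WEBHOOK_URL = "__WEBHOOK_URL__";
  // ...read the form responses, build the payload, POST it to WEBHOOK_URL...
}`;

// Substitute the user's actual webhook URL into the template.
export const generateGoogleFormScript = (webhookUrl: string): string =>
  SCRIPT_TEMPLATE.replace("__WEBHOOK_URL__", webhookUrl);
```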
All right. Okay, I think this is good enough; let me just try this now. Just make sure you have your app running somewhere. Let me add my Google form trigger here, and when I click Copy Google Apps Script, it says "Script copied to clipboard". Take a close look at my webhook URL: if done correctly, the copied Google Apps Script should now include this entire webhook URL. I can very easily check if that is true. There we go: function onFormSubmit has a webhook URL which is exactly what I expected it to be. Amazing. So our generateGoogleFormScript works.
And now what you can do here is give your users some more information. So after this div, after the last paragraph here, open a new div with a class name of rounded-lg, background color muted, padding four, space-y-2, and add an h4: "Available variables". Just some things to help your users, with font-medium and text-sm. You will, of course, see more of these variables, and which ones are available, in the Inngest developer screen. So let's make a list here with a class name of text-sm, text-muted-foreground, space-y-1, and let's add a list item, and inside it a code element. For example, one of the things the user will be able to do is access the Google form respondent email; give the code element a class name of bg-background, px-1, py-0.5, and rounded, and you can explain what that is: the respondent. Then you can copy this list item, paste it, and, for example, show the user how they can access a question by name, like this; this would be a specific answer. And then you can do the same thing if you want to list all responses as JSON using our registered json helper. So basically just a hint (you can see I have to zoom out a bit) to help your users know how to access this data in the next node connected to it. If you don't want to, you don't have to add this hint for your users.
So now that we have the Google form ready... I mean, we just have the UI ready; we still have to create the actual webhook that will listen to it. And besides the actual webhook, we also need the executor. Let's actually do the executor first, and I'll show you why. So inside of features/triggers/google-form-trigger we also have executor.ts, and it's basically just this; that's it. So let's change this to be the Google form trigger executor, change this from ManualTriggerData to GoogleFormTriggerData, and that's it. Just change the step.run name to be Google form trigger. And since we are here, you can obviously notice that we are using the manual trigger channel to publish the loading and success states. So yes, the Google form trigger itself will not start anything, because it will not be activated on click like our manual one is, where we just click "execute workflow". A Google form trigger can only be started through a webhook. That's why the executor is so simple: it only needs to tell the user "all right, I received an event"; that's it.
So while we are here, let's quickly create the Google form trigger channel so that we can switch to it. So inside of the google-form-trigger... oh yes, we don't keep them there; for some reason I have to improve that. We keep them in src/inngest/channels. Well, in one way they are all in one place, so at least that's good. So let's copy manual-trigger, paste it here, rename it to google-form-trigger.ts, change this to Google form trigger, change this to be Google form trigger as well, and change this to be the googleFormTriggerChannel. Basically no instance of "manual", just "Google form"; everything else is exactly the same.
Once we have this channel, we have to go inside of functions.ts, inside of src/inngest right here, and register our new channel. So make sure you import the new channel. Perfect. Now, once we have that, we can go back inside of executor.ts in our google-form-trigger folder, and we can finally replace all instances of manualTriggerChannel with googleFormTriggerChannel. We also have to fix the import now: google-form-trigger. There we go.
Now that we have that working, let's also go inside of actions.ts. So instead of the manual trigger token, let's rename this to the Google form trigger token. Let's rename the function to fetchGoogleFormTriggerRealtimeToken, and replace the channel instances with our googleFormTriggerChannel. Let's fix the import: google-form-trigger. And there we go; now we have actions.ts, which will subscribe to the Google form trigger channel.
Now that we have that, we can go back inside of node.tsx and finally revert this. Let's take a peek at how it's done inside of the manual trigger and borrow the code. Here it is; I'm just going to copy it, go back inside of my Google form trigger node.tsx, and paste the entire thing here. Let's change this to be the googleFormTriggerChannel name and fetchGoogleFormTriggerRealtimeToken. Let's remove the unused import from the actions, remove the unused import from the manual trigger, and remove the unused MousePointer icon. And I think everything else should now be fully used in our code.
Let me just double check that inside of my executor I'm using Google form trigger. Perfect. If you want to do a final check, you can highlight your google-form-trigger folder, click "Find in folder", and search for "manual". Manual trigger: perfect, if nothing shows up, you don't have any leftovers from copying the manual folder.
Great. So we are very far along with our Google form trigger. One issue, though, is that we never actually developed that webhook route. Which webhook route am I talking about? Well, this one. My apologies. This one: this endpoint right here doesn't exist. So even if we copy the Google Apps Script, create a new form, and paste it in, it would just return a 404, because our project has no idea what route that is. So let's go inside of the src/app/api folder. In here, let's create workflows; inside of workflows, let's create google-form; and inside of here, let's create route.ts. Route is a reserved file... oh, you already know that, right? We went through this. My apologies. Yes, it's a reserved file name, just like page.tsx.
So let's add some imports here: import type NextRequest, and NextResponse. Let's export an asynchronous function POST (let me fix a typo in the function name), and make the request a type of NextRequest. Let's open a try and catch right here, and resolve the catch first since it's easier: console.error("Google form webhook error") and simply log the error. It's always good to do these things even though we have Sentry, so you will have advanced error logging and it will be way easier for you to discover if anything like this breaks. That's why it's super useful to have something like Sentry here: there are so many factors that can go wrong. It can be user input, it can be your implementation, maybe the way Google Apps Script works can change; a bunch of things can happen. That's why having Sentry log errors for you, so you don't have to log them yourself, is a big, big help. So let's also return a response with success: false, error: "Failed to process Google form submission", and let's also pass in the status 500.
Now, inside of the actual POST method, what we have to do is construct the URL, new URL(request.url), and then get the workflow ID from the params: url.searchParams.get("workflowId"). In case we are unable to find a workflow ID, it means we have no idea which background job to trigger. So let me copy this response right here and simply say, instead of "failed": "Missing required query parameter: workflowId", and in this case it's most likely a user error, so 400 instead of 500. Now, if we do have a workflow ID, we can get the body using await request.json(), and then we can build the submitted form data.
Inside of the form data, we can add everything relevant to the form: for example, the form ID, the form title, and anything else we might need. But I also suggest that, whatever individual fields you choose to extract like I do right here, you always also pass the raw body. This allows your users to do all kinds of things: using our template language, they will be able to access the raw object and pull out whatever they specifically want from the Google form. The reason we extract individual fields even though raw exists is just to save the user some time and to make the templating simpler.
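Pulled out as a pure function, the parsing half of the route handler looks something like this (field names like formId and responses are assumptions about the Apps Script payload):

```typescript
type ParseResult =
  | { ok: false; status: 400; error: string }
  | { ok: true; workflowId: string; formData: Record<string, unknown> };

// Validate the query string and shape the form data, keeping the raw body
// around so templates can reach anything the individual fields miss.
export function parseGoogleFormWebhook(
  requestUrl: string,
  body: Record<string, unknown>
): ParseResult {
  const url = new URL(requestUrl);
  const workflowId = url.searchParams.get("workflowId");
  if (!workflowId) {
    return {
      ok: false,
      status: 400,
      error: "Missing required query parameter: workflowId",
    };
  }
  return {
    ok: true,
    workflowId,
    formData: {
      formId: body.formId,
      formTitle: body.formTitle,
      responses: body.responses,
      raw: body, // raw payload for the templating layer
    },
  };
}
```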
And now what we finally have to do is trigger an Inngest job. But let's take a look at how we currently do that. I think we have routers.ts in the server folder. So inside of src/features/workflows/server/routers.ts, we have the execute procedure right here, and in here we do await inngest.send, like this. So we could do that right here.
This would work perfectly fine, but here's what I want to do instead. I suggest we wrap inngest.send in our own abstraction, so that later, if we need to modify it for whatever reason, we can easily do it in one place instead of in every call site. So inside of src/inngest, go inside of utils.ts, since we already have it; let's reuse it once again at the bottom: export const sendWorkflowExecution. Let's make it asynchronous, and make the data be a workflowId, which is required, plus anything else we might want to pass. Return inngest.send with the name workflows/execute.workflow, and pass in the data. Let's import inngest from ./client. So just double check that we didn't accidentally misspell this; now we only have to spell the event name properly once, and we don't have to worry about misspelling it elsewhere. So now we can completely change the way we call this: instead of doing this, we can do await sendWorkflowExecution here and pass in the workflowId to be input.id, and then you no longer have to pass this.
And then, same thing here: you can just call sendWorkflowExecution, and instead of input.id you actually have the workflowId, so you can use the shorthand. But here's the thing: you don't want to just pass that. In our previous examples, we always began with an empty context until something happened, like an HTTP request. But this time it's different: this time we will start the background job with some context. So let's pass in the initial data here: googleForm, formData. Whoops, did I call it formData? I did. So let's just pass it as formData here.
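The wrapper plus its webhook-side call can be sketched like this; the send function is injected so the snippet stands alone (in the project it is inngest.send from ./client, and the event name follows the workflows/execute.workflow name used in this chapter):

```typescript
type WorkflowEvent = { name: string; data: Record<string, unknown> };

// Factory form of sendWorkflowExecution: one place owns the event name,
// so a typo can't creep into individual call sites.
export const makeSendWorkflowExecution =
  (send: (evt: WorkflowEvent) => Promise<unknown>) =>
  (data: { workflowId: string } & Record<string, unknown>) =>
    send({ name: "workflows/execute.workflow", data });

// Webhook usage sketch: start the run with initial context from the form:
// await sendWorkflowExecution({ workflowId, initialData: { googleForm: formData } });
```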
Then let's quickly revisit how functions.ts works. Inside of src/inngest/functions.ts, we can extract the workflow ID, and what we do here is basically just initialize... oh, looks like we already do it. Perfect. So yes, our context will either use event.data.initialData or an empty object. So far we've always had an empty object. Why? Well, because we manually started the workflow, so obviously no initial data could have come from our manual click. But now it's a different situation: now we have this Google form, which will parse all of this data from the Google form submission and then start the Inngest job with initial data. So that will be a completely different situation. Perfect.
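The distinction boils down to one line in functions.ts; as a tiny sketch (the type names here are illustrative):

```typescript
type ExecuteEvent = {
  data: { workflowId: string; initialData?: Record<string, unknown> };
};

// Manual trigger: no initialData, so the run starts with an empty context.
// Google Form trigger: initialData carries the parsed submission.
export function initialContext(event: ExecuteEvent): Record<string, unknown> {
  return event.data.initialData ?? {};
}
```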
So now that we have all of these ready, let's actually try it out. I'm going to simplify this a little bit: let me remove this one; yeah, let me remove all of them. And I'm going to add a Google form node, and then... well, just for fun, let's do two of these. This will be, I don't know, "my API call", https://codewithantonio.com, or you can use the pretty variables. Let me just remember which one it was: this one, so you have nice JSON. So "user", there we go. Save; save here. As you can see, there is now no execute button, so the only thing we can actually do here is copy the Google Apps Script and paste it inside of a Google form. But here's the thing: this still won't work, and here's why. This is the exact script that will be pasted inside of Google Apps Script, and the webhook URL is localhost:3000. That will not work: external services have no idea what localhost is; localhost is only available on your current device. So in order to resolve this, we have to add ngrok or any other local tunnel. I highly suggest ngrok because it is super reliable, and it also offers you one static domain, which will always be the same no matter how many times you start ngrok. So I'm going to quickly show you how you can set that up.
So head to ngrok.com or use the link on the screen, and once you create an account, you will be greeted with a welcome screen like this. In here, make sure you are looking at the Agents dropdown and select your agent, basically your operating system. If you are on Windows, select Windows. Do not confuse that with the SDKs; that's something different, for using ngrok programmatically, which is not what we're looking for. We want to use it as an agent. In my case, I select macOS, and I can either download it or use Homebrew to install it. So you have two steps: run brew install ngrok, and then add the auth token. Do not share this token with anyone; I show it for tutorial purposes and will rotate it afterwards, so it's a new one. To test whether you did it correctly, you should have ngrok available inside of your terminal. So if I run ngrok, you will see that it now works, and if I try ngrok http 3000, it's going to capture my Next.js instance, which is running at localhost:3000, and forward it to this public endpoint. But you can see that the URL is completely random, and every time you run it, it's going to be completely random as well. Still, it works: you can see that my app is available through that weird URL. The problem is that every time you do this it's completely random, so this old one no longer works.
There is a way you can fix that, completely for free, by going inside your sidebar... let me just remember... here it is: Universal Gateway, Domains. So click on Domains here, and if you don't have any, I think on the free tier you can just create a new domain. Basically, in here you will always have one completely free domain, and you can click on the little CLI button, and it's going to show you how to select it. So now you can see that I have this thing, and I can always run it like this, and it's always going to be this domain: a static domain for my free account, which I can then use here. I think this is very useful for development, because every time I run this, it's always the same domain, so you don't have to change your code that often. Okay, let me just close this. If you're wondering what that file was: it's just the Google Apps Script that I copied and then pasted here to demonstrate what's actually being copied to the clipboard.
So if you're using mprocs, what I would suggest you do is... well, let's do it together. Let's go inside of package.json first, and inside of package.json, let's add an ngrok:dev script. And in here, let's put... I already forgot the command; this one. There we go, like this. So then, when you are inside of your project, you can just do npx... my apologies, npm run ngrok:dev, and you will always have your local tunnel running so you can test your webhooks.
But I'm going to go even further. What if you want this to be dynamic? Well, that's completely reasonable. So I'm going to go inside of my environment file here, and under "other" I'm going to add my NGROK_URL. Keep in mind, all of this is completely optional at this point, right? I showed you how you can do it yourself using ngrok, so if you manage to have your app running that way, that's perfectly fine; you can continue with the tutorial. And if this part doesn't work for you (because I know ngrok can be a little bit tricky for Windows users), that's perfectly fine too. What I'm doing here is just making it more convenient, a better team setup, right? So yes, it would be reasonable to store that in your environment file. Then, in your package.json, you don't have to literally hardcode the URL; instead, you can reference the environment variable using the dollar sign, $NGROK_URL. But that will not work just like that; you can see it's not working now, it falls back to random URLs. The reason it doesn't work is because it's missing env-cmd. So let's
do npm install... whoops. npm install env-cmd, and save it as a dev dependency, because you don't actually need it in the regular dependencies. Let me show you that: env-cmd, I'm using version 10.0.0. Once you have that, you can add the env-cmd prefix to the script and try it again: npm run ngrok:dev. Uh, looks like it is still not working... because, yeah, my apologies, you will have to run it like this: add two dashes here. Let's try again; hopefully this time it will work. Again, not working. Okay. env-cmd... ngrok... NGROK_URL...
Let me check if I'm doing something incorrectly. NGROK_URL: it's right here, and it's right here. Hmm, maybe that's simply not the way it can work. So instead, I'm going to do the following; I will do what worked for me. I will put the env-cmd prefix here for my mprocs setup, like this, and then I'm going to go inside of my mprocs configuration file and add a new process called ngrok, with cmd: npm run ngrok:dev. And I think that now it should work, hopefully, because this is what worked for me: I had env-cmd here, and I also had it here, in mprocs. So I'm going to shut this down, shut this down, and run the dev script again; now I should have three processes running. And now it's working: you can see my ngrok running at my static URL, which I can now always easily find here, so I never have to guess what it is. I can just paste it in my browser and I have my app running. So, sorry for detouring so much; I just really wanted you to have that, because it's useful to develop like this, with one command being run and everything configured from your environment file. Again, you can completely just do what we previously had; that's perfectly fine. I'm just thinking that some of you might be doing this in a company, or maybe you want to impress your employers, so it would be kind of useful to do it like this so they can easily set up their own webhook URL.
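If the detour above was hard to follow, the end state can be summarized as one config fragment (process names, the ngrok:dev script name, the inngest dev command, and the --url flag are assumptions based on what worked here; check your ngrok version's docs for --url vs --domain):

```yaml
# mprocs.yaml sketch: one command starts all three processes.
# Assumes a package.json script  "ngrok:dev": "ngrok http 3000 --url=$NGROK_URL"
# and NGROK_URL defined in .env (loaded by env-cmd, so the shell running the
# npm script can expand $NGROK_URL).
procs:
  next:
    cmd: "npm run dev"
  inngest:
    cmd: "npx inngest-cli@latest dev"
  ngrok:
    cmd: "env-cmd -- npm run ngrok:dev"
```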
All right, so now that we have that ready, make sure that you have your ngrok running on any URL; it really doesn't matter, you just need to know what the URL is. And then we are ready to do this, because if we try to access our webhook URL through our new forwarded URL, we can now do it. So let's create a Google form. Here I am in the Google Forms tab, and I'm going to click "Start a new form". I'm going to call this nodebase... no, thanks... nodebase test. And for the question here, let's do "What endpoint should I fetch?", and let's make this... how can I make this a text-based answer? I have no idea, I'm not that good at Google Forms. Okay, so here, yeah, I just want a short answer, like a URL. I think this will be fun. Make it required.
And let's publish this; let's click Publish. Okay, and now I should be able to copy the responder link. I should be able to copy it, and in a new tab I'm going to paste it. And there we go: we have this super simple question, "What endpoint should I fetch?". And now in here, users would, for example, answer something like this, right? The JSON placeholder API. So the goal is to create the following thing: when the Google form is submitted, read the user's answer, and then we're going to use a variable here instead. That's kind of my idea behind it. Now, obviously, if you just submit, nothing will happen, because we didn't add any script. So, I haven't done too many of these scripts and this is all new to me, so you're going to have to bear with me; I'm not an expert at Google Forms. I will kind of try to follow my own guide here, maybe get stuck a few times, but we will get through this. Let's do it together.
All right. So first things first: open your Google form, click on the three-dots menu, and choose Apps Script. Once we open that, it should load the Apps Script editor for this specific form. Looks like it's taking a while, so I'm just going to pause the video until it loads.
All right, here it is. So let me zoom in. I will call this project the Nodebase Google Apps Script. And this is it; this is the function. You can see it's not a JavaScript file, it's a .gs (Google Apps Script) file. And now in here, I can copy the provided Google Apps Script, and I should be able to paste it like this.
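For reference, the script we just pasted does roughly the following. This is a hedged sketch, typed in TypeScript purely for illustration (the real file is plain JavaScript), and the payload shape and the WEBHOOK_URL placeholder are my assumptions, not the exact course script.

```typescript
// Provided by the Apps Script runtime; declared here only so this sketch compiles.
declare const UrlFetchApp: any;

// Placeholder: in the real script this is your forwarded ngrok URL + workflow ID.
const WEBHOOK_URL =
  "https://YOUR-FORWARDED-URL/api/webhooks/google-form?workflowId=YOUR_WORKFLOW_ID";

interface FormSubmitEvent {
  source: { getId(): string; getTitle(): string };
  response: {
    getRespondentEmail(): string;
    getItemResponses(): {
      getItem(): { getTitle(): string };
      getResponse(): string;
    }[];
  };
}

// Flatten the submission into a question-title -> answer map.
export function buildPayload(e: FormSubmitEvent) {
  const responses: Record<string, string> = {};
  for (const item of e.response.getItemResponses()) {
    responses[item.getItem().getTitle()] = item.getResponse();
  }
  return {
    formId: e.source.getId(),
    formTitle: e.source.getTitle(),
    respondentEmail: e.response.getRespondentEmail(),
    responses,
  };
}

// Installed as the "on form submit" trigger; POSTs the payload to our webhook.
function onFormSubmit(e: FormSubmitEvent) {
  UrlFetchApp.fetch(WEBHOOK_URL, {
    method: "post",
    contentType: "application/json",
    payload: JSON.stringify(buildPayload(e)),
  });
}
```

The only part you edit is the URL constant, which is exactly the change we make next.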
And now basically what we have to change
is this. It shouldn't be localhost 3000.
It should be our running URL right here.
So since I have it inside of my
environment, I'm just going to copy it
from here
and make sure you are using the HTTPS
option.
So change this to HTTPS
and then paste it here. Make sure you don't add any double slashes: so, https:// and then your URL, because we are selecting HTTPS right here. In production, obviously, you wouldn't have to change this, because it would be the correct domain; in development it's localhost, so it's a little bit harder to do. And I think that this saves automatically. Oh, you can do command S and then it will save to Drive, or you can just click here. Okay, so that works. And
now let's go ahead and go inside of
triggers here. I think that's the next
step. Yes, let's go ahead inside of
triggers.
And click add a trigger. There we go. Choose which function to run: the form-submit function should be listed, because you just saved the file which has that function. And let's go ahead and see what else: choose event source "from form", and event type "on form submit". Perfect. And let's click save right here.
And let's see what will happen. All
right. script authorization failed.
Please check your pop-up blocker
settings and try again. So, I'm going to
click here and I'm going to allow
pop-ups and redirects from this website.
And I think I know exactly what's
happening here. So, I have to reverify
my account now. And this is very
important. What you're seeing right here
is warning you specifically. So when I
first saw this message, I thought, "Oh
this is a bad implementation because
you know, my users will see this and
they're going to have to, you know, see
this message." That's not nice. But this is only warning the script creator, the form owner, right? Why is it warning us? Well, because these scripts can be dangerous, right? Who
knows what we just copied here and
pasted? Imagine someone who is not as
technically capable as you are. They can
very easily hide some malicious code
here and tell them, "Yeah, just paste it
here." So that's why Google is telling
you that this is requesting access to sensitive info, because it is: we are accessing the Google form submission data and sending it to some random endpoint. That is what Google is warning us about. But obviously, you can also
get rid of this warning too once you
verify this app with Google. So those
are production steps. So this is 100%
safe. You can click go to notebased
Google apps script even though it says
unsafe. I know this sounds very weird
but that's because look at what it will
do. It will view and manage your forms
in Google Drive and it will connect to
an external service. But we are the ones
doing that, right? So that's why you can
allow this to happen. I know it sounds
sketchy. Uh but
it's perfectly fine, right? You
developed this. You saw the exact script
that is pasted inside. You know exactly
what it does.
And now it says loading data. This may
take a few moments. I think at this
point you might even be able to submit
or maybe you just have to create a new
trigger again. I know this part was a
bit weird for me too. Here we go: owned by me, deployment head. To be honest, I have no idea what these deployments really mean, because all I seem to need to do is save the file and it will work. I'm not too familiar with Apps Script; I just managed to get it to work this way.
Perfect. So, we now have this. Make sure
it's on form submit from form on form
submit event type and make sure that the
code is exactly that you have copied
from here. Right. And the only thing you should have modified is the webhook URL, to use your active ngrok tunnel. And I think that this should be it. Keep in mind that this will obviously only work for, whoops, for this specific endpoint.
So, let's go ahead and try it out. I
have no idea what we can expect
actually. So, let me just see what this
error is. Uh, some hydration error. I'm
not too sure what that is. I will focus
on that later. For now, let's submit a
form. So, I will just copy the link. I
will paste it in one of my tabs.
And yeah, let me go ahead and try and
add some useful URL here.
Let's actually try with this one.
It's more recognizable. And let's click
submit.
Let's wait here. Maybe we can already
see it being submitted here. Maybe we
won't. I have no idea. Uh, looks like it
is not being submitted right here.
Let's go ahead and check it here. Looks
like no results are found here. This
would mean that it is not working, I
believe. So, let's go ahead inside of
the triggers here. Maybe it will take
some time for the first one. I don't
know. I'm not sure. Error rate is a 0%.
That's good.
Let's go ahead and click on executions
here. Looks like something did trigger
it and it did manage to complete.
If I click view trigger
it just redirects me here.
Still I cannot see any runs happening
here nor here. Let me try my Inngest here.
Can I maybe see a hit somewhere here?
Oh, here it is. We have it. API web
hooks Google form.
It's a 404. Could I have maybe made a typo here? Source, app folder, api... did I name it workflows? I named this workflows instead of webhooks. My deepest apologies. Select yes to update imports; that's going to open this cache file, and you can just save it, close it, and close that folder. That was the problem. I think that now it should work. Let's try again. It's super easy
to retry. Just click submit another
response. Let's go ahead and try code
with antonio.com again. Let's click
submit.
And will it maybe work now?
Oh, there we go. It failed. But no
executor found for node Google form
trigger. All right, this is actually
good news even though it doesn't sound
like it. We forgot to add our executor
but it's good because something tried to
trigger the executor. So, inside of our triggers, we have Google form trigger and we have executor.ts.
But yeah, we never actually use this. If
you search, we don't use it absolutely
anywhere in our code except in its
definition.
So let's go ahead and find it. I think this is in executions... maybe not. Okay, I think I know the name of the file. All right, it is in source/features/executions/lib/executor-registry.
Uh yes, we have this to-do that we have
to do. I will take care of that. But
let's add a node type dot Google form
trigger and let's use Google form
trigger executor. Make sure you have
imported it.
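To see why the earlier "no executor found" failure happened, here is a minimal sketch of what a registry like this might look like. The types and names here are assumptions rather than the exact course code; the point is just that a node type missing from the map fails loudly at execution time.

```typescript
// Hypothetical executor registry sketch; shapes are assumed, not the course's exact code.
type ExecutionContext = Record<string, unknown>;
type NodeExecutor = (ctx: ExecutionContext) => Promise<ExecutionContext>;

const registry = new Map<string, NodeExecutor>();

export function registerExecutor(nodeType: string, executor: NodeExecutor) {
  registry.set(nodeType, executor);
}

export function getExecutor(nodeType: string): NodeExecutor {
  const executor = registry.get(nodeType);
  if (!executor) {
    // This is the "No executor found for node ..." failure we just hit.
    throw new Error(`No executor found for node type: ${nodeType}`);
  }
  return executor;
}
```

Registering the trigger is one line, e.g. `registerExecutor("GOOGLE_FORM_TRIGGER", googleFormTriggerExecutor)`, which is exactly the step we forgot.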
And I think that now third try. I think
this should work fine. Let's refresh for
good luck. Oh, there we go: it retried itself and then it worked. That's great. But let's try ourselves again from scratch. So, I'm going to go ahead and click submit another response, and I'm going to do https://codewithantonio.com. Submit.
Let's wait. Let's wait. It should
highlight any second now. Let's see.
There we go. Finally, it works. We
successfully triggered using a
third-party service. And let's go ahead
and take a look. Instead of our Google
form trigger, we have Google form
variable. We have the form ID, form
title, we have raw data, we have
respondent, well, we don't have
respondent email because we didn't make
it required in the Google form, but we
do have responses. What endpoint should
I fetch? Like this.
So if I am correct, the way I could now do this is by using the Google form variable: Google form, dot, let me see, .responses, and then quoting this question title. I think, if this doesn't work, it's probably because of the whitespace, so I should rename the question to be a single word. But let's try it. Let me go ahead and click save here.
Uh, okay, I see. Let's go inside of the HTTP request dialog.tsx, and let's go inside of the form schema here. Yeah, I think validating this is complicated. So let's make it a string, and let's go ahead and chain this to at least be required, like this. So yeah, the endpoint will be any non-empty string, and I think that now this save should work.
And let's click save. And we can refetch
now. So now
you can see that basically when the form
is submitted, we're going to read from
the context Google form.responses and
specifically that question.
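As a rough illustration of what this kind of variable substitution involves, here is a tiny resolver. It is a sketch under my own assumptions (the real Nodebase templating helpers may use a different syntax), but it shows why a question title with spaces breaks a dot path like googleForm.responses.url while a single-word title works.

```typescript
// Hypothetical sketch of {{variable.path}} resolution; not the course's exact code.
export function resolveTemplate(
  template: string,
  context: Record<string, unknown>
): string {
  return template.replace(/\{\{(.+?)\}\}/g, (_match, path: string) => {
    // Walk dot-separated segments, e.g. "googleForm.responses.url".
    const value = path
      .trim()
      .split(".")
      .reduce<unknown>(
        (acc, key) => (acc == null ? acc : (acc as Record<string, unknown>)[key]),
        context
      );
    // Unresolvable paths collapse to an empty string rather than throwing.
    return value == null ? "" : String(value);
  });
}
```

With a context like `{ googleForm: { responses: { url: "https://..." } } }`, resolving `{{googleForm.responses.url}}` yields the submitted answer; a title such as "what endpoint should I fetch?" cannot be addressed as a plain dot path, which is why renaming the question to a single word fixes it.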
And now let's go ahead and try and make
it more fun. So I'm going to go ahead
and submit my form again. So submit
another response. What endpoint should I
fetch? I'm going to use the JSON Placeholder users/1 URL (https://jsonplaceholder.typicode.com/users/1). And I will click submit.
I have no idea if this is going to work
or not because of the white space in the
question name. So let's see. Yeah, it
does not work. I'm guessing because of
that specifically. Let's see. Yes
it cannot do that.
So, can I make it simpler by calling
this URL?
Save.
Submit another response. Now, it's
called URL. So, I think that I can just
modify this to be well, I think it
should be just URL now
because it's just the name of that
question. So, let me save that.
I'm going to do another refresh here. I
mean, at this point, what we wanted to
implement for this chapter is finished.
I'm just trying to make some fun
conclusion before we wrap up the
chapter. So, let's go ahead. Fingers
crossed. There we go. Perfect. So, if
we've done this correctly, HTTP request
should fetch the first user. And that's
exactly what it did. It fetched the user
with an ID of one because that is
exactly what we submitted
for in the Google form which we can
prove right here. Respondent responses
answered the URL should be users one.
Amazing amazing job. You just
implemented a thirdparty service trigger
to your app. What an amazing job you've
done. You've learned so many different
things in this one chapter. And yes, I
completely forgot about this. I will
make sure that we resolve that. So
let's go ahead and let's finally merge
this thing now, shall we? So, 22 Google
form trigger. I'm going to go ahead and
open a new branch here. 22 Google form
trigger.
There we go. I'm going to go ahead and stage all 20 of my files here, including the new ones. If you didn't change that, you might have 19 or 18 files, I don't know. But yes, these ones should be the important ones. Let's go ahead and do the 22
Google form trigger commit. And let's go
ahead and publish the branch. Now, as always, let's go ahead and open a new pull request. And since this was a
big one, let's go ahead and review it.
And here we have the review by code
rabbit. So new features, we added Google
form trigger support. Workflows can now
be triggered by Google form submissions
with web hook configuration and
real-time status monitoring.
Enhancements: we added ngrok integration to the development environment for local webhook testing. Improvements: relaxed
endpoint validation to accept non URL
string inputs. This refers to our last
change where we basically allowed the
entire endpoint URL field to be a
variable otherwise it would not work. So
let's take a look at the sequence diagram, even though I think it is pretty simple. So the Google form makes a POST request to /api/webhooks/google-form with the required workflow ID and form data. We then pass that information to send workflow execution, which fires the Inngest background job. We then execute the Google form trigger, which is very simply used to publish the loading status and forward the context to whatever is the next topologically sorted node.
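The route in that diagram can be sketched framework-free like this. It is a hedged approximation: the real route uses Next.js request and response objects and the app's actual send-workflow-execution helper, so the names here are assumptions.

```typescript
// Framework-free sketch of the Google Form webhook flow described above.
type WebhookRequest = { url: string; body: unknown };
type WebhookResponse = { status: number; body: unknown };

export async function handleGoogleFormWebhook(
  req: WebhookRequest,
  // Injected stand-in for the app's helper that fires the Inngest event.
  sendWorkflowExecution: (args: {
    workflowId: string;
    initialData: Record<string, unknown>;
  }) => Promise<void>
): Promise<WebhookResponse> {
  const workflowId = new URL(req.url).searchParams.get("workflowId");
  if (!workflowId) {
    return { status: 400, body: { error: "Missing workflowId" } };
  }
  try {
    // Store the form payload under the "Google form" variable in the context.
    await sendWorkflowExecution({
      workflowId,
      initialData: { googleForm: req.body as Record<string, unknown> },
    });
    // A 2xx response tells the sender the delivery succeeded.
    return { status: 200, body: { success: true } };
  } catch {
    return { status: 500, body: { error: "Failed to process Google Form webhook" } };
  }
}
```
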
As per some comments in here, it says that this script assumes authentication is already configured, but this prerequisite isn't documented. So yes, all of this is true. If you give this source code to someone, you should probably tell them that ngrok is required and that they need to have ngrok set up. Another thing it mentions here is that the ngrok URL parameter suggests a reserved-domain configuration, which it claims is an ngrok Pro feature. So yes, multiple reserved domains are, but a single domain is not, so it's fine to have this on the free tier. In here
it is warning us that technically
anything can access this web hook right
now, which is completely true. So I'm
going to see if there is a simple way I
can show you how to authorize your web
hooks. But if you're interested, there is this service called Svix, which I know Clerk uses to protect their endpoints. So it could be something you could explore for production use: basically, webhooks as a service. For now, let's just focus on this. So yes, right now anyone could access this. I will try to implement something simple, so at least you have to know the secret to access it, which will be enough to not make this essentially a public endpoint. It's going to work something like this. Yeah.
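One simple shape for that idea is requiring a shared secret in the query string. This is a sketch under my own assumptions (the course may implement it differently, or you may prefer Svix); the constant-time comparison avoids leaking the secret through timing differences.

```typescript
import { timingSafeEqual } from "node:crypto";

// Returns true only if the webhook URL carries the expected shared secret.
export function isAuthorized(url: string, expectedSecret: string): boolean {
  const provided = new URL(url).searchParams.get("secret") ?? "";
  const a = Buffer.from(provided);
  const b = Buffer.from(expectedSecret);
  // timingSafeEqual requires equal lengths, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The route would then reject requests with a 401 before touching the workflow, and the Apps Script (or Stripe dashboard) would include the secret in the configured URL.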
And in here it noticed a typo: I said container, and I should say contain in here. Next, we have a placeholder string; we should replace the literal "webhook URL" text with the actual webhook URL from above, since the copy button already embeds it. So we can remove that part. Correct. In here,
as in the previous pull requests, we don't handle any errors here. So yeah, we could wrap that inside of a try/catch and then catch errors. And in here it's telling us that node.tsx should have "use client". Since our previous ones didn't have it, I think we don't have to add it here, simply because its parent component is already a client component.
And in here it's basically telling us
the same thing that it did in the uh web
hook part since this is the Google apps
script. later when we add some kind of
validation to that web hook, we should
also include authorization
property here so that it can access it.
All right, so great comments from CodeRabbit; some serious security issues were caught here. Let's go ahead and merge that pull request. Go inside of main and make sure to click on synchronize changes. This will synchronize your main branch with your newly merged one. Click on the graph to convince yourself that 22 was the latest
merged one. Amazing amazing job. So very
well, I wouldn't say challenging, but complex chapter, with some new elements like Google Apps Script and learning how all of that works. So we added the Google form trigger node, dialog, executor, realtime channel, and webhook. We created a Google form, and we even discovered this Apps Script thing. We pushed to GitHub and reviewed the pull request. Amazing, amazing job, and see you in the next chapter.
In this chapter, we're going to add
another trigger to our project. Just
like we've added the Google form
trigger, we're now going to add the
Stripe trigger. Basically, when certain
Stripe events reach our application's
web hook, we're going to initiate the
workflow based on that workflow ID. But
before we do that, let's go ahead and
resolve one TypeScript error that I keep
postponing.
Let's go ahead inside of our executor
registry.
And in here, we have a problem with our
HTTP request. So, what I'm going to do
is I'm going to go inside of the HTTP
request executor here, which you can
find inside of source features
executions, components, HTTP request
folder, and then find the executor.
Let's go ahead and bring back the
optional question marks here. This will
then resolve the problem right here. So
we can remove the to-do
but now we have a problem inside.
One easy way of fixing this is, instead of doing the validation checks outside of step.run, to simply do them inside. The reason this is necessary is that TypeScript's control-flow narrowing works in a very specific way. Even though we just validated that the method exists, that the variable name exists, and that the endpoint exists, there is nothing guaranteeing that once this step.run callback actually runs, the data itself won't have changed. That's why in here it still thinks it can be undefined, and same with the endpoint: it still thinks it can be undefined. So one easy way of fixing this is just moving all of these if checks inside of step.run. So let's go ahead and add
them here. And you can see that the
moment we add them, all of the errors go
away. So it's actually that simple to
resolve this. And now when you hover
over data endpoint, it's a string
method. It's uh
I think it the error is no longer here.
So I think it's just showing us options
method. Okay. Yeah, I see. But the
method itself, yes, it definitely
exists.
Perfect.
And same thing for the variable name, which was the problematic one. Basically, it's no longer causing us any errors. I think that if you typed data.variableName here and hovered over it, there we go, it tells you it's a string.
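Here is the same pattern in isolation, a standalone sketch (not the course's exact executor) showing why the narrowing has to happen inside the callback: TypeScript drops narrowing on mutable properties when entering a closure, because the closure may run later, after the property has changed.

```typescript
interface HttpRequestData {
  endpoint?: string;
  method?: string;
  variableName?: string;
}

// Minimal stand-in for Inngest's step object; only the shape matters here.
type Step = { run: <T>(name: string, fn: () => Promise<T>) => Promise<T> };

export async function executeHttpRequest(data: HttpRequestData, step: Step) {
  return step.run("http-request", async () => {
    // The checks live inside the closure, so everything below stays narrowed.
    if (!data.endpoint || !data.method || !data.variableName) {
      throw new Error("HTTP request node is not fully configured");
    }
    const endpoint: string = data.endpoint; // OK: string, not string | undefined
    return { endpoint, method: data.method, variableName: data.variableName };
  });
}
```

Had the if checks stayed outside step.run, the assignment to `endpoint` would still be flagged as possibly undefined inside the closure.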
So just by moving the if checks inside the scope of this function, we satisfy TypeScript's flow analysis, and now it knows that these must exist, because they can no longer be mutated after those if checks. Perfect. So let's go ahead and
close that. Now that we have this ready
let's go ahead and let's implement the
Stripe trigger. So, this will be quite
simple as we've recently just
implemented the Google form trigger.
Let's go ahead and copy most of the
files. I'm going to go inside of features/triggers. I'm going to copy the Google form trigger and paste it in the components, and I'm going to rename this to stripe trigger.
Inside of stripe trigger, let's go ahead and modify all of these files. I
think that we can already delete the
utils because there will be no Google
form script. So let's remove the entire
utils file. Then let's go inside of node
of the new stripe trigger. And let's go
ahead and rename this to stripe trigger.
Uh is it called a node in the end? I
just want to be consistent. Yes, it has
the keyword node at the end. So, let's
call it stripe trigger node. Perfect.
Let's go ahead and change this to
stripe. And let's do when
stripe event is captured or anything
like that. Later, if you want to be more
specific, uh for example, you could add
a description which will be based on the
trigger dialogue where users could
choose whether they want to listen to uh
invoice events, customer purchases, or failed payments, right? Any specific
events. So in this chapter, I'm just
going to show you the kind of overall
idea of how you would do this and then
you can specialize it into one specific
case or allow the user to select what
case they want. That's why I'm doing
such a broad description here. And now
we have to change this. So instead of using google-form.svg, go inside of my Nodebase assets folder, and in the images there you should find stripe.svg. Once you have stripe.svg, go inside of public/logos and add it there. Perfect. Oops, I added two of them. Now, let's go back inside of our node.tsx for the stripe trigger. Make sure you are in the correct one. And let's change this to stripe.svg, and I can leave this as is. Now let's go ahead and
add it to our node components. I think
this is inside of source config folder
node-components.
Uh and then we have to just copy and
paste this and change this to be stripe
trigger. Obviously we have an error here
because we haven't added the stripe
type. But let's just prepare this with
stripe trigger node.
Now let's go and set up our
schema.prisma and let's add stripe
trigger. Once we add that, let's go
ahead and do npx prisma migrate dev. And
let's go ahead and give it a name of
stripe trigger node.
So very simply stripe trigger node. And
after you do that, it should synchronize
the database.
As always, I recommend restarting your Next.js server and your Inngest server, and even restarting ngrok wouldn't hurt. Let me just go ahead and quit the entire thing and do npm run dev:all. Looks like ngrok was running twice, so there we go. Now it's fixed.
Great. Now that I have the stripe
trigger here, I should no longer have
the error inside of here. If you do just
restart your VS code or your TypeScript
entirely.
Now we have to go inside of components, inside of the node selector, and we have to copy the Google form trigger entry. Change this to stripe trigger and use the stripe SVG. Let's go ahead and do "Stripe event" here, and let's change the description to "runs the flow when a Stripe event is captured", or anything that you feel is sufficient to explain how this workflow will be triggered. Let's go ahead and refresh our app to make sure everything is working; every time you restart Next.js, you should do this. And it looks like, when I deleted that utils folder
inside of my stripe trigger, it messed up the components. Inside of the stripe trigger dialog.tsx right here, it still uses the generate Google form script helper, so just remove that. And let's go ahead and remove the entire onClick here, like this.
This way we shouldn't have any errors.
Perfect. And now when we click here, we
should have a stripe event here. Runs
the flow when a stripe event is
captured. There we go. When stripe event
is captured. Perfect. So we can now
visually add stripe trigger to our app.
Now let's go ahead and create the proper
dialogue for it.
So the dialog itself will actually be quite similar to the one in the Google form trigger. Let's go ahead and make sure we rename it first: instead of Google form trigger dialog, this will be Stripe trigger dialog. And we are still going to use
the params. We still need the workflow
ID and we still need to generate the web
hook URL. But instead of going to web
hooks Google form, we're going to go to
web hooks stripe.
The copy-to-clipboard function can stay exactly as it is. And now let's just
change the title and the description to
describe exactly what we are doing in
regards to the Stripe event. So Stripe
trigger configuration.
And let's go ahead and change the
description to something useful.
Configure this web hook URL in your
Stripe dashboard to trigger this
workflow on payment events. Obviously
again a broad description. Later you can
specify this to be something specific to
make it a bit more useful to your
viewers. I mean to your users uh these
fields will be exactly the same. There's
nothing we have to change here. And for
the setup instructions, well, we should
just, you know, change what we have to
do. So, I'm just going to go ahead and
show you. The first step will be: open your Stripe dashboard. After that, go to Developers, then Webhooks. Then click add endpoint. We're then going to paste the endpoint, which users will see in the input above. Users will then have to specify which events they want to listen for, for example payment_intent.succeeded. And let's go ahead and tell them to copy and save the signing secret.
And then uh in here we have the Google
apps script which we can completely
remove because no such thing exists for
the Stripe event. And for the available
variables here, uh, you can be as
creative as you want. For example, I'm
just going to go ahead and add a few
here. So, instead of this unordered
list, I'm just going to remove the
entire
content inside so it's empty. For
example, one thing you can do is the payment amount, using a list element with a code element (class name bg-background px-1 py-0.5 rounded) that renders stripe.amount, because stripe will be the name of the variable where we're going to store the initial context, and that will be the payment amount. And then you can just go ahead and add a bunch of these for anything useful, for example the currency, or maybe the customer ID, or you could just show the users how they can access the entire Stripe object by using our JSON helper. So again, this is just UI help; it doesn't matter if you make a typo here, it's just to make it easier for your users. Another useful one might be the stripe event type, so they can see exactly what happened. Great. So once we
have this, let me see. Can I remove
anything? I think everything here is
ready to go. We can now go inside of the
node.tsx
and we can change this to be stripe
trigger dialogue. Let's go ahead and use
it right here. There we go.
And immediately now you can see stripe
trigger configuration. Uh in here it
uses web hooks stripe. Perfect. I can
click copy here. Setup instructions
include the stripe dashboard. Uh and
available variables are listed here.
Perfect.
So what we have to do now is obviously
update the node status so it uses the uh
Stripe channel and not the Google form
channel.
So, that shouldn't be too hard. Let's go ahead and go inside of inngest channels. Copy the Google form one, paste it here, and rename it to stripe trigger. Go inside of your newly created stripe trigger and replace all instances of Google form with Stripe. So: stripe trigger channel name, stripe trigger execution, and this will be stripe trigger channel. Everything else will stay exactly the same. Once we have the stripe trigger, let's go ahead inside of inngest functions.ts and make sure to register the new stripe trigger channel. Just make sure you have imported it.
Great. Once we have that, we can go
ahead inside of
well back where we created this inside
of features triggers stripe trigger
actions.ts
and let's just modify it. Right? So
immediately we can change this import to
be from stripe trigger stripe trigger
channel type of stripe trigger channel.
Use stripe channel here. Let's go ahead
and rename all instances of Google form
to be Stripe.
And I think that should be enough.
Perfect. Now, obviously, we have an error here, because we have to go back inside of the node of the stripe trigger and import the fetch stripe trigger realtime token function. There we go. And, oh, the only thing we have to change here is the import: stripe trigger, stripe trigger channel name. I think that might be it. And what I always like to do is right-click on the stripe trigger folder, click find in folder, and search for Google form. Yeah.
And now you will see everything that we have left to fix: it's the executor that we haven't fixed yet. Perfect. So immediately change this from Google form to stripe. I am inside of executor.ts, inside of the stripe trigger folder. And in here, instead of Google form trigger executor, it's going to be, let me just first resolve the name, stripe trigger executor. Then let's change the channel to be the stripe trigger channel. All instances should also use the stripe trigger channel, and the step name should be stripe trigger. There we go. And now I think we're done. If I go ahead and do find in folder again, Google form (or Google) no longer exists as a search result. Perfect.
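Conceptually, a trigger executor does very little, which this hedged sketch with injected dependencies tries to capture. The real executor's signature and channel publishing differ; the names here are assumptions.

```typescript
type ExecutionContext = Record<string, unknown>;

// Injected stand-in for publishing status updates on the realtime channel.
interface ExecutorDeps {
  publish: (msg: { nodeId: string; status: "loading" | "success" }) => Promise<void>;
}

export async function stripeTriggerExecutor(
  nodeId: string,
  context: ExecutionContext,
  deps: ExecutorDeps
): Promise<ExecutionContext> {
  await deps.publish({ nodeId, status: "loading" });
  // A trigger node has no work of its own: the webhook already placed the
  // event data under the "stripe" variable, so we just forward the context.
  await deps.publish({ nodeId, status: "success" });
  return context;
}
```
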
So now I think we might actually be ready to try this. The only way we can really try it, since we don't have any transform nodes, is to just go ahead and add this node, just so we have two nodes available. Let's go ahead and just use anything here; set whatever you want and click save. And basically, what we have to do now is configure this inside of Stripe to make it work.
And before we actually do this in Stripe, it would be a good idea to create the webhook; I completely forgot that we need the webhook as well. Lucky for us, it is very similar to what we had before. So just go inside of source, app, api, webhooks, Google form. Go ahead and copy it, paste it here, and rename it to stripe. If it asks you to update the imports, you can select yes, and then you get this cache file; you can save that, close it, and make sure to close that folder, it's not important. Go inside of the stripe route.ts. And let's go ahead and improve
this. So this is exactly the same. We
still need the workflow ID. And now the
form data will be a little bit
different. So what I suggest that you
keep here is at least the event
metadata. Basically allowing the user to
quickly access uh the event ID for
example.
Then you could do event type basically
just those useful things. Obviously you
would modify this you know how your
users are using this and what they
expect what they think it's better. uh
and then to give them all the other
useful things I would suggest using raw
like this. And in here they can
basically access uh customer ID, amount
currency, session ID, payment status
customer email, description of the
product, everything. You can simplify this for them as much as you want, right? But do keep in mind that
you would probably want to do it per
stripe event because every stripe event
has a different data object. So because
of that, you should probably be careful
with what you pass here. Let's just go
ahead and fix this. So instead of Google form webhook error, this would be "Stripe webhook error" and "failed to process Stripe event". Now, in order to send the workflow execution, we have to modify the initial data to be stripe, and I will just pass in the stripe data here, so this would be the stripe data. There we go.
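The shaping described above can be sketched like this. The field names are assumptions (real Stripe events carry different fields per event type, which is exactly the caveat mentioned), so treat it as an illustration rather than the course's exact route code.

```typescript
// Minimal shape of a Stripe event for this sketch; real events have more fields.
interface StripeEventLike {
  id: string;
  type: string;
  data: { object: Record<string, unknown> };
}

// Pick a few convenient fields, and expose everything else under "raw".
export function buildStripeContext(event: StripeEventLike) {
  return {
    eventId: event.id,
    eventType: event.type,
    amount: event.data.object["amount"],
    currency: event.data.object["currency"],
    raw: event.data.object,
  };
}
```

The workflow then receives this under the `stripe` variable, so templates like `stripe.amount` or the JSON helper over `stripe.raw` work as advertised in the dialog.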
Perfect. So once we have this I think we
should be ready. So, make sure it is
inside of web hooks, stripe. And uh now
we can go back in here
and basically, depending on when you created your Stripe account, you might have this sandbox thing, or maybe you won't have it. Just go ahead and try to create a new one. Keep in mind that I think you can only create one sandbox if you don't verify your business. Verifying your business basically means that you need actual business information. So if you're just doing this for development, you have to use the single sandbox that you have, or just create a completely new account on Stripe, and then you will get this new sandbox thing. And in here you
have to access web hooks somehow. So I
just search for them in here and click
web hooks. You can see I already have
one so I'm just going to delete it so it
doesn't confuse you. And now there are
two ways you can do this. You can do it
with an actual production ready URL link
which would be useful for when you
actually deploy this application. But
another way would be here test with a
local listener here. So in order to make
that work you need to download the
stripe CLI.
It is as easy to install as ngrok. So just go ahead and follow the install instructions for the Stripe CLI on macOS, Windows, or Linux.
So depending on what you use, you can go
ahead and see all the instructions. So
in order to check if you did this
correctly, go ahead and type Stripe and
you shouldn't see any error. Instead
you should see the flags that are
available.
All right. Now, let me go back here and
let's try and trigger an event here.
Let's try and make that happen. So click
test with a local listener here. Let's
go ahead and first do Stripe login.
There we go. So, go ahead and open this.
Once you open it
you should allow access. Just always
confirm that what you see here is
exactly what you see here. So, you're
not accidentally allowing access to some
other device.
You may now close this window. Perfect.
There we go. So, this step is now correct. And now we have to do the other step, which would basically be localhost 3000, api, webhooks, stripe, and yes, I think we also need the workflow ID query, basically this right here, so just copy that part too. So: forward-to, and then this entire URL. Obviously, in production this would be much easier; your user would just copy this, go ahead, and actually add the webhook endpoint here. But we can't do that, because Stripe cannot target localhost. What you can do is use your ngrok forwarded URL and add it here as the actual destination, but I think it's just simpler to do this too. Okay, "no matches found". Maybe we need to wrap this in quotes, like that. Does that work?
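For reference, the full local-listener setup looks roughly like this. The workflowId value is a placeholder for whatever your node's dialog shows, and quoting the URL is what avoids the shell's "no matches found" glob error on the question mark:

```shell
# Log in once, then forward Stripe test events to the local webhook route.
stripe login
stripe listen --forward-to "localhost:3000/api/webhooks/stripe?workflowId=YOUR_WORKFLOW_ID"

# In a second terminal, fire a test event:
stripe trigger payment_intent.succeeded
```
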
Okay. So, now that we are connected
here,
let me just see. We should try and
trigger an event. So, I'm going to open
a new tab and let's see, did I do this
correctly or not? So, Stripe trigger
payment intent succeeded.
And it looks like it is receiving something, but it's sending back a 500. So I don't think that this event fired. Is this the one that just happened? I'm not sure, I'm trying to figure it out. Oh, no: "No executor found for node type: stripe trigger." Oh, I think everything is actually working. We're just making one mistake here.
We forgot, inside of features/executions/lib, in the executor registry, to add the node type stripe trigger mapped to the Stripe trigger executor. Make sure you import it. It's the same mistake I made with the Google form trigger.
And I think that now it should actually
work. So let's just go ahead and focus
on this. So it should just trigger this.
And then this should trigger that. As
simple as uh well that. Oh yeah. You
need to have this forwarding here.
And let's go ahead and do this again. Let's wait a second. And there we go, we successfully triggered our app. It looks like when we ran this, it ran a few times. That's probably because it received, as you can see, a lot of events. I think that's because inside of route.ts for Stripe, I'm accepting basically anything here. You will probably want to limit this to the specific events your user required, with that workflow ID of course, and then it shouldn't react that many times. I'm not sure if payment_intent.succeeded only fires once or multiple times, because we can see that in here it fires a lot. Now I can still see a 500 here.
I think that's because I always need to return something. So return NextResponse.json, let's pass success: true, and let's add a status of 200.
One thing I forgot to tell you is that webhooks always need to end with some kind of successful response; otherwise, the sender will keep retrying. So let's try this again. I'm running this locally.
I'm going to try and trigger this again
since I already know it's working. There
we go. Now, we have 200. Perfect. And
this is still working. Great.
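A minimal sketch of that acknowledgement. The function name here is illustrative, not the actual route handler from the project, but it shows the shape of the 200 response that tells Stripe to stop retrying:

```typescript
// Hypothetical helper: every webhook route should end by acknowledging
// receipt with a 2xx, so the sender (Stripe here) stops retrying.
function acknowledgeWebhook(): Response {
  // Response.json is available in Node 18+; in a Next.js route handler
  // you would return NextResponse.json with the same shape instead.
  return Response.json({ success: true }, { status: 200 });
}
```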
So yes, make sure that you add that here, and you should probably also add it in the Google form one. Always end by returning some kind of success message, so the sender knows it doesn't have to retry the webhook; there's a limit to how long they will retry before they turn the endpoint off. And it is that easy to do with Stripe. So if you're confused about why we need this local listener: it's just easier to demonstrate in development mode, but you don't have to use it.
So, if I just close this, what I would usually do is: in here, I wouldn't see localhost:3000. I would see mydomain.com, or you could use the ngrok public domain. And then inside of here, you would click add destination. Your users would click that. I'm not too familiar with Stripe, but I think it should be your account. They would select the events they want to listen to; I think the most popular one is checkout.session.completed, for when someone successfully purchases something. They would select webhook endpoint and then paste that URL with your actual domain. That would go here, right? But in here, you can see it noticed that it's a localhost, and if you're using localhost, you need to use the Stripe CLI. So even if you want to use your ngrok one, you could do that, but then you have another problem.
Well, it's not really a problem, it's just hard to test. Even if you added this, so nodebase-webhook-test, it would be a bit difficult to test it. Oh, you can do send test events? Okay, maybe not too difficult. So I think if I refresh here, it should work even if I send the test event from here. Okay, it's still telling me to do it through the CLI, right? Testing the events is difficult; you would have to set up an entire Stripe app to make it work. But let me see if I can just fire this event and see if that works.
Let's wait a second.
Looks like not, again. I'm not sure if this is because of the way I did it; maybe I have to log in again. Let me check inside of the events here. This is something else, this is not it, I think.
Yeah, I think it's just confused now, because I logged in to test it locally and I didn't log in again to test it here. So, nodebase-webhook-test uses HTTPS, then my ngrok URL and the Stripe webhook path with the specific workflow ID that I'm in. That's important. And if I click send test events... let's try stripe login again; maybe I have to do it again. I'm just trying to prove that it still works, but maybe I'm missing something obvious. I mean, the code definitely works, that's not the problem. I'm just trying to bring it as close to production as possible for you. Let's see.
Yeah, I'm not sure why it's not working now. Let me see inside of the endpoint here: HTTPS, webhooks, edit destination. This should be a completely valid endpoint. And actually, a 405 is correct in this case; I should be getting a 405 because it's an invalid request method.
I'm not seeing anything new here.
Oh, because I didn't select my events. Let me try and find the event I'm actually firing: payment_intent.succeeded. Payment intent... there we go. It's such a bad search function.
Okay, fifth time's the charm. Let's try the event again. Maybe now it will work, since it's listening for the event. There we go. And you can see how it doesn't repeat now. So it looks like it only repeats in the localhost version; when it's actually using the URL, it doesn't repeat the events. Perfect. Now, obviously,
in a real world app you would have to
protect this endpoint. Now as I said I
will try to find time to add some like
basic protection for our web hooks
because they're now publicly available.
Literally anyone can uh find a workflow
ID and just trigger this endpoint.
That's not good, right? So we would have to protect it in some way. With Stripe, you can actually do it quite easily, because your users would simply have to copy the signing secret. You would have to allow them to add the signing secret somewhere here, and then in the webhook we would simply check whether that signing secret is correct. If it's not, reject the request, because it's someone other than the user who added Stripe trying to access it. But if you wanted to do it without that, a universal protector, you would have to generate a secret every time you create a workflow, and then you could reuse that kind of authentication flow for all of your webhooks. That's something I'm going to try and find time for.
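The per-workflow secret idea could be sketched like this. The names are hypothetical, and the real check would live in the webhook route:

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// Hypothetical: generated once when a workflow is created and stored with it.
function generateWorkflowSecret(): string {
  return randomBytes(32).toString("hex"); // 64 hex characters
}

// Compare the secret sent with an incoming webhook request against the
// stored one. timingSafeEqual avoids leaking information through timing.
function isAuthorized(received: string, stored: string): boolean {
  const a = Buffer.from(received);
  const b = Buffer.from(stored);
  if (a.length !== b.length) return false; // timingSafeEqual requires equal lengths
  return timingSafeEqual(a, b);
}
```

The caller would read the secret from a header or query parameter on the incoming request and reject with a 401 when isAuthorized returns false.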
And if you want to, you can also use Svix, which is something I think I've demonstrated already. It's basically webhooks as a service, and it's really cool. A lot of high-profile companies use it, as you can see, like Clerk and Brex. So if you're looking for production-level protection, you could look into Svix.
Amazing, amazing job. I think that is everything we wanted to do in this chapter. Let's see: we added the Stripe node, dialog, executor, realtime channel, and webhook, and we managed to trigger a Stripe event both through localhost and through our forwarded ngrok public domain, so we know it's going to work in production as well. So, 23-stripe-trigger. Let's go ahead and merge that. Create a new branch, 23-stripe-trigger.
I'm going to add all of the changes, commit as 23-stripe-trigger, and publish the branch. And since this was almost identical to our previous pull request, we don't have to review it again, because we literally copied the same set of files, right? So I have 15 files here: we added a new event, we added an icon, and then we copied everything we had for the Google form and repurposed it to work with Stripe. We also fixed the issue of invalid types in the executor registry. But besides that, I don't think there's anything worth reviewing here. We can just go ahead and merge this.
But what CodeRabbit told us in the previous chapter still stands: we should protect these endpoints somehow, and the same is true for this one. Once we merge it, let's go back to main and click synchronize changes. And inside of your source control tab, once it synchronizes, open the graph and confirm that you have 23-stripe-trigger here.
Great. So, I believe that marks the end
of this chapter. We pushed to GitHub. We
reviewed it technically because it's
exactly the same as the previous one.
Amazing job and see you in the next
chapter.
In this chapter, we're going to add AI
nodes to our project. So what exactly is
the difference between this chapter and
chapter 7 in which we've added AI
providers?
Well, in chapter 7, we've learned how
we're going to use AI within this
project. In this chapter, we're going to
literally create the canvas drag and
drop nodes for each of those AI
providers that we've added. So, a quick
reminder, I taught you how to add
Gemini, which offers a completely free
API key. And on top of that, I told you
that you can also use OpenAI, Anthropic
and a million other providers that AI
SDK offers. AI SDK is well, the SDK for
AI that we are using in this project.
So, just a quick reminder, let's go
ahead and revisit chapter 7 right here.
As you can see, that's exactly what I
told you. You can use Gemini, which is
free, or OpenAI or Anthropic, which will
set you back a minimum of $5. If you
want to use them, sure, but not
required. And also, a quick reminder
once you finish this project and you
actually deploy it, no one will be using
your AI API keys. All of your users will
be using their API keys. Just to clarify
that one more time. And as you can see
in here, it's also marked as finished.
We set up AI SDK and we even used it within Inngest. So now, when we start chapter 24, we already have some things added for us. Just in case, go inside of your package.json and confirm that you have the AI packages. You should have @ai-sdk/google at minimum, or OpenAI, Anthropic, or maybe Grok, whatever you wanted to use, but you should have at least one here.
Since AI SDK Google is the free one, I will mostly be focusing on it, but I will also show you how to add Anthropic and OpenAI, and I would suggest that you follow along with me and implement them as well, even if you don't have an API key. In fact, if you go inside of your environment file, you will see that I don't have an Anthropic API key; it's completely empty. That's perfectly fine, because in the end it will be our users who provide their own API keys. So just make sure that inside of your .env you have at least one API key for AI, whether that is Google Generative AI (Gemini), OpenAI, or Anthropic. Just make sure you have at least one. And inside of your package.json, make sure you have at least one AI SDK provider here, and alongside that you should also have the ai package itself.
Great. Quick reminder: you can use the link on the screen to visit aistudio.google.com. In there you have "Get API key" in the sidebar, and you can very quickly create a new one. Maybe you have to delete your old one first; I'm not sure exactly how the free tier works here, but at least one API key will be completely free, if that's what you need.
Perfect. Just make sure you have that.
And now let's get to implementing this.
So the first thing I want to do is I
want to add the images for all three of
these. So let's go ahead and prepare our
public logos folder. Then you can go
ahead and visit my nodebase assets
folder. You can go inside of images and
you should find anthropic gemini and
open AI. So let's go ahead and add all
three inside of here.
Once you add them inside of here, you should have Anthropic, Gemini, and last one, OpenAI. Great.
Now that we have them, let's add them inside of our Prisma schema. So, inside of the node type enum, let's add Anthropic, then Gemini, and then OpenAI. Of course, you
are free to name this however you
prefer, but I would highly suggest that
you follow the exact same naming as I am
doing so you don't cause yourself any
unnecessary bugs or problems. Save this
file and then as usual, let's go ahead
and do npx prisma migrate dev.
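The enum change is just three new members. Sketched here under the assumption that the enum is called NodeType; the member casing is a guess, so follow the exact naming shown on screen:

```prisma
enum NodeType {
  // ...existing members from part one
  ANTHROPIC
  GEMINI
  OPENAI
}
```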
Once you get the prompt to enter a name
feel free to add whatever you want. for
example, AI nodes schema or AI nodes
types and that will synchronize your
database with the schema. And as always
whenever you do a migration, I highly recommend restarting your Next.js server and your Inngest server. I don't think you need to restart ngrok; in fact, we are not going to need a local tunnel running for this chapter, but if you want to, you can have it running. Basically, if you plan on using your Google Forms or, my apologies, Stripe nodes, you will need a forwarded open tunnel, a local tunnel like ngrok.
Great. So just make sure that works. Go ahead and restart your Next.js app so everything is up to date. And now what
we're going to do is well the same flow
that we did before except instead of
building uh the trigger we're going to
be building an executor. So let's go
ahead and let's copy the HTTP request
inside of source features executions
components. Let's go ahead and copy HTTP
request and paste it here. And let's
rename it to Gemini.
And while we are here, let's also immediately do the same inside of the inngest folder: in channels, copy the HTTP request channel, paste it, and rename it Gemini. So now we have the realtime channel for Gemini. I'm going to change this to the Gemini channel name, and this will be called gemini-execution. Everything else stays the same, except of course the name of the variable right here, geminiChannel. Once you have created the Gemini channel, immediately go inside of inngest/functions.ts and register the new Gemini channel, so you don't forget to do that.
Perfect.
Now we can go ahead and focus back
inside of the Gemini folder which we've
copied. So let's head inside of
node.tsx.
And basically in here, yes, we again
have the data because we will need some
data here. What I would suggest is not
changing anything inside. Instead, let's
just focus on renaming this so it's
easier to refactor it later. Gemini node
data. This will be
Gemini node type using the Gemini node
data from above. And then use the Gemini
node type in here. Let's rename this to
Gemini node.
And let's leave this as is for now. No
need to change it. Let's just focus on
the base execution node and the name
here. So for the name, I'm just going to
set it to be Gemini. And for the icon
I'm going to go ahead and use /logos/gemini.svg.
And let's fix this: GeminiNode.displayName, set to the Gemini node name. And let me just quickly check inside of our triggers: do we even have that display name? Looks like we don't. Okay, I was just worried that I forgot to rename it or something. If we don't have it and we don't have any errors, I think everything is fine.
Now let's add Gemini to our node components factory, if I can call it that. Inside of source, config, node components, let's add the Gemini node type mapped to GeminiNode; you should be able to import it from @/features/executions/components/gemini/node. Perfect. And now we have to go to our node selector, inside of source, components, node selector. Let's go inside of our execution nodes, duplicate this, and add the Gemini type. Now, for the Gemini type you can write any description you want, really. I'm going to use a super simple one: "Use Google Gemini to generate text." The label is Gemini, and the icon is going to be /logos/gemini.svg.
Perfect. And I think that this already
should be able to show you Gemini and
you should be able to add Gemini. The
only problem is the dialogue of course
uses the HTTP request configuration. So
that's what we're going to be working
on. Now let's go ahead and start by
changing this from the HTTP request dialog to the Gemini dialog. So go inside of the Gemini folder, dialog.tsx.
Again, don't focus on the schema; I'll leave that as is. Oh yes, I also have to address this to-do. Basically, I forgot this. The reason I thought about adding it to the body property of the HTTP request node is because we now have templating, so I thought it would be a good idea to validate whether the JSON is valid or not. I don't know, maybe we don't even need it. It kind of depends on the user experience you want to add, right?
Nevertheless, let's focus on the Gemini
folder now. Dialogue component. And
let's start by renaming the HTTP request
form values to Gemini form values.
Change this to use the Gemini form
values as well. Change this to be Gemini
dialogue. And I think inside we should
not have any more errors. Let's just
change this from the title of HTTP
request to be Gemini like that. And for
the description
um
we can just say
configure the AI model and the prompts
for this node. And let's do Gemini
configuration.
Now let's go back inside of Gemini
folder node.tsx
and add Gemini node here.
Let's go ahead and just import.
So, let's import Gemini... my apologies, not GeminiNode: GeminiDialog and GeminiFormValues. I was doing something incorrect here. This shouldn't be GeminiNode, this should be GeminiDialog, of course. And this shouldn't be HttpRequestFormValues; this should be GeminiFormValues. And now you should have practically solved all of your errors. You can remove the unused globe icon.
And now you can see that the dialogue
says Gemini configuration configure the
AI model and the prompts for this node.
Perfect.
Let's go ahead and continue developing
this dialogue right here. So, uh I want
to start by modifying the form schema.
Let's go ahead and first define all the
available models. So, unfortunately, I didn't find a type-safe way to do this; I just extracted the values that work. So this might be different for you depending on which AI SDK version you're using. Just a quick reminder: I'm using @ai-sdk/google 2.0.17 and ai 5.0.60.
So at the time of making this tutorial
these are the available models.
So even if you're not sure, you can
write it exactly like this. And I'm
going to show you how you can see which
models are available.
So the variable name can actually stay the same, because every execution node should have the variable. Now let's change the endpoint to be the model, and the model will not be a string; it's going to be an enum, so let's simply pass the available models inside. Then let's add the system prompt, which will be a string and completely optional. And the last one is the user prompt: this will be a string, but it will be required, so let's use a shorthand error message, "User prompt is required."
There we go. That's our form schema.
So now, in the default values, we obviously should reflect that: the model should use defaultValues.model, or fall back to the first entry of the available models array. The system prompt should use defaultValues.systemPrompt or an empty string, and the user prompt should use defaultValues.userPrompt or an empty string as well. Then you can copy this entire thing and do the same inside of form.reset right here.
Now, let's get rid of some things we don't need. We do not need any of these; no need for any watch methods here. So, to make this a bit simpler, you can remove this. No need to dynamically change the description anymore. Oh, actually, wait, that was a cool feature, wasn't it? Maybe we can leave that. So, keep the watch for the variable name. Sorry, I just told you to remove it; just remove the other ones, because this one is cool.
And inside of the description then, instead of this, it will most likely be... I'm not exactly sure what the response will look like, so I'm kind of guessing right now. Maybe it will just be text; I'm not sure. We'll see once we get the Inngest output. Now, for this form field,
uh let's go ahead and remove it
entirely. We're not going to need it.
For the endpoint here, let's remove that as well, and also remove the dynamic conditional part of the form field which controlled what was the request body field in the HTTP request. Just make it always available. The reason I'm keeping only this one is because it uses a textarea, so it's the smallest amount of code we have to modify. So let's just change this to be called the system prompt, like this.
And let's go ahead and do system prompt
optional
to let the user know they don't have to
do this. And then let's go ahead and
just give a little placeholder here. For
example, you are a helpful assistant.
And let's make this smaller like 80
pixels. And the form description will be
something useful.
for example, "Sets the behavior of the assistant. Use variables for simple values, or the json helper to stringify objects." Again, this is just instructions for your users. You can already take a look at this: if you go inside of the Gemini node and open it, we have the variable name, and here we have the system prompt, and this is how that description ends up rendering. Now let's copy and paste that and do the same thing for the user prompt. User prompt, like this. Remove the optional part here, and change this to user.
And in here, in the placeholder, you can do something useful like "Summarize this text", and then maybe use the json helper to let the user know they can use values inside. There we go; that's how the placeholder looks. Let's increase this one a bit, since it's more useful. And in the form description, again, set something useful: "The prompt to send to the AI. Use variables for simple values, or the json helper to stringify objects."
Great. So, just one more field left here
for now. Later we're also going to have
a credential dropdown so users can add
their own credentials but we're not
going to implement that now. Uh what we
need to do now is we need to copy this
form field.
We need to paste it here and we need to
modify the model. Right?
Let's go ahead and add the form item.
The form label will be model. And let's
go ahead and let's remove the entire uh
form control here
because what we're actually going to do
is we're going to use the select
component. I think we should already
have select imported here. So if you
don't make sure you have select select
content item trigger and value from
components UI select.
So for the select here
let's go ahead and add two values on
value change and default value. Now
inside of the select, we can add the
actual form control and inside a very
simple select trigger with full width
class name and select value with
placeholder select a model. Outside of
the form control, let's render the
select content. And inside of select
content, we're going to iterate over our
constant available models do map model
and render it in a select item with a
key of model and the value of the same
thing. And of course, render the actual
model name inside. For the form
description, we can simplify it even
further. the Google Gemini model to use
for this completion.
So let's go ahead and check it out now.
There we go. I'm just going to zoom out
a little bit. So this is how it looks
like. Users can now specify the exact Gemini model they want to use. They can name their variable, add a system prompt, and add a user prompt. Amazing. One thing I want to do
is I want to move the variable field to
the top just to be consistent because I
feel like that is one of the most
important values to have really. And
let's make uh let's make it familiar for
our user to always expect it on the top
of every configuration form. Where did I
move it? Oh, here it is. Variable name.
Did I do this correctly?
Let me go ahead and add Gemini again.
Open it up.
Variable name. Perfect. That's empty.
And in here I have the dropdown. Then I
have the system prompt. And then I have
the user prompt. Perfect. Now let's go
ahead back inside of node.tsx.
And in here, we should modify the description, right? Because it makes no sense for it to do this. So instead of rendering node data.method, it should render node data.model.
The problem is that model is not defined. That's because we have to define it right here. So let's remove all of these and replace them with an optional model, an optional system prompt, and an optional user prompt. Now we can go back in here and modify the description. So node data will be props.data, and then the description will be: if we have node data.userPrompt, render the model that the user selected, or fall back to this one, which is the first one in the array, I believe. Yes. If you want to, you can also export const availableModels and then use availableModels[0]; just make sure to import availableModels from ./dialog.
And in here, let's quickly attempt to
show the user prompt, but let's make
sure to limit how long it's going to be.
So I'm just going to take node data.userPrompt, slice it to 50 characters followed by three dots, or fall back to "Not configured".
So if you if I close this now it should
say not configured. But if I go ahead
and add a user prompt and click save.
There we go. Gemini 1.5 flash and the
user prompt right here. Perfect. So I
believe the UI part is pretty much
finished at this point.
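That description fallback can be sketched as a tiny helper. The function name is made up; in the node it's just an inline expression:

```typescript
// Illustrative: show up to 50 characters of the configured user prompt,
// or a fallback label when nothing is configured yet.
function nodeDescription(userPrompt?: string): string {
  if (!userPrompt) return "Not configured";
  return userPrompt.length > 50 ? `${userPrompt.slice(0, 50)}...` : userPrompt;
}
```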
except we have to fix the Gemini dialog. Let's go back inside of the Gemini dialog here and see what exactly the problem is. Default values uses Partial of the Gemini form values here, but the Gemini node data is not assignable to that. So I have some kind of problem here. Did I forget something? Oh, variable name. I think that's one, variable name. Is that the problem? It is not the problem. Let me go ahead and check node.tsx
in the HTTP request. I want to see how
that looks like. So, variable name
endpoint, method, and body. Hm. It's probably because the model needs to be a... how do I do this? Typeof, keyof? And is it available? Not 100% sure. And then... hm. Looks like that is still not working inside of the Gemini dialog here. What if I just make this required? Does that work? No. All right, I'm going to debug just a little bit and then I'll tell you the conclusion. All right, so I didn't really find an elegant solution for this, but one easy way of doing it... well, two ways. One I really don't like: using any, which resolves the type error, right?
Uh, but another way of doing it is if
you literally copy what it expects
like this. That's not really the
greatest of solutions, but yeah, that
also fixes the issue.
I think the problem is that in the dialog here, I'm using this as a constant when I maybe should be using it as an enum. That's why, inside node.tsx, this isn't understood correctly. This is not terribly important, really. Feel free to use any if it saves you some time; I will try to find a more elegant solution for this, but right now I really want to focus on the executor and making this Gemini node actually work. So, whichever one you prefer: you can hover over it right here, I think in the default values, and there we go, you can see exactly what the model wants.
And then just add that here, and add undefined... let me see if I need it. Looks like you don't need undefined, because the question mark already allows it. Yes, if you know TypeScript better than me, which is quite possible, you can maybe try to do it with some availableModels-based type. But you can see that when I try, probably because of the readonly... yeah, it turns it into an array type. Makes sense, but that's just not what we expect here. So yes, whichever one you prefer. If you just want to use any, that's fine for now. I'm going to find a way to transform this into some enum, and then I'll be able to use it like this via an import.
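For what it's worth, one common TypeScript pattern for this is an as-const array with a derived union type, sketched here with placeholder model names (check your AI SDK version for the real list):

```typescript
// Declaring the list `as const` keeps the literal string types, so a
// union can be derived from it instead of falling back to `any`.
const AVAILABLE_MODELS = ["gemini-1.5-flash", "gemini-1.5-pro"] as const;

type GeminiModel = (typeof AVAILABLE_MODELS)[number];
// GeminiModel = "gemini-1.5-flash" | "gemini-1.5-pro"

// Default values can now be typed precisely, with the first entry as fallback:
const defaultModel: GeminiModel = AVAILABLE_MODELS[0];
```

Note that the readonly tuple produced by `as const` is exactly what triggered the array-type complaint above, so a schema consuming this list may need to spread or widen it depending on the validation library's signature.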
Let's go ahead and just focus on what we
need to do next for now. Great. So the
UI part is now done. Now what I want to
do is I want to go inside of source
features.
Let's go ahead and find our Gemini and
let's go inside of the executor right
here. So the executor is of course where
the magic happens.
Yeah, we can register the helper here; we can make it exactly the same as before. But let's just start renaming things. So this will no longer be the HTTP request data; this is now going to be the Gemini data. We're going to have a variable name, a model, a system prompt (let me just make this a string), and a user prompt. Now let's add the Gemini data right here, and rename this executor to be the Gemini executor. There we go. Instead of publishing to the HTTP request channel, let's change this import to use Gemini and the Gemini channel. So, start here: Gemini channel, status is now loading. Perfect.
Now let's try to generate a system prompt. You can actually remove the entire try and catch here, so it doesn't confuse you, and remove the extra bracket. I'm going to do const systemPrompt = Handlebars.compile(data.systemPrompt), pass in the context, or fall back to "You are a helpful assistant." Now, I'm noticing I have a typo here which is actually quite important, so let me fix it: systemPrompt. There we go.
Now, let's add the user prompt here as well: Handlebars.compile(data.userPrompt), passing the context. Great. So now we have template variables for both our system prompt and our user prompt. Now I'm going to add a to-do: fetch the credential that the user selected. We currently don't have this.
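The interpolation Handlebars does for us here can be mimicked with a tiny stand-in. The real code uses Handlebars.compile; this regex version is only to show what simple {{variable}} substitution against the execution context roughly produces:

```typescript
// Stand-in for Handlebars-style {{variable}} interpolation, supporting
// dotted paths like {{httpRequest.status}}. Not the library itself.
function interpolate(template: string, context: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match, path: string) => {
    // Walk the dotted path through the context object.
    const value = path
      .split(".")
      .reduce<unknown>((acc, key) => (acc as Record<string, unknown> | undefined)?.[key], context);
    return value === undefined ? "" : String(value);
  });
}
```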
So the only thing we can do is use the keys from our environment file. That's why I made sure at the beginning of this chapter that you have that, and all the other necessary things installed. Now let's remove ky entirely. We can leave NonRetriableError, because we are definitely going to use it. But for now, let's add createGoogleGenerativeAI from @ai-sdk/google.
Perfect. So now that we have the system prompt and the user prompt, and we pretend to fetch a credential, let's actually create the Google instance using createGoogleGenerativeAI. Open an object inside, and make the API key... well, let's do it like this: const credentialValue, and for now you can use process.env with whichever key you have. Since we are doing Gemini right now, you should use the Gemini one, and just pass it here. So why am I doing it in this super weird way? I'm preparing the code for later, when we are actually going to fetch the credential value and pass it here. At that point we are no longer going to be using our own API keys, nor are we going to need any AI API keys inside of our environment file. But just for now, to test this and see if it works, we need to do it this way. Now we can open our
try and catch. Inside of try, let's immediately destructure steps from await step.ai.wrap, with "gemini-generate-text" as the step name, and then pass the second parameter. I'm just going to collapse these so it's easier to look at: generateText, which we don't have imported yet, and then open the options. generateText can be imported from the global ai package, so make sure you have added that.
And now
let's go ahead and define the settings. First will be the model. Which model are we going to use? So we are using Gemini, and then we're going to use data.model, or here are all the options. So this
package is type safe as you learned in
chapter 7. And you can see that there
are actually way more of them here than
what I added to my uh available models
list of course. Uh but I kind of did it
in a quick and dirty way. Uh I would
highly suggest exploring how you can
extract these exact types here. If I in
the meantime manage to do it myself, I
will of course update the code to
reflect that change. But yes, in here
you can see all of these that exist for
your version. So using that you can also
go back to the dialogue inside of our
newly created Gemini folder and you can
see if all of these exist here, right?
If they do, all good. So just you know
compare do they exist and you can fall
back to 1.5 Flash. Now let's go ahead and add system to use the system prompt, prompt to use the user prompt, and experimental_telemetry with isEnabled set to true, recordInputs set to true, and recordOutputs set to true.
This will give Sentry access to report telemetry on our AI models, which will help us greatly to see exactly which costs occur and which models take the longest, and then you will be able to recommend changes to your users based on that.
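Put together, the options we hand to generateText look roughly like this. Sketched as a plain object: the model and prompt strings below are placeholder values, and in the real executor `model` is the instance returned by the createGoogleGenerativeAI factory, not a raw string:

```typescript
// Sketch of the generateText options shape; the values are placeholders.
const generateTextOptions = {
  model: "gemini-1.5-flash",         // really google("...") from createGoogleGenerativeAI
  system: "You are a mathematician", // compiled system prompt
  prompt: "What is 2 + 2?",          // compiled user prompt
  experimental_telemetry: {
    isEnabled: true,     // lets Sentry record this LLM call
    recordInputs: true,  // include the prompts in the telemetry
    recordOutputs: true, // include the completion in the telemetry
  },
};
```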
And now once we run this, we will be able to extract the text from steps, first in the array, content, first in the array. Check if the type is text, and if it is, go ahead and use steps[0].content[0].text, or simply fall back to an empty string.
So let me collapse this so it looks
nicer. There we go.
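In other words, the extraction is just defensive indexing into the steps array. A minimal sketch with simplified stand-in types (the real shapes come from the AI SDK):

```typescript
// Simplified stand-ins for the AI SDK's step/content result types.
type ContentPart = { type: string; text?: string };
type Step = { content: ContentPart[] };

// Pull the generated text out of the first step's first content part,
// falling back to an empty string for anything that isn't a text part.
function extractText(steps: Step[]): string {
  const part = steps[0]?.content[0];
  return part?.type === "text" ? part.text ?? "" : "";
}

const steps: Step[] = [{ content: [{ type: "text", text: "2 + 2 = 4" }] }];
// extractText(steps) === "2 + 2 = 4"; extractText([]) === ""
```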
And now let's go ahead and publish an
event. So right here
publish Gemini channel status node ID
and status of success. We successfully
executed the AI model. And now let's go
ahead and let's return right here. So
I'm not sure how you want to do this
but obviously spread the context. And then let's do data.variableName. And you can either directly do text, or you can do aiResponse and then do the text inside, however you prefer.
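So the return value is just the incoming context with one new key. Sketched out ("myGemini" stands for whatever the user typed as the variable name):

```typescript
type Context = Record<string, unknown>;

// Merge the AI output into the workflow context under the user-chosen
// variable name, so downstream nodes can reference it in their templates.
function mergeAiResult(context: Context, variableName: string, text: string): Context {
  return { ...context, [variableName]: { text } };
}

const next = mergeAiResult({ httpResponse: { status: 200 } }, "myGemini", "2 + 2 = 4");
// (next.myGemini as { text: string }).text === "2 + 2 = 4"
// next.httpResponse is still present for later nodes
```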
Obviously we have this error here. We
will take care of that. But let's just
quickly take care of the error. So go
ahead and catch the error and then all
you have to do is publish that error
status and throw the error. Now let's go
ahead and resolve this. So we already
learned how to resolve this. The problem
uh the trick is to do it inside of the
function right here.
So step.ai.wrap.
Oh actually I think we might be able to
do it in an easier way. So let me just
check. After loading, I am immediately
going to check if there is no data
variable name await publish
gemini channel dot status node id status
of
error
and let me properly wrap this.
There we go.
and then throw new NonRetriableError: Gemini node variable name is missing.
And immediately you can see: if we throw an error here when data.variableName is missing, this will no longer yell at us, right? So we fixed this. Why did it work so simply this time? The reason it worked is that all of this is in one single scope. But usually we don't do this; what we do is step.run.
So step.run then opens another function
which is a whole new scope for
TypeScript. So it cannot do its flow
control properly because uh technically
data which we look for right here could
have been modified inside of that scope.
So that's why since this is kind of a
simpler example, we don't have to worry
about that.
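The guard-clause pattern itself looks like this. NonRetriableError here is a local stand-in for Inngest's class of the same name, and the narrowing works because the checks and the later reads share one scope:

```typescript
// Local stand-in for Inngest's NonRetriableError class.
class NonRetriableError extends Error {}

type GeminiNodeData = { variableName?: string; userPrompt?: string };

// Because the throws happen in the same scope as the later reads,
// TypeScript narrows both fields from string | undefined to string.
function validateGeminiData(data: GeminiNodeData): { variableName: string; userPrompt: string } {
  if (!data.variableName) {
    throw new NonRetriableError("Gemini node: variable name is missing");
  }
  if (!data.userPrompt) {
    throw new NonRetriableError("Gemini node: user prompt is missing");
  }
  return { variableName: data.variableName, userPrompt: data.userPrompt };
}
```

In the real executor we also publish the error status on the channel before throwing.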
Let's also go ahead and do: if no data.userPrompt, because that is another thing we consider required, let's go ahead and throw "Gemini node user prompt is missing."
All right. And later I'm going to add
to-do throw if credential is missing.
Great.
Now that we have this ready
let's see. Did we miss anything? I think
uh this is okay. And I need a quick
reminder. I'm just going to take a peek
at HTTP request executor.
Uh, all right. So we ended up making all of this optional, right?
Yeah. And still, this model thing is... yeah, I really dislike how I handled this. I should have made it a type of string everywhere or something. Technically, maybe I could do that. Maybe I could just make this a z.string() like this, inside of the form schema, inside of the Gemini folder's dialog.tsx.
Let's just go ahead and make it
required. Model is required.
And then this way, if you go inside of node.tsx here, you can just simplify this to be an optional string.
And as you can see it still works. The
default values now handles this
properly.
Great. This is a kind of a dirty fix for
now. So we are only using available
models for one thing and one thing only
and that is to
render the select items in a loop here.
All right. Now that we have the
executor, we have to add the executor to
our executions lib executor registry.
Let's go ahead and add node type Gemini
Gemini executor.
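Conceptually, the registry is just a lookup table from node type to executor function; the names and signatures below are simplified stand-ins for the real ones in the executions lib:

```typescript
type Context = Record<string, unknown>;
// Simplified executor signature; the real ones are async and receive more
// arguments (node data, step tooling, publish helpers).
type NodeExecutor = (context: Context) => Context;

const geminiExecutor: NodeExecutor = (ctx) => ({ ...ctx, myGemini: { text: "..." } });
const httpRequestExecutor: NodeExecutor = (ctx) => ({ ...ctx });

// The registry maps each node type to its executor so the engine can
// dispatch without a switch statement.
const executorRegistry: Record<string, NodeExecutor> = {
  GEMINI: geminiExecutor,
  HTTP_REQUEST: httpRequestExecutor,
  // to-do: OPENAI and ANTHROPIC get wired up the same way later
};

const result = executorRegistry["GEMINI"]({ start: true });
// result now carries both the original context and the Gemini output
```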
Obviously, we have some errors here
because we don't have the anthropic nor
OpenAI ones. If you want an easy fix
you can just repeat the same for
anthropic or for Open AI. Just make sure
to add to-do
fix later and to-do fix later. So you
remember there that these are not valid.
All right, I think there are still a couple of things we need to do. Inside of node.tsx, we are still using the HTTP request channel name here. So let's quickly go inside of Gemini's actions.ts. Let's rename all three instances from HTTP request to Gemini: Gemini, Gemini token, and fetch Gemini realtime token. Change the import here to use inngest channels Gemini and the Gemini channel, and change the two instances to use Gemini channel. That's it. Now go back inside of node, change this to be Gemini channel name,
fetch Gemini
realtime token
and I think everything else should be
fine. Remove fetch HTTP request realtime
token and remove HTTP request channel
name. And as always you can go ahead and
rightclick click find in folder and
search for HTTP. This is okay. This is
one example where it's okay because we
are using it as an example of the
previous node information that you can
do. So I think everything should be
fine. So how do we test this in the
easiest way possible? Well, let's use a
manual trigger like this. Let's connect
the two. Let's click save.
Let's go ahead and open Gemini. I'm
going to call this Gemini.
I will select 1.5 Flash. You are a mathematician.
User prompt. What is 2 + 2? And click
save. Click save again.
Let's go ahead and prepare our localhost
8288.
So in here we can see our workflows and
let's click execute workflow and let's
see if we did this correctly. Uh
something is happening here. This never
I'm not sure if is it our realtime
connection that's failing or something
else. Uh
Gemini 1.5 flash is not found for API
version v1 beta.
Oh okay.
All right, let's see. Let me try one thing here, inside of executions components Gemini executor:
let me try and just um not even
listening to the user
input. Can I just like choose the one
that's being offered here?
Does that work? Can I refresh here? Is
all of this good? Looks good. Can I just
execute workflow now?
Okay. Oh, this time the uh channel
worked, but it still failed.
Again, Gemini 1.5 flash is not found for
API version v1 beta. Call list models to
see the list of available models and
their supported methods.
So, it could be that in the middle of my
tutorial, Google AI Studio received some
updates. Maybe the API tokens are new.
So, I'm not even sure myself which
version I can use now.
It shouldn't be too hard to fix really.
I'm just really not sure which version I
can use. Maybe 2.0 Flash. Can I try
that?
Maybe they've added some limits to their
tiers.
Honestly, I have no idea.
Yes. So if you change to 2.0
it seems to work just fine.
Let me just confirm. The result is
stored inside of Gemini.aiResponse: 2 +
2 equals 4. Amazing. So it officially
works. But yeah, something is a little
bit weird here with Gemini 2.0
options.
Yeah, here's what I might
or might not do. We can like fall back
to one that is working and inside of our
dialogue here, maybe just hide the
select option. I don't know. I'm not
even sure myself. Now
one thing is for sure, this is a
horrible way to offer users which models
they can choose because we just, you
know, demonstrated how quickly that can
go wrong if you forget to update. Uh, so
we should definitely find a way to
synchronize that. But the problem is
even in this kind of AI SDK Google
version itself or maybe just my API key.
I'm not sure but something here is not
working as it should because it offers
me 1.5 flash but when I try to use it it
fails. It also offers me 2.0 flash and
when I try using it it works.
It could also be a bug within Google.
AI. I'm not sure. So, just try some of
these models until they work.
You can fall back to data.model
here if you want to. Just make sure that
you then add this option
inside
like that. Again, not sure how this is
supposed to be uh working since, you
know, type safety is telling us that we
can use one model, but obviously we
cannot.
Uh so, okay. Yeah, let's go ahead and
leave it like this for now. Uh and yeah
this is basically how you add uh AI
nodes. We can now do the exact same
thing line for line for all other ones
that we need: Anthropic, OpenAI, or a billion others that the AI SDK offers.
I just did a quick research if there is
a way to reliably display the models
that are available and I couldn't find a
way to just read the types coming from
the SDK package. So for now this is what
I want to do. This is obviously broken
and I cannot in good faith recommend
that you write this code. So go ahead
inside of dialogue for your new Gemini
folder, remove available models
entirely.
Remove the model from the form schema
and remove it from here too.
Go ahead and remove it from the form
reset. And then go ahead and remove the
entire form field for select. So I just
cannot in good faith tell you to do that
because it's broken. We're going to go
ahead and do it a much simpler way. And
then maybe later in the tutorial, if I
find a reliable way of doing this, I
will teach you how to do it. But again
I cannot in good faith tell you to do
this because it's obviously broken.
So, let's go ahead inside of... where do we go now? node.tsx:
Remove available models, remove the model from Gemini node data, and for the description, you can just use the one model, for example inside of executor.ts, that you found works for you. Hardcode what your users are going to use for now, because this is obviously not working reliably, so we have to hardcode, inside of executor.ts, something that will always work for us. And that's why we can safely display
And that's why we can safely display
that here. So the user knows exactly
which one we are going to use on their
behalf. They will not be able to select
the model because I'm really not
satisfied with the way I've developed it
right now.
So in the executor.ts, it is very important that you remove data.model here and just fall back to Gemini 2.0 Flash
or whatever one works for you. Just try
to get it working. also remove the model
from Gemini data here entirely. So we
now no longer need that. Great. So code
should actually be simpler now.
Everything should be simpler now. There
we go. We just have variable name
system prompt, and user prompt. And in
here, we hardcoded exactly which one we
are using because that is the one that's
working for us. So just for fun, I'm
going to remove this. I'm going to add a
new one right here. I'm going to connect
it and oh, so this still says my API
call. Let's go inside of dialogue and
instead of calling it my API call, my
Gemini,
I don't know.
And do I use my API call anywhere else?
I do. Let's change this to my Gemini.
Something to indicate to the user like
hey, this is uh your variable. Like you
can do whatever you want. And then let's
fix this. So this should be AI response
because that is exactly what we do in
the executor, right? We return this
within AI response option.
Uh, actually, we do not. I'm not sure what is the best way of doing this. Should I just do... yeah, we can just do AI response.
I think
you can of course change this to
whatever you think is a better user
experience. It's you know you're free to
modify this however uh you want. Let's
uh
hm, let's just keep it as text. So Gemini.text. I think that is like the simplest possible one. You cannot go wrong with this. My Gemini.text. Perfect. So, I'm going to go ahead and change this to be my Gemini 2, just to see if this works. My Gemini 2.text.
Uh, how about we try something fun? So
yes, let's call this uh my Gemini 2. You
return only
uh a popular
HTTP
popular uh get AP icon
test for fetch
get app URL endpoint
free API URL endpoint.
uh
give me an endpoint to list a to-do by
ID of one. So hopefully it will know
what I mean. We'll see. Maybe it will be
a a complete failure. Maybe it will
work. And then let's try an HTTP request
here.
So this will be my request, GET method. And can I just do a response.text here? Gemini.text.
Let's click save.
I have no idea how this will work. Let's
click execute workflow. I'm really
interested. Maybe it will be a complete
failure. Maybe it will work. Uh failed.
Okay. Fail to parse URL. Let's see what
did Gemini respond.
It actually did quite well, but it added
some extra text. That's the problem. So
I'm just going to kind of copy what I
expect
and see if I can make it.
Let's say that
no formatting.
Example,
no new lines, nothing. just URL
and let's change this ID of two. So, I'm
just playing around to see if I can make
this work because I really want to see
this workflow do something. Don't worry
I'm just going to do one more attempt
and then I'm going to show you how to
add... Okay, failed. Looks like, depending on the model, some of them don't listen to instructions.
Yes, it still added new line slash at
the end. Okay. If it didn't add that, it
would have worked. So, now that we have this simplified model with no select option, let's go ahead and do the
exact same thing but for OpenAI and
Anthropic. So, we are going to start by
going inside of our features
executions, components, Gemini. Copy and
paste it. Rename it to Open AI. Go
inside of the OpenAI folder. Go inside
of node.tsx.
Change this to open
AI
node data. Use open AI node data. Here
change this to open AI node type. Use
open AI node type. And rename this to
open AI node. Perfect. Go all the way
down and change this to open AI node
display name. and change the actual
display name to open AI node. And you
should no longer have any errors here.
Great.
Now, let's go ahead and rename the logo to openai.svg and OpenAI for the name. Perfect. Now, let's go ahead
inside of our node components in the
source config.
Let's go ahead and duplicate the Gemini
one. Add open AI
and import open AI node. Then go inside
of the node selector, inside of source components node selector. Go ahead and duplicate Gemini, go ahead and add OpenAI, change the label to OpenAI, "Uses OpenAI to generate text," and use openai.svg.
There we go. Open AI node. Now, let's go
ahead and change the dialogue, add the
channel, and everything else we need.
So, I'm going to go inside of features
open AI dialogue.
Everything will stay exactly the same
here. I'm going to change this from
Gemini
to Open AI. So, open AI form values.
Open AI dialogue, but everything else
can really stay the same. Let's just for
fun change this to my open AI.
This will be open AI configuration.
Description can stay exactly the same.
I don't think we have to modify anything
here.
All of this is good enough. Inside of node.tsx, let's make sure to now import open AI
dialogue
open AI form values.
Just double check that you are doing
this instead of open AI folder so you
don't accidentally change your Gemini
files.
Now let's go ahead and add open AI form
values right here.
Open AI dialogue. And I think
automatically it should just work out of
the box, because they are exactly the same: they both use the AI SDK, which follows the same API. We just did some slight modifications to tailor it to OpenAI.
Now that we have that configured, let's go ahead inside of the OpenAI executor.ts.
Change this from Gemini data to Open AI
data. and change this finally to be open
AI executor. We cannot change the
channel yet because we didn't implement
it. Let's change this to be a warning of
open AI node.
Let's go ahead and change this to open
AI node.
Let's go ahead and for now use your open
AI API key. If you don't have it, just
put an empty string like I have for
Enthropic. There we go. And let's do
create open AI. I have no idea uh if
that is the correct one.
I'm trying to find I think it is. Yes.
Create open
AI like this
from AI SDK. Open AI. You should also
have that installed. So, AI SDK
open AI. Make sure you add that
because again, even if you don't have
the API keys for this, your users might.
And later, we're going to allow your
users to add any API keys they want.
And instead of this being a Google, this
will be open AI. Whoops. Open AI.
And let's change this to open AI
generate text. And then just go ahead
and change this to
I honestly have no idea which one; I am really not up to date. I guess GPT-4, I don't know. And then you can copy GPT-4 here. Go inside of the OpenAI dialog... my apologies, the OpenAI node, and change the description here to let the user know we are using GPT-4 as default, because we removed the option to select the model,
and I think that's it for the executor.
Let me just check. So we are using the OpenAI API key, which will later be something
else. The API is exactly the same. We
just have to create the channel.
So I'm just going to collapse everything
and I'm going to go inside of inngest channels. Copy and paste the Gemini channel.
Change this to Open AI.
Go ahead and change this to be open AI
channel name. Open AI execution. Open.
Oops. Open
AI channel.
There we go. As simple as this. Then
let's go inside of source inngest functions.ts.
Add open AI channel.
Execute it. Make sure you have imported
open AI channel. Perfect.
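The three AI channels end up identical except for their name, which is why copy-paste works here. As a rough sketch of the pattern (the real files build these with the channel helpers from @inngest/realtime, not a plain object, and the channel name strings here are assumptions):

```typescript
// Rough sketch of the per-provider channel pattern; only the name differs.
type NodeStatus = "loading" | "success" | "error";

function makeAiChannel(name: string) {
  return {
    name, // e.g. "openai-execution" (assumed naming)
    // Shape of the status events we publish while a node executes.
    statusEvent: (nodeId: string, status: NodeStatus) => ({ nodeId, status }),
  };
}

const openAiChannel = makeAiChannel("openai-execution");
const geminiChannel = makeAiChannel("gemini-execution");
```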
Now that we have that, let's go inside
of components. My apologies. Inside of
features executions components open
AI. Let's go inside of actions.
Change this from Gemini token to open AI
token.
Go ahead and import open AI channel from
channels open AI.
And do we need to modify anything? Yes.
Instead of fetch Gemini realtime token
it's going to be fetch open AI realtime
token.
Perfect.
Now we can go inside of open AI node
we can do fetch open AI realtime token.
And this will now be open AI channel
name from inngest channels open AI. We can
use that here and change this to fetch
open AI realtime token. And now we
should have uh real time synchronization
with open AI.
Let's go ahead and go back inside of
Open AI executor. And let's go ahead and
change the import from Gemini to Open AI
and replace all instances of Gemini
channel with Open AI channel. There we
go. No errors anywhere in our code. To
double check if we did this correctly
right click on the open AI folder, find
in folder, and search for Gemini. Looks
like there is one thing left the
placeholder. So this is inside open AI
dialogue. Change the placeholder to be
my open AI
like this.
There we go. So how about we just
quickly try this? I'm just going to
simplify this. I'm going to add open AI.
I'm going to connect the two. Even if
you don't have an API key, check if it
will fail. It's going to be fun. My open
AI, you are a mathematician.
What is 2 + 2? Click save. Let's click
save right here. Uh, one thing we forgot
to do. Executor registry.
Change open AI to Open AI
executor.
Looks like it is capitalized. I don't like that. So, I'm going to go inside of components, OpenAI executor, and I will just make sure it's lowercase like all of my other ones. Then I can replace
these two instances. I can then also
remove the to-do for Open AI.
So, let's go ahead and try this again.
I will click save right here.
I'm going to refresh just in case. I
will click execute workflow.
This works.
And this will actually depend on my API
key, but it works perfect. Yours might
fail if you don't have an API key, which
is perfectly expected. So, don't worry.
I just wanted to test if uh the model
works. So, if for whatever reason GPT-4
is not working for you, uh you have a
list of available ones. Well, apparently
available ones because we just saw with
Google that was not the case. So just
you know add a string and you will see a
list of all available ones. So just
select one and try until it works.
Perfect. So that was for open AI and one
more left and we are done with this
chapter. I know it's a lot of repeated
work but I want to make sure you see
every single line of code that I write.
It's kind of the gist with my tutorials.
Uh, I don't really skip anything. I show
you every single line of the code. So
let's go ahead and copy open AI, paste
it inside of components in the
executions folder, and rename it to
anthropic.
If it asks you to update imports, you
can select yes. It's the simple cache
thing. It might open your next folder.
Just save that file, close it, and close
this folder. Don't worry about it. Let's
go inside of anthropic, inside of node.tsx. Let's start with renaming things: instead of open AI, everything will be anthropic. So anthropic node data, anthropic node type, anthropic node, and use anthropic node type here. And then we have to fix this: "cannot find the name anthropic"... anthropic node. Yeah, there we go. Perfect.
So now that we have this, let's go ahead
and modify base execution node right
here
to use logos, anthropic.svg, Anthropic, like this.
Uh oh, looks like I somehow overwrote my OpenAI.
So, don't do the same mistake I did.
Yes, I did something very incorrect
here.
Okay. A lot of mistakes now. Luckily, I
don't think I went too far. Uh, okay.
What I did accidentally, I somehow
deleted my Open AI folder and I seem to
have renamed it to Anthropic. So, now I
just reverted it back to Open AI. So
you probably don't have to do this, but
I have to.
I just have to fix this. Okay, I'm so sorry. Again: copy the OpenAI folder, paste it inside of components, rename it to anthropic. There we go. Okay, so sorry
about that one. Uh, and now again, same
thing I just previously did. So, I'm
going to replace these instances of Open
AI through anthropic.
And then I'm going to go down here and change this to anthropic node.
There we go. Okay, that's what I wanted
to do. Uh, and then in here, change this to anthropic.
I think I accidentally also changed something inside of my OpenAI folder: inside of node, I changed this to anthropic, and this should be open AI. So, sorry about that. Instead of copy and paste, I replaced it. So, yeah, always be careful yourself not to do that. For anthropic, after we add the node here and the
logo for anthropic we have to add it to
node components. So let's go ahead and
copy this add anthropic and simply add
anthropic node. After that we have to go
to node selector inside of source
components node selector. Go ahead and
copy this: anthropic, Anthropic, "Uses Anthropic to generate text," and use anthropic.svg. Go ahead and click plus and you will find the Anthropic node. Perfect. Now let's
go ahead and let's modify the dialogue
here. So you can go ahead and focus on
features
executions components anthropic dialog.tsx.
everything here will be the same replace
instances of open AI with anthropic. So
anthropic form values and anthropic
dialogue.
Change this variable to my anthropic for
example,
anthropic configuration.
My anthropic
and I think everything else can stay the
same. Perfect. Now, go back into node
right here. Make sure you are importing
anthropic dialogue and anthropic form
values. Make sure you use them here. And
make sure you are using the anthropic
dialogue. And now it should say
anthropic configuration, tailored to Anthropic. Perfect. Now let's do the
channel.
So we can wrap it up with the executor.
So inside of source inngest channels, copy either the OpenAI one or, my apologies, the Gemini one, and rename it to anthropic.
Inside of here change the instance of
open AI or Gemini to anthropic.
Name it properly
and make sure to export the constant
anthropic channel. Perfect. Immediately go inside of functions.ts and add anthropic channel.
There we go.
Now that we have that, let's go ahead
inside of source features executions
components anthropic. Let's go inside of
actions.ts
change this to be anthropic
token
import from anthropic channel. So
anthropic channel, replace these two
instances right here.
Then go inside of node.tsx,
change use node status to use anthropic
channel name
and fetch anthropic realtime token and
remove the import for fetch open AI
realtime token and remove the unused
channel name for open AI.
Then let's finally go inside of executor
here.
Let's change this from open AI data
to be anthropic data. Change this from open AI executor to anthropic executor.
And finally, change the ingest channel
to be anthropic
anthropic channel. Replace all instances
of open AI channel with anthropic
channel. Now what we have to do is go to @ai-sdk/anthropic and use createAnthropic. Again, if you don't have this, please install it. So
package.json
you can see my version right here.
Now that we have create anthropic we can
go ahead and create the anthropic value
here. Create anthropic.
Make sure that you change the
environment key to whatever it is in
your anthropic API key right here. It
can be empty, like mine; for example, I don't have that key.
Now, let's go ahead and use anthropic here. Change this to anthropic generate text, and change this again to... I have no idea. Sonnet 4.5? Is that the newest one? I guess.
Save that.
Then go inside of node.tsx in anthropic and configure the hard-coded model right here. Whoops, I didn't copy it properly, it would appear. So I'm going to copy it now and paste it right here. There we go.
So now I'm going to go ahead inside of
executor registry
and very simply I'm going to change this to use the anthropic executor.
There we go. That is how we add three
different AI models. So I'm going to go
ahead and remove this one. I will add
this one. And this one will most
definitely fail because I don't have an
API key. So my anthropic: you are a mathematician. What is 2 + 2? Click
save. Save up there. Let's go ahead and
just refresh in case it needs a refresh
because we added some new channels. And
let's click execute workflow. What I'm
hoping for is at least to see the
real-time status. And then a failure
which is true. And the error should be
because of a missing API key, which is
exactly what it says right here.
Amazing, amazing job. So, let's go ahead
and check if that's what we intended to
do. We intended to add Gemini, OpenAI
and Anthropic. Uh, my apologies for the
mess with the model thing. I just really
didn't like how I do it. I think it's
better to uh you know remove it entirely
than to have that broken version of it.
And this works just fine. It will not be
hard for you to you know improve on this
later. Think of it as a personal
challenge. Uh you already saw the code
for it. So you know how I would do it.
Uh just try and make it better, more
type safe. uh maybe try and make a fetch
API request to your back end um using
TRPC of course and then maybe your
backend can return you all the available
models. I think that should work this
way. They are always going to be up to
date which means you're going to need to
have some kind of spinner for your uh
select uh component. So yes, a bit more
complex but shouldn't be too hard for
you. You came this far. Amazing. Let's
go ahead and merge this now. So, 24 AI
nodes. I'm going to create a new branch.
24 AI nodes. Once I've created a new
branch, I'm going to go ahead and commit
24 files right here.
24 AI nodes. Oh, so chapter 24, 24
files. Great.
Uh, let's go ahead and commit and let's
publish the branch.
Once we've published this branch
let's go ahead and open a pull request.
And since this was a big one, I do want
to see what code rabbit will say, even
though it will probably be a lot of
repeated comments because we added three
identical things. So if it finds a
mistake in one of these, it will find
mistake in all three. So I will just try
to see the most useful comments from
code rabbit and I will show them to you.
And here we have the summary by code
rabbit. New features. We added three new
AI execution nodes: Anthropic, Gemini, and OpenAI, for enhanced workflow
automation. Each AI node supports
customizable system and user prompts for
tailored interactions. We integrated
real-time status monitoring for AI task
execution within workflows. Perfect. So
let's take a quick look at the sequence
diagram. And the first thing I notice
here is how it immediately understood
that these three nodes are exactly the
same because it didn't create a sequence
diagram for any specific uh AI model. It
created for all three of them right
here. So it all starts with the AI node
using react flow. Double clicking on
that opens the dialogue in which we can
configure the well model. Right. Once we
enter the form, we validate it using Zod
schema. We then submit those form values
and we save the entire thing. Once we
manually execute or with any other
trigger since we've added Google form
and stripe, you can use any of those. We
finally call the node executor. The
first thing we do for all of these are
publish the loading status. We then
compile handlebars templates in case
user added any variables to the system
prompt or the user prompt. We then
initialize each AI client respectively
using for now our API keys and then call
the generate text and then we populate
that in the context.
So, what are the comments here? Well
they're actually not that bad. So, in
here we have a typo and we have the same
typo, in, I, mean, I, have, I, don't, know if
you have but I was typing container
instead of contain
in anthropic exeutor I left open AI
warnings instead of anthropic ones. So
yes, good catch by code rabbit here.
Again, same typo for me here in some
other dialogue in the Gemini one. And
then in here, it's telling us to make
sure that we check if we have the
credential value. This is a very good
point and it is exactly what we are
going to do later when we implement
credentials. So this is just temporary.
We are later going to actually fetch the
credential and we're going to check if
it doesn't exist and we are immediately
going to throw an error just like this
one. So good catch by code rabbit but we
are one step ahead. That's exactly what
we're going to be doing in here. It
suggests adding defensive checks for
array access. That's a good idea. I
could add these question marks. You can
add them too. So this way you won't run
into any uh errors with accessing deeply
nested objects.
And I think all the other comments repeated themselves, as I said, since we have three identical pieces of code. So let's
go ahead and merge this pull request.
Great comments by code rabbit here. Now
let's go ahead and go back inside of our
main branch. As always make sure to
synchronize the changes.
And once you've synchronized your
changes, confirm that you have them in
the graph right here. Here they are. 24
AI nodes. Amazing. So I believe that
marks the end of this chapter. We pushed
to GitHub and we reviewed our pull
request. Amazing. Amazing job and see
you in the next chapter.
In this chapter, we're going to add
credentials. Credentials will be a great
addition to our previous chapter in
which we've implemented three different
AI nodes. The only problem is right now
those AI nodes are using our API keys.
But what we want is to allow the users
of our platform to bring their API keys.
So in order to do that, we need to
implement something called a credential.
We're going to basically create a credential schema, tRPC router, and client hooks, and then create the normal views that we have already implemented
for workflows and workflow list. But
we're going to do the same for
credentials. So users will be able to
paginate through their credentials,
search for them or maybe sort them by
type. So let's start by adding the
credential schema and slowly going all
the way to this which is adding the
credential dropdown to each AI node
which will finally allow our users to
select which credential or in other
words which API key they want to use for
that AI node and then you will finally
be able to remove your API keys from the
environment file. So let's start with
the schema. I'm going to go inside of
Prisma schema.prisma
and let's go ahead and just above
workflow let's create model credential.
Let's go ahead and copy the ID because
it's going to be the same. Name of each
credential will be a required string.
Value will be a string as well. Let's
copy the timestamps
and let's create a relation with the
user. So user ID is going to be a string
and then user will be a foreign key
relation. So user a type of user
relation
fields user ID references
ID on delete cascade.
And let me just zoom out so you can see
how this looks in one line. And another
relation it's going to have will be with
the node that it will be assigned to. So
now to fix these errors, we also have to
add them to their respective schemas.
Let's start with the user one. So let's
find user. It's right here. Great.
And now let's go ahead and just do
credentials
credential like this.
And then if you go ahead inside of the
credential you can see that user is
completely resolved.
Now we have to do the same for the node.
So let's go ahead inside of model node
and what we're going to do is the
following. Let's add credential ID
to be an optional string because not
every node will need to have a
credential, right? Only those which use
API keys like AI nodes. So credential
will be a type of credential again
optional relation fields credential ID
references ID and this time we're not
going to add cascade because if we
remove a credential it shouldn't delete
the node as well. The node is just going
to fail. But that's fine. We don't want
to alter someone's workflow just because
they deleted a credential that they were
using somewhere.
So let's go ahead... or perhaps
this won't even allow deleting a
credential which is being used. So maybe
we can explore onDelete:
either NoAction or maybe SetNull. That
could be one of the options. But for now,
just leave it like this. Great. So once
we added credential ID and credential
both optional to model node there should
be no more errors here either with the
user relation or with the node relation.
What we have to do now is we have to
implement something called credential
type. Now this isn't really required but
it will improve user experience. So
let's create an enum credential type and
let's give it OpenAI, Anthropic, and
Gemini, and then let's go ahead and make
another property inside of the
credential model type credential type
basically make it required.
So user will have to choose all right
for which one of these are you creating
an API key for and then later if you
have more models you can extend it. This
just makes it easier for the user to
categorize their credentials. But you
can of course choose if you want to do
this or not.
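Putting all of the pieces from this section together, the credential schema sketched above looks roughly like this. Treat it as a sketch: the `@id` default and the exact enum value casing are assumptions, since the transcript copies the ID field from an existing model without showing it, and the `onDelete: SetNull` on the node side is just one of the options discussed:

```prisma
enum CredentialType {
  OPENAI
  ANTHROPIC
  GEMINI
}

model Credential {
  id        String         @id @default(cuid()) // assumed default, copied from existing models
  name      String
  value     String // TODO: consider encrypting in production
  type      CredentialType
  createdAt DateTime       @default(now())
  updatedAt DateTime       @updatedAt

  userId String
  user   User   @relation(fields: [userId], references: [id], onDelete: Cascade)

  nodes Node[]
}

model Node {
  // ...existing node fields...
  credentialId String?
  credential   Credential? @relation(fields: [credentialId], references: [id], onDelete: SetNull)
}
```

The asymmetry is deliberate: deleting a user cascades to their credentials, but deleting a credential only detaches it from nodes so existing workflows are never silently altered.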
Once we have that added, let's go ahead
and push those changes. Npx Prisma
migrate dev.
We can name this migration credential
schema.
And once you press enter, it should
synchronize the database with your
schema. As always, make sure you restart
your next server and your ingest server.
And then inside of your local host 3000
if you have it running, just make sure
to refresh it. So, I'm just going to do
that. Instead of localhost 3000, just
make sure it refreshes.
Great. So, we've handled the schema.
Now, let's go ahead and let's add the
router. So, I'm going to go ahead inside
of source
features and let's go ahead and create
credentials.
Inside of credentials, I'm going to
create a server folder and let's go
ahead and let's copy the workflows
routers.ts.
And let's paste it here.
And now we're going to go ahead and
rename that workflows routers to
credentials.
So
let me just go ahead and select this
credentials
router. So make sure you're doing that
inside of your new credentials folder
right here. You can immediately remove
the execute one because we're not going
to need it. Which means you can also
remove these unused imports.
Great. We will have the create one and
you can choose for yourself. Do you want
this to be a premium procedure or not?
I'm going to say yes. This should be a
premium procedure. So only those who are
subscribed can add uh credentials.
Let's go ahead and add an input here
because this will not be automatically
named like workflow. This will be
something user has to fill in. And what
user has to fill in is the following.
They have to give this a name. So let's
go ahead and make this required. Name is
required
type, which will be an enum of credential
type. You can import this from generated
Prisma which we've just added. And
finally let's do the value basically the
API key.
And let's change this to be value is
required. Then besides context here
we're also going to have the input.
From here you can destructure
name, value, and type from the input.
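As a hedged, dependency-free sketch of what the input checks amount to (in the actual build this is a zod object with required-string messages and the generated Prisma enum):

```typescript
// Plain-TypeScript equivalent of the zod input checks for the create
// procedure. CREDENTIAL_TYPES mirrors the CredentialType enum; the
// uppercase casing is an assumption for illustration.
const CREDENTIAL_TYPES = ["OPENAI", "ANTHROPIC", "GEMINI"] as const;
type CredentialType = (typeof CREDENTIAL_TYPES)[number];

interface CreateCredentialInput {
  name: string;
  type: CredentialType;
  value: string;
}

function parseCreateCredentialInput(raw: unknown): CreateCredentialInput {
  const input = raw as Partial<CreateCredentialInput>;
  if (!input?.name) throw new Error("Name is required");
  if (!input?.value) throw new Error("Value is required");
  if (!CREDENTIAL_TYPES.includes(input.type as CredentialType)) {
    throw new Error("Invalid credential type");
  }
  return input as CreateCredentialInput;
}
```

The zod version gives you the same guarantees plus inferred types, which is why the router uses it instead.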
And let's go ahead and create a new
credential here. So we can keep this
return as is. Actually instead of doing
prisma.workflow let's do
prisma.credential.create.
Let's go ahead and use name. User is
correct. Let's remove the nodes object.
And let's go ahead and pass in the type
and the value. And I'm just going to add
a little to-do here. Uh consider
encrypting
in production. So what's the deal with
encryption and API keys? Well, obviously
if you are storing other people's API
keys, you should consider encrypting
them. In fact, in my previous tutorial
which was a B2B Intercom clone called
Echo, I used Amazon Secrets Manager to
do this. But interestingly enough, when
I told people that I did this, they were
quite surprised because most of them
consider API keys to be something you
can easily rotate and delete if it gets
leaked. That is technically true. But
always think of if you have, you know
thousands of users and thousands of
users trust you with their API keys and
if your database gets compromised, uh
even though it's not exactly the end of
the world for them because they can
rotate API keys quite easily, some of
them probably didn't create an API key
just for your page. They have probably
been using the same API key everywhere.
And now because of your database leak
they have to get rid of all of those
other places where they have been using
that API key. So yes, it is obviously
not the best practice to store an API
key as a string in your database. Uh I
would highly suggest looking into Amazon
Secrets Manager to handle this. I have
the exact tutorial doing this in my
previous project echo. Uh I will leave a
link somewhere here on the screen so you
can take a look at it. But then again uh
I have heard of people just storing API
keys in the database plainly like this
because yes they can be very easily
rotated from the dashboard.
So for now let's go ahead and do this
but please add a comment like this at
least so you are aware that this is
something you should consider doing.
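If you want something between plain-text storage and a managed service like AWS Secrets Manager, here is a hedged sketch of encrypting the value at rest with AES-256-GCM from Node's built-in crypto module. The storage format (hex, dot-separated IV/tag/ciphertext) is an assumption for the demo; in practice the key would come from an environment variable, not be generated at runtime:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Demo-only key; in production, load a 32-byte key from an env var.
const key = randomBytes(32);

function encrypt(plaintext: string): string {
  const iv = randomBytes(12); // 96-bit IV, as recommended for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Persist IV and auth tag alongside the ciphertext so decryption can verify integrity.
  return [iv, cipher.getAuthTag(), ciphertext].map((b) => b.toString("hex")).join(".");
}

function decrypt(stored: string): string {
  const [iv, tag, ciphertext] = stored.split(".").map((h) => Buffer.from(h, "hex"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // throws on tampered ciphertext
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

You would call `encrypt` in the create and update procedures before writing `value`, and `decrypt` in the executor right before making the API call.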
Perfect. So we can now successfully
create the credential. Now let's go
ahead and let's do the remove one
because it should be quite simple as
well. So we need the ID and instead of
workflow we're doing credential. There
we go.
And uh let me just see delete. All
right. So this will basically throw if
not available. Yeah, I think this is uh
perfectly fine as is.
Uh now let's go ahead and let's
implement the update one. So again uh I
like to leave these to be protected
procedures rather than premium ones
simply because it's better user
experience.
Then again we don't need nodes. We don't
need edges. But what we do need is name
type and value. So you can just copy
those from above and add them here. Then
you can destructure the name, the type
and the value.
Instead of workflow here, we can do
credential
find unique or throw.
And let's go ahead and actually remove
this entire thing. We don't need we
don't need it to be this complicated at
all.
So let's just do if there is no
credential.
Uh actually yeah that should not be
possible because we're using find unique
or throw. So what we can do is we can
just update it. Const updated credential
or let's just do return
prisma.credential.update
where
ID is
the ID, user ID is context auth user ID.
And let's pass in the data with the
name, the type, and the value. And let's
go ahead and add to-do consider
encrypting
the same comment we added above. So
consider encrypting in production.
And in fact, we don't even need this
then because this will throw if it
doesn't exist or if the user ID is
invalid. There we go. So a very simple
update procedure. We don't need the
update name procedure for this one. So
we have handled remove, we have handled
update. Let's handle get one. Again
protect procedure Z dot string for the
ID context and input right here. And uh
this time we don't need to do any of
this transformation. The code should be
much much simpler. Now
we can just do return directly
prisma.credential
find unique or throw,
remove the include, and that's it. I'm pretty
sure that's the only thing we have to
do. Get one finished.
Uh now we have to implement get many. So
again protected procedure we will still
have the page page size number minimum
maximum default search
extract all of them here. And we still
need a promise all for items and the
total count. Let's go ahead and do
prisma.credential.find
many. I think everything here can stay
exactly the same. Make sure you change
this to credential too.
And let's see. So where user ID name
contains mode. Perfect.
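The page/pageSize plumbing this getMany shares with the workflows router boils down to a little arithmetic. A hedged sketch of that shared logic (helper name and exact shape are mine, not from the build):

```typescript
// Pagination math shared by the getMany procedures: page/pageSize in,
// Prisma skip/take plus the metadata the client needs out.
function paginate(totalCount: number, page: number, pageSize: number) {
  const totalPages = Math.ceil(totalCount / pageSize);
  return {
    totalPages,
    hasNextPage: page < totalPages,
    hasPreviousPage: page > 1,
    // What you would pass to prisma.credential.findMany:
    skip: (page - 1) * pageSize,
    take: pageSize,
  };
}
```

For example, 25 credentials at 10 per page gives 3 pages, and page 2 has both a next and a previous page.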
And we could do the following here.
We could go inside of Prisma credential
find many.
We could manually select ID, name, type
created at, updated at, but purposely
don't add value for security since uh we
are now storing it as plain text. So at
least at this level
let's make sure we don't show it to
everyone right?
And we could in fact do the same thing
inside of our get one here. So select
let's go ahead and just do the exact
same thing.
ID name type created at and updated at.
But then again since we are using the
user ID they are the ones who are
allowed to see that. Uh so maybe we are
just creating problems here. Let's
remove the select. Sorry for changing my
mind uh so often. But this is the way I
develop apps, right? I change my mind
often and I try to come to some solution
that I like. So for now, yes, since this
is not a public API, it is very strictly
for this user, they kind of already know
what their API keys are. So let's leave
them here.
All right. So the logic for total pages,
has next page, has previous page is
exactly the same. So no need to change
this at all. Uh one more thing will be
added here which will be called get by
type.
Get by type will be a protected
procedure.
It will have an input which will be an
object, and the object will very simply
accept the type enum.
And then let's go ahead and do query
asynchronous
extract input
and context.
Let me go ahead and fix this. There we
go. And then inside of here, let's go
ahead and destructure
the type from the input. Let's go ahead
and fetch all credentials
from await prisma credential find many
where we have a matching type and user
ID is context auth user ID. So for this
one we won't add any pagination, simply
because this will be used inside of a
dropdown.
So let's just go ahead and return every
single result
and we can just directly do return here.
And then you don't need to mark this as
asynchronous.
There we go. So we just finished the
entire router here. Let me just check if
I have any unnecessary
asyncs here. I don't think I need async
if I'm just directly returning. I could
be wrong, but I'm pretty sure I don't
need it.
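For the avoidance of doubt, the intuition here is right: a function that only returns a promise behaves the same to the caller whether or not it is marked async. A minimal sketch:

```typescript
// Both of these hand the caller a Promise<number> that resolves to 1;
// the `async` keyword on the first is redundant when you return directly.
async function withAsync(): Promise<number> {
  return 1;
}

function withoutAsync(): Promise<number> {
  return Promise.resolve(1);
}
```

The one practical difference: inside an `async` function, a thrown error becomes a rejected promise automatically, whereas a plain function would throw synchronously before any promise exists.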
It won't change anything if you do have
it or don't have it. If the case is true
that you don't need it, let's remove
these unused imports. Perfect. And now
let's go ahead inside of TRPC
routers_app
and let's add credentials.
Credentials router and make sure that
you import it. There we go. Now that we
have the credentials router, let me go
ahead and mark that as completed.
Now let's go ahead and implement the
hooks. Once again, we can copy this from
features
workflows hooks.
So I'm just going to go ahead and copy
this and inside of credentials, I'm
going to paste it here. Let's start with
use workflows.
I'm going to rename it to use
credentials.
Let's go inside of use credentials. And
now let's go ahead and just you know
change things up. So hook to fetch all
credentials using suspense which will be
called use suspense credentials
for the params. Yes, let's leave them
like this now for now and let's just use
drpc.credentials.get
many. There we go.
Hook to create a new credential.
use create credential
and let's go ahead and do TRPC and let's
create let's change both instances. So
TRPC.credentials.create
this will be credential created failed
to create credential.
Let's go ahead and replace this hook to
remove a credential.
Use remove
credential
and then again just replace these three
instances with credentials.
Change this to be credential
removed
hook to fetch a single credential
using suspense. So use suspense
credential
tRPC.credentials.getOne, like that.
Hook to update a workflow name can be
removed entirely and we can immediately
go to a hook to update workflow. Rename
it to credential. This will be use
update credential.
Replace all of these instances to be
credentials. Change this to be
credential
saved and this to be failed to save
credential.
We don't need the execute hook at all.
And that is it. Great. A lot of you ask
me, whenever I do this kind of very
similar code, why don't I just
create an abstraction? Well, you
absolutely could. And we kind of did
you know, with entity components, right?
We have entity item, we have entity
list, right? This is an abstraction
right? But sometimes I just don't like
them, especially for like hooks like
this. Um, this is a tutorial, so
obviously I'm making it kind of easier
for myself uh by having both credentials
and workflows have the exact same hooks.
So, obviously for you, it seems like I
could have just created an abstraction
here. But chances are in real world in
production, your credential hooks and
your workflow hooks will probably be a
little bit different, right? So because
of that I recommend not always rushing
to create an abstraction. I think more
often than not it will lead to
complicated code. Sometimes having
explicit separations is better in my
opinion and I would rather have very
similar repeated code than magic
abstractions that are super hard to
maintain and understand. So that's why.
Great. One more thing we have to fix
this is the use workflow params. So
let's now go ahead instead of use
workflow params.
Let's change it to use credentials
params. If it asks to update imports
you can select yes. And the only one it
should update is this one instead of use
credentials
right here. So you can press save. It's
still importing this. That's fine. We're
going to change that now. So we also
need the actual params here. So, let me
go ahead and copy that workflows
params.ts.
Let me copy that file. Add it inside of
the credentials folder.
Page page size search. I think all of
this is true. But maybe
we will also need a type. Um, I don't
know. Let's let's leave it like this for
now. Great.
So inside of use credentials params, this
params import should now exist. There we go.
But we have to rename them. So inside
of the credentials folder, in params.ts, just
change this to be
credentials
params. And the answer to why I don't
abstract this is exactly the same.
Right? Again I'm making this easier for
myself and for you because this is a
tutorial. Right? But chances are you
will have different query options for
credentials than you would for
workflows. And that's why I avoid
creating these magic abstractions, right?
It's completely fine to have similar
code. I really don't like obsessing with
optimizing every single code repetition
that you can find. I find it way easier
to code in an environment like this than
just having a billion magic
abstractions. So, let's go instead of
use credentials now and let's go ahead
and replace this with uh oh, I didn't
rename it. My bad. Use credentials
params.
There we go. And I think this is the
only place that we actually use that.
Yeah, perfect. So now what I like to do
is I like to right click on credentials
find in folder, and let's search for
workflow. Perfect. Workflows. Nothing.
which means we have very successfully
created all the hooks required for this.
Um but I do think that we need to add
one more hook here
and that will be to fetch credentials by
type and that should be quite easy.
Let's go all the way to the bottom here
and let's add it. So instead of use
credentials,
let's add a hook to fetch credentials by
type. So use credentials by type. Only
prop it's going to accept is the
credential type enum from generated
Prisma. And then it will use uh use
query. Let me just check. Uh
is that the single query? Let me just
Why am I not having use query?
Oh yes yes yes. So this will not be use
suspense query. This will yes just be a
normal use query. I was um surprised why
don't I have use query already imported
here but I forgot that I use use
suspense query where possible but use
query is needed for this specific type
because it will be used in a dialogue it
will not be able to be prefetched I mean
technically it could but it's just
simpler this way
so that's it for hooks perfect now we
can go to the page which is basically
the server loader. So I'm going to go
inside of source app folder dashboard
rest and we already have credentials.
Perfect. Let's go inside of page.tsx
here. So we already have require out.
What the only thing we need to do now is
we need to prefetch our credentials. We
can do that by uh creating the params
loader. So we have all the you know
search filters, pagination filters, etc.
Uh and then prefetching those. So let's
first define type props search params
promise.
Let me add that here. Promise, search
params from nuqs.
Then let's go ahead down here.
Let's dstructure the search params.
Great. So great beginning. But now we
have to go back inside of features
credentials
inside of server here. Go ahead and
create params.ts.
Let's import create loader from nuqs
forward slash server.
And let's import credentials
params from dot dot slash params and export
con credentials params
loader to be create loader and then pass
in the params in here. So this is the
exact same thing that we have in the
workflows. If you go inside of server oh
so it's called params-loader. Good idea.
We should call it that. So, let me go
ahead inside of credentials and just
rename this in the server to be
params-loader.
That's a better name. And then we also
need to copy prefetch.ts.
So, let's copy that and let's paste it
inside of credentials server.
prefetch.ts.
Now, again, we're going to have to
modify this a little bit, but it's just
two of these. So instead of this being
the input, it's going to be credentials
dot get many. So prefetch all
credentials,
and this will be prefetch a single
credential. So prefetch credentials uses
tRPC.credentials.get
many, and prefetch credential, as in one, uses
tRPC.credentials.get
one. There we go. As simple as that.
And I think that we are now ready to go
back to our page.tsx
here. Perfect.
So now that we have that, let's go ahead
and do const params await create my
apologies. Credentials params loader
search params.
There we go. So credentials params
loader from features credentials server
params loader
and then let's go ahead and just do
prefetch credentials
make sure multiple of them right from
features credentials server prefetch and
pass in the params. So this one prefetch
credentials all of them right when you
hover over this uh it should say
prefetch all credentials.
There we go.
So now that we have this we can go ahead
and add the hydration boundary here. So
hydrate client.
Let me just see which one do we have.
hydrate client from TRPC
server
error boundary
from React error boundary
fallback
and let's just do error
suspense
which you can import from React with a
fallback
loading
and for now, let's just go ahead and do
to-do
credentials list.
All right. So, what I want to do is I
just want to check if that's exactly
what I do with the workflows. So
workflows page.tsx
uses hydrate client, which is exactly
what I use here. So, hydrate client
which is basically hydration boundary
with the hydrate thing. Perfect.
Great.
Once we have this uh we are now ready to
create the client side. So we just
finished this server loader. We now have
to create the client hydration.
So I'm going to go ahead and go back
inside of my features folder credentials
and I'm going to create components I
believe. Or maybe is it a UI folder
first? I'm not 100% sure. I think it's
just this. Yes. And let's go ahead and
do credentials.tsx.
There we go. And now I'm also going to
open workflows components workflows.tsx.
And uh well, I think that we can copy
everything from here and just paste it
inside. And then we're going to work our
way through refactoring this. So yes
we're not going to be using use create
workflow or use remove workflow or use
suspense workflows from this hook.
Instead, when we are working inside of
the credentials feature, our hook folder
has these two. So let's change the
import first use credentials.
Same thing for this use credentials
params.
Leave these to be errors for now. Let's
just work through renaming the main
things first. So this will be
credentials search
credentials.
This will be workflows uh not workflows
list it will be credentials
list.
Uh
okay let's leave this as is. This will
be credentials
header.
The title here can be credentials.
Create and manage your
credentials. And this will be new
credential.
Then for the pagination same thing. So
credentials pagination.
For the container, same thing
credentials container. And now we can
replace all these with credentials
equivalent. So credentials header search
and pagination. For the loading, error,
and empty all same thing. Credentials
loading error and empty. Loading
credentials.
All right. Empty view. You haven't
created any credentials yet.
Get started by creating your first
credential
workflow item
will be credential item.
We can leave this as is. Okay. So a lot of
things changed. So let's start with the
simple change here. Let's find all
instances that use use workflows params.
So one, two, looks like three of them
right? Yes. three use workflows params
and let's change all of them to be use
credentials params
so I have changed this import I have
changed it inside of credentials search
and I have changed it inside of
credentials pagination. So if I search
for use workflows params I should find
no results here and you shouldn't either
and now we're going to do the same for
use create workflow so Uh this one
actually won't be needed at all here.
You can remove use create workflow
entirely. Let's just find where it's
used. Use create workflow. Yes. So
credentials header will not have this at
all. Let's remove it. And handle create
will very simply just do router.push
push to credentials
new
because we're going to need a form to
create it and this is where the form
will be rendered
and the upgrade modal will also
not be needed here. So you can remove
this, you can remove the fragment of
wrapping it and you can remove is
creating
and now we won't have oh we can actually
I think we can do new button href yes
credentials
forward slash new and then remove on new
and then remove this and remove this.
much simpler now. That's why we've
created this so we have an option to
choose whether we want to do a function
or an HTTP redirect
credentials pagination. Okay. Um, let's go
back here. Uh do we still need use
workflow use router? We do. Let's now
find use remove workflow. Uh I think
these are only two instances. Perfect.
So inside of credential item, you
should rename it in the
import, and here, to use remove.
I have no idea how I changed that so
badly. So use remove
credential.
There we go. Use remove credential.
Instead of credential item, let's now
call this remove
credential too.
The href should go to forward
slash credentials,
and uh image here. It can just be key
icon. I don't know. Let's just leave it
to be this. And let's just use remove
credential is pending. Leave this to be
workflow. We're going to handle the
details later. Now we have use
suspense workflows. And I think this is
also used in not too many places. So
just three places. And let's change it
to be use suspense
credentials. And I click this to replace
all. So let's see exactly where. The
first place is here in the import. The
second place is in the credentials list.
The third place is in the credentials
pagination. Make sure you have changed
all three to use suspense credentials.
Perfect. Now let's fix one by one.
inside of credentials list inside of
entity list here. First things first
this is now credentials.
So let's change that. This, this, and this
is all a single credential. We no longer
have workflow item nor workflow is
empty. So let's just do credential item
and credentials empty.
There we go. So no more errors in the
import. That's great. Let's scroll a bit
down to see what's going on here. Inside
of credentials, empty. We still have
create workflow. I see handle create
here. So on new, let me see. Empty view
only accepts on new. Got it. So we're
just going to go ahead and use the
router push here. So instead of handle
create
router push to credentials
and then just to new. We can then remove
this. We can remove upgrade modal. We
can remove the fragment and everything
and make it just that much simpler.
Perfect.
I think that's a lot of things resolved.
We can now remove the use upgrade modal
import from here.
So now what I want to do is I want to
change um the icon. Yes. So the icon uh
let me try and find a nice way to do
this here. In fact, I think it's time to
render this so you can actually see what
we're doing because we just changed a
lot of code. Uh but it's all code we've
seen before. So we know how this looks
right? So let's just go ahead and go
back inside of our dashboard rest
credentials page.tsx
and let's render the credentials
uh list
from features credentials components
credentials.
And I think that now finally if you go
ahead and refresh this and click on
credentials, you should see a very
similar look, but it should say no
items. You haven't created any
credentials yet. Get started by creating
your first credential. And if I click
add item, it should redirect me uh to
credential ID new, which is technically
correct, but obviously we will change
that later. Uh, but at least the redirect
is working. Great. Uh so now also what
we have to do is we have to go inside of
credential uh my apologies inside of
page here and we have to add the
credentials container
like this
and let me just see
so credentials container
you need to import that from the same
list where you've imported credentials
list from features credentials
components credentials
So credentials container simply has the
header, search, and the pagination, and
some additional styling to make this
centered. There we go. So it was that
easy for us to create the same layout
that we have in workflows and that's why
I didn't render it until now because it
is exactly the same. It's nothing you
haven't seen before. Uh but yes, I think
now it might be time, you know, to start
seeing what we actually changed code
for. So you can start to notice if there
are any bugs so you have time to fix it
because what we need to develop now is
this the new page right. So we already
have uh the reason this is not showing
404 is because it is going to
slash credentials/new,
and if you take a look here in the
dashboard
we have that that is this but that's not
exactly what we want. So we can very
simply override
I mean like allow every single
credential ID to be loaded here except
new by creating a new folder and
literally calling it new. And now if you
go ahead and create a page.tsx here and
go ahead and just do div form to create
new
you will see that now that's what's
rendered here because my current URL is
the following and yours should be too.
So this is my current URL
forward slash credentials/new.
But if I change this to one, two, three
and go here, then you will see the
difference. Right? Now I'm using the
other folder.
I hope you understand. Right? So if we
hit anything that isn't keyword new
it's going to be using this dynamic
loader to load the ID of the credential.
But if I literally type in forward
slash new, it will redirect to this page
right here. And this is our chance to
build the form to allow the user to
create a new credential.
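The rule at work here is Next.js's route precedence: a static segment folder like `app/credentials/new` always wins over a dynamic one like `app/credentials/[credentialId]`. As a hedged illustration (this is not Next.js's actual matcher, just the decision it makes for this one route):

```typescript
// Toy model of the precedence rule for this route: the literal "new"
// folder matches first; anything else falls through to the dynamic
// [credentialId] segment.
function resolveCredentialsRoute(segment: string): string {
  return segment === "new"
    ? "credentials/new/page.tsx" // static segment wins
    : "credentials/[credentialId]/page.tsx"; // dynamic fallback
}
```

So `/credentials/new` renders the form page, while `/credentials/123` loads credential `123` through the dynamic route.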
So let's go ahead and just go back
inside of credentials page.tsx
and let's go ahead and use our
credentials error component here so we
don't forget that.
And let's use our credentials loading
here so we don't forget that either. Uh
so now if I go back inside of
credentials itself, it should have just
a tiny bit nicer experience for the
loading and for the error if it happens.
And now we can go ahead and entirely
focus on the new page. So this will be
an asynchronous server component. And
first thing we're going to do is we're
going to require out. So it redirects
the user if they are not logged in. Then
let's go ahead and give it some styling
here. So for this div, I'm going to give
it a class name of padding 4, medium of
px10.
My apologies. Uh on medium break point
give it a px of 10. And on medium, give
it a py of six and full height. And then
we're just going to go ahead and kind of
limit the maximum width that our form
container will be available to be in. So
that is this div right here.
I'm going to go through the classes now
don't worry. So, div class name MX auto
max width screen MD full width flex
flex column gap Y8 and height full.
Now, let's go ahead and render
credential
form.
Since this does not exist, naturally
it's going to throw an error. Now let's
go ahead inside of features credentials
components.
Let's create a new file credential.tsx.
So a single one, not multiple ones.
Let's mark this as use client.
And let's start by creating an interface
credential form props.
Initial data that will be accepted will
be an optional ID because this can
either be used as a new credential form
or as an update credential.
And besides the ID, we're going to have
a regular name type, which is a type of
credential type
credential type from generated Prisma
and value, which will be another string.
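Sketching that props shape in code (the union type stands in for the generated Prisma enum, and `isEditing` mirrors the `initialData?.id` check described a bit further down):

```typescript
// Props for the credential form: initialData is optional as a whole,
// and its optional id is what distinguishes edit mode from create mode.
type CredentialType = "OPENAI" | "ANTHROPIC" | "GEMINI";

interface CredentialFormProps {
  initialData?: {
    id?: string;
    name: string;
    type: CredentialType;
    value: string;
  };
}

function isEditing(props: CredentialFormProps): boolean {
  // Double negation coerces the possibly-undefined id to a boolean.
  return !!props.initialData?.id;
}
```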
Great. Now, let's export const
credential form.
Now, let's assign the credential form
props here.
Let's go ahead and the structure the
initial data inside.
Great. Now let's go ahead and let's
prepare some hooks. So we're going to
need
uh four of them. Router from use router
which we can import from next
navigation.
Use create credential which we can
import from dot dot hooks use
credentials. Use update credential from
the exact same place.
And then finally, the premium one, use
upgrade model.
So this is much easier for us because
we've already created all of these
components before.
Let's define if this will be editing or
creating a new credential by very simply
checking if we have initial data
question mark ID.
Then let's go ahead and define the form.
The form will be quite easy. Const form
will use use form from react hook form.
It will use form values which I forgot
to implement. So let's leave it empty
for now. It will use zod resolver from
hook form resolvers zod. So these two
are the ones we've added. And now in
order to add form schema and form
values, we're going to have to add a zod
and define the schema. The default
values will either use the initial data
prop or they're going to fall back to
empty name, empty value and a default
type of open AI.
Now let's go ahead and let's create the
form schema. So the form schema will
have a name, type, and a value. We can
import z from zod. Name will be a
string, value will be a string, and type
will be an enum of CredentialType.
And let's go ahead and do type FormValues
here: z.infer<typeof formSchema>.
There we go. We should now have all of
those errors completely resolved.
Perfect.
Now let's go ahead and let me create one
factory map here to properly load the
logo of each of our providers. So const
credential type options.
And now let's go ahead and create value
credential type
openai
label open AI
logo forward slash logos openai.
SVG.
Let's copy them twice. Let's change this
one to be anthropic and this one to be
Gemini.
So let's change the label and logo
accordingly.
There we go.
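As a sketch, the factory is just a typed array of value/label/logo triples; the union type here stands in for the generated Prisma enum, and the logo paths assume the SVGs live under public/logos:

```typescript
// Stand-in for the Prisma-generated CredentialType enum.
type CredentialType = "OPENAI" | "ANTHROPIC" | "GEMINI";

interface CredentialTypeOption {
  value: CredentialType;
  label: string;
  logo: string; // path under the Next.js public folder
}

const credentialTypeOptions: CredentialTypeOption[] = [
  { value: "OPENAI", label: "OpenAI", logo: "/logos/openai.svg" },
  { value: "ANTHROPIC", label: "Anthropic", logo: "/logos/anthropic.svg" },
  { value: "GEMINI", label: "Gemini", logo: "/logos/gemini.svg" },
];
```

The select field later maps over this array to render one option per provider.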
Now that we have that factory here, we
can go ahead and start building our
forms. So, let me go ahead and just add
some other imports that we're going to
need. Starting with all the necessary
form components. Form, form control
form field, item label, and form
message. Then, we're going to need the
import, my apologies, the input. Then
we're going to need select select
content item trigger and value.
And let me see. Let's go ahead and also
add useSuspenseCredential in addition to
useCredentials here, because if this is
an update we will need to load the
existing credential.
Uh okay. Now, let's also go ahead and
let's add everything we need from card
card content, description, header, and
title. And let's go ahead and let's add
button.
And let's go ahead and let's add
uh do we need error view like this? Um
yeah, let's not add this. We're good
without that. And inside of next
navigation here, let's also add use
params. And let's also import image from
next image. I think that's all the
imports resolved now. Now we can go
ahead and build in peace. So I'm going
to go ahead and start
with return here. Card
class name shadow
none.
Now, let's go ahead and add card header
card title, and we're going to check is
this an edit. In that case, edit
credential
otherwise create credential.
And we're just going to do the same
thing for other things. Now, for
example, description, either update your
API key or add a new API key.
Then let's add card content here.
Let's render the form. Let's go ahead
and spread the form constant.
Let's render a native form element on
submit for now. Let's do form handle
submit like this. Let's give it a class
name space-y-6. You can leave the error
as is for now. And let's go ahead and
render this form field with control form
dot control name of name render
of field
form item form label.
Let me fix the typo here.
name
form control input
placeholder my API key
and spread the field property.
So let me go ahead and try and zoom out
a bit so you can see how it looks in one
line.
Then add form message which is a
self-closing tag which will display any
errors if they appear.
In order to fix the handle submit, we
have to implement it. So I'm just going
to do const on submit here. Asynchronous
uh values type of form values.
Let's check if this is edit and if we
have initialData?.id: await
updateCredential.mutateAsync with an ID
of initialData.id, and simply spread the
new values. Else, let's go ahead and do
await createCredential.mutateAsync,
pass in the values, and let's go ahead
and do onError here.
Grab the error handle error and pass it
along. So what is handle error? We have
it here in use upgrade model.
So what we also have to do is mark this
entire thing in a fragment like so.
and render a model inside.
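The create-versus-update branching can be sketched in isolation like this, with the two tRPC mutations stubbed as plain async functions (the names mirror the hooks above but are otherwise assumptions):

```typescript
interface FormValues { name: string; type: string; value: string }

// Stubs standing in for createCredential.mutateAsync and
// updateCredential.mutateAsync from the tRPC hooks.
type CreateFn = (input: FormValues) => Promise<void>;
type UpdateFn = (input: FormValues & { id: string }) => Promise<void>;

async function onSubmit(
  values: FormValues,
  initialData: { id?: string } | undefined,
  create: CreateFn,
  update: UpdateFn,
): Promise<"created" | "updated"> {
  const isEdit = Boolean(initialData?.id);
  if (isEdit && initialData?.id) {
    // Editing: send the id along with the new values.
    await update({ id: initialData.id, ...values });
    return "updated";
  }
  // Creating: just forward the values.
  await create(values);
  return "created";
}
```

In the real component, the handleError callback from useUpgradeModal would be passed as onError to both mutations.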
Now we can quickly go back inside of
page new and just import the credential
form from features credentials
components credential, and you can
already see it start to form: create
credential, add a new API key. Perfect.
Let's go inside of credential here and
let's continue developing this. So, pass
in onSubmit here.
now besides that single form field.
Let's go ahead and add a new one. So
form field again.
We can go ahead and copy these two.
This will be now for the type. We can
copy the render as well. But the content
inside will be slightly different
because this will be a select. So, we
start with form item form label
type and then we add the select
component.
The select component will have two
fields on value change and default
value. In here, let's add form control.
Let's add select trigger. Let's give it
a class name of full width. Whoops.
Inside of select trigger, let's render
select value with a self-closing tag.
Outside of form control, let's add
select content. And in here, let's do
credentialTypeOptions.map.
Let's get an individual option here.
Render select item here.
And let's pass in the key to be option
dot value and value to be the very same
thing.
And now inside of here
uh, let's do a div with a class name
flex items center gap 2 and render an
image. The image should have source
option dot logo alt option dot label
width 16 height 16 and render option dot
label.
There we go.
And now here just add form message.
There we go. And finally, one more thing
that we need is a super simple form
field. So after this one
let's just go ahead and add this.
So the same as the first one, a super
simple form field which controls the
value prop and it uses the API key as
the label and inside of form control. It
very simply renders an input with a type
of password, a placeholder with
something to tell the user that this is
supposed to be the API key and it
spreads the field property here. There
we go. And renders the form message, of
course.
Now just before we end the form
we should also render the submit
buttons. So let's create flex gap 4
button
type
submit
disabled will be either if create
credential is pending
or
if update credential is pending then
let's go ahead and very simply choose
what to render inside. If it's edit then
update otherwise create and finally
another button next to it with a type of
button variant outline which on click
will simply redirect to credentials. I
think we can even make this simpler
maybe using next link
adding an href to
credentials prefetch to make it faster.
remove on click and add as child. I
think that's a better practice to do.
Perfect. I think that's it. Uh so let me
check. Did I need use params at all? Um
we are going to need it later but not
now, not just yet. So let's check if
this works inside of credentials. Right
now I have no items and if I click add
item test credential type open AI test.
Let's click create
test credential created. So if you want
to you can redirect the user to that
newly created credential.
So you could add onSuccess here, grab
the data, and do
router.push(`/credentials/${data.id}`).
You could do that. You could go back to
the list of all of your credentials
whatever you think is a better user
experience.
Uh, but if you now go back to
credentials, you should see your new
test credential right here. Let's test
if delete is working. There we go. That
seems to be working. And now we're going
to test if my redirect is working. There
we go. That is working too. Great. Now
let's go ahead and let's fix which icon
shows here.
So in order to render the proper image
we have to go back to credentials.tsx
right here. And we are very simply going
to go and find the credentials list.
uh more specifically maybe credential
item
and just above this let's create a super
simple factory
credential logos which will be a type of
record which the key is going to be a
credential type which we can import from
generated Prisma
and I'm just going to go ahead and...
oh, I'm still using this. Okay, still some
leftover workflow thingies here. Let's
fix this like the following. Leave that
to be a type import and this will be a
normal one. So then you will be able to
use the credential type here for the
factory. Just make sure this matches of
course your public folder. And once you
have the credential logos here, let's go
ahead and do the following.
Const logo will be
credentialLogos[data.type],
which of course we have to modify here
because it's no longer going to be a
type of workflow, or fall back to
/logos/openai.svg,
and let's go ahead and render that. So
inside of the image, let's just go ahead
and use the image itself.
There we go. Image source logo, alt data
type, width 20, height 20. Let's see
what we have to fix here.
Uh element implicitly has any type.
Okay, something is wrong here. Let's
start with giving it a proper type. So
this should be a type of credential.
But I think credential might already be
taken. So let's import type credential.
Oh, I already have it. Okay, credential.
But I'm not sure if credential itself
might be taken as a constant somewhere.
Looks like it is not. So make sure your
credential item uses data with
credential as the type. And now let's
see what the problem is here. Probably
because I need to import image from next
image. And we can now remove the
workflow icon. And there we go. Now it
will basically display exactly uh what
type of credential it is. So, if I
create a new credential with open AI
type
and if I go back to credentials
it should render that. Let's go ahead
and search for something. There we go.
That works too. Perfect. Let's go ahead
and try Gemini. Now
let's create
back to credentials.
There we go. Gemini. Perfect.
So now we have to create this page right
here which will be super simple. Don't
worry because we already implemented
every single thing that we need for
this. We just have to go back to
credential.tsx.
So a single one where the form is
and go all the way down here. Export
const credential view
const params use params
const credentialId =
params.credentialId as string.
Use suspense credential.
Pass in the credential ID.
Destructure the data. Rename it to
credential
and return credential form.
But this time
give it some initial data.
This way credential form will be
rendered as an edit form. Be super
careful not to misspell credential ID.
So params.credential ID needs to be
exactly as you have written it here.
Credential ID. Capitalization matters.
Triple check that it works. Now that you
have the credential view here, let's go
ahead and go inside of our credential ID
page.tsx.
And in here, we should obviously render
the credential view like this.
So, let's go ahead and import it.
Credential view. And now that I think of
it,
why don't I just accept the
credential ID
prop
and use it. I think that's much simpler.
And then I can just use the loader to
pass in the credential ID.
Yeah, that's much simpler.
We can just do that. And then we don't
need use params.
Perfect. Now let's go back inside of the
page here. What we're going to have to
do now is we have to prefetch it. So
prefetch credential. Pass in the
credential ID. Make sure to import
prefetch credential from features
credentials server prefetch. So prefetch
a single credential. It only accepts the
ID. Perfect. And then in here, let's go
ahead and just do some styling. So the
styling will actually be, I think, the
very same like this. We can copy this
entire thing. Paste it like this.
Add two closing divs.
And then let's go ahead and let's add
the hydration client.
You can import this from the RPC server.
Then the error boundary from React
error boundary
for the fallback here. I'm just going to
reuse my credentials error. So yes, I'm
going to reuse the one from the
credentials,
not the one from my apologies from
components credentials
even though I'm mostly using the ones
from credential simply because I have no
idea how differently I would create the
one for a single credential. You can
import suspense from react
and you can also pass in the fallback
here to be credentials
loading.
So same import from the credentials
component.
All right, I think this should now work.
Let me go ahead and just try. So let me
zoom out just a bit. Oh, something's not
good here.
Let me see this issue. Okay, that's
something else. But inside of my
credentials, now if I go ahead and click
here,
there we go. And if I change this to
anthropic API like this and click update
and go back to credentials
there we go. updated less than a minute
ago but created nine minutes ago.
Anthropic API. Perfect. It is officially
working. We have the entire uh entity
structure for credentials. What we can
do now is we can go ahead and implement
that. So that here instead of our AI
nodes, we have a drop-down which will
allow us to select any of the uh
credentials that we have.
So, let's go back to one of the
dialogues. I think the easiest one to
try out is the Gemini one. So, I'm just
going to go ahead and add the Gemini
block. And I'm going to click save here.
I'm going to go ahead and keep it open
just so I can see what I'm developing.
Then, I'm going to go ahead and close
all of this code. I'm going to go inside
of source/features/executions/components,
Gemini dialog.tsx.
In here, I'm going to go inside of the
form schema and I'm going to add the
credential ID. So, after a variable
name, I'm going to add credential ID
with a message credential is required.
Then I'm going to go ahead and make sure
that I have it inside of my default
values here. So credentialId is
going to be default values
dot credential ID or an empty string.
I'm going to copy that and I'm going to
do the same thing in the form reset
here.
Now that we have that, we're going to
have to create the drop-down for our
credentials.
But this will actually be a little bit
easier because we can use our use
credentials by type hook. So right here
I'm going to do const
use credentials by type
and I'm going to select
CredentialType.GEMINI from generated
Prisma.
So make sure you have imported
credential type from generated Prisma
and use credentials by type from our
features credentials hooks use
credentials. From here we can
destructure the data and we can alias it
to credentials
and let's also get is loading here
and let's alias that is loading
credentials.
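Conceptually, useCredentialsByType resolves to the subset of the user's credentials matching one provider; the filtering contract can be sketched as a pure function (the types are assumptions standing in for the Prisma models):

```typescript
// Stand-in for the Prisma-generated CredentialType enum.
type CredentialType = "OPENAI" | "ANTHROPIC" | "GEMINI";

interface Credential { id: string; name: string; type: CredentialType }

// What the hook ultimately hands back: only the credentials whose
// type matches the requested provider.
function credentialsByType(all: Credential[], type: CredentialType): Credential[] {
  return all.filter((c) => c.type === type);
}
```
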
Now let's go ahead and go down here and
what we're going to do is we're going to
develop a new form field. So uh if we
have any existing select ones that would
be great. So we save some time. So I
think we have one inside of our
credential.tsx. So inside of the
credentials folder, credential.tsx, we
should have
select somewhere. Perfect. So find the
name type, form label type. Copy the
entire form field here. And now let's go
ahead and add it after variable name.
So just add the entire thing here. And
now we are going to resolve the errors.
So the errors are mostly
missing components.
So we can just add select select content
item trigger and value.
There we go. Now we can go here and we
can start to fix these.
So this should control the credential
ID
because that's the new one we've just
added. So this will be Gemini credential
like that and it will be disabled if
is loading credentials
or
if we have no credentials at all. So if
there is no credentials length
and now let's go ahead and give this a
placeholder
select a credential
and now, inside of the select content,
we're going to do credentials?.map.
And this will be, my apologies, key will
be option.id and value will be option.id
as well. So make sure you're using ID
here.
And source will actually be hardcoded to
/logos/gemini.svg.
Alt will be Gemini.
Option here will be credential or option
name. Let me rename the option thingy to
credential
and make sure to import image from next
image. There we go. So just like that
you have added a new property to the
dialogue.
I'm going to go ahead and open this. And
there we go. I can now select a
credential that I want to use with
Gemini.
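The disabled condition on that select is worth calling out; as a tiny sketch, the trigger should be disabled while credentials are loading or when the list is empty:

```typescript
interface CredentialOption { id: string; name: string }

// Disable the credential <Select> while loading, or when the user has
// no credentials of this provider type to choose from.
function isSelectDisabled(isLoading: boolean, credentials?: CredentialOption[]): boolean {
  return isLoading || !credentials?.length;
}
```
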
Perfect. But we're not done yet.
What we have to do is we have to visit
the executor. So inside of features
executions components gemini
executor.ts.
Let's go ahead and first things first
extend this with a credential ID.
And then we can go ahead and check if we
have it or if we don't have it. And we
can throw an error immediately.
So just as we check if we don't have the
variable name or the user prompt, we can
also check if we don't have credentialId:
"Gemini node: credential is required",
immediately throw a non-retriable error,
and publish an error status.
So how do we fetch the credential? Well
we can now remove this to-do because we
already do that. And now this is where
the magic happens: const credential is
going to be await step.run
("get-credential").
Then let's return prisma, which we have
to import from lib/database:
.credential.findUnique,
where,
pass in the id: data.credentialId.
If there is no credential,
throw a new NonRetriableError for the
Gemini node: credential
not found.
Now we can remove this credential value
and we can just use credential.value
from here.
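The shape of that executor step can be sketched like this, with the Inngest step and the Prisma client stubbed out so only the fetch-and-validate logic shows (NonRetriableError is simplified to a local class):

```typescript
// Minimal stand-ins for the Inngest step API and the Prisma client.
interface Step {
  run<T>(name: string, fn: () => Promise<T>): Promise<T>;
}
interface CredentialRow { id: string; value: string }
interface Db {
  credential: {
    findUnique(args: { where: { id: string } }): Promise<CredentialRow | null>;
  };
}

class NonRetriableError extends Error {}

async function getCredential(step: Step, db: Db, credentialId: string): Promise<CredentialRow> {
  // Durable step: Inngest memoizes the result so retries don't re-query.
  const credential = await step.run("get-credential", () =>
    db.credential.findUnique({ where: { id: credentialId } }),
  );
  if (!credential) {
    // Non-retriable: retrying won't make a deleted credential reappear.
    throw new NonRetriableError("Gemini node: credential not found");
  }
  return credential;
}
```
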
Now there is a question should we just
fetch this you know without user ID.
Well the thing is this isn't exactly a
public API. This is a background job
within Inngest. So even if we fetch this
credential uh we are not returning its
value anywhere. We are just fetching it.
Right? Imagine this is a web hook call.
We also wouldn't exactly know if it was
a user who executed this or something
else. Right? So I think this should be
completely fine to do, and we don't need
the user ID here to fetch with row-level
security,
and in here we check if we don't have it
and we throw the error. Perfect. So
let's try running it exactly like this.
I'm going to go ahead and call this my
Gemini
credential will be this one. Make sure
you have at least one Gemini credential.
Uh test.
Hi there. How are you? Let's click save.
Let's click save here.
Let's go ahead and execute workflow. And
what I expect is for the workflow to
fail because of an invalid API key.
Exactly. Please pass a valid API key.
So now I'm going to go inside of my
credentials here. Inside of this one, I
will change this to be Gemini uh
personal.
Gemini personal like this
and let's go ahead and go inside of
environment here and let me copy this
one
and let me paste here and click update
and then in workflows, let me just check
if the name was maybe updated. It's
still reflecting the old one. I think I
have to refresh. Yes, after a
refresh the updated name will appear,
because we don't invalidate this one after
updating. Maybe we could do that, but
you don't have to update anything here.
You can just execute again. You don't
even have to save it. But it should work
this time. Get credential. There we go.
Successfully completed.
We successfully implemented bring your
own keys. Users can now create their own
credentials and pass them along. you can
finally remove this from your code. Uh I
would suggest you keep it till the end
of the tutorial because I forget my
credentials so many times and I
accidentally delete it while testing. So
yeah, you can but I would suggest still
keeping it here. But finally your users
can now add their own credentials.
So just one thing I want to check inside
of source features credentials. I'm just
going to do one find in folder and
search for workflow.
All right. Inside of credentials.tsx
I'm still calling these workflows. So,
credentials pagination.
Let me show you what file it is.
Credentials folder, components,
credentials.tsx.
I'm going to change all four instances
to credentials.
There we go.
And I think that now if I go ahead and
rightclick find in folder workflow
workflows nothing successfully migrated.
Great. So what we need to do to finish
this chapter is do exactly what we just
did for Gemini. Right. Let me just check
if that's all we need. Inside of
node.tsx, I should also have
credential
ID
here.
There we go. So now I have the proper
types everywhere.
Now let's go ahead and implement the
same exact thing but for Anthropic. So
I'm going to go inside of anthropic
node.tsx.
I'm going to copy credential ID right
here. here and I'm just going to paste
it here. Perfect. I'm going to close
node in both of them. I will open the
Gemini dialogue and I will open the
Anthropic dialogue. I will go inside of
the Gemini one
and I'm going to copy the entire form
field here with the select option to
load the credentials
and I'm going to add that exactly in the
same place. So after variable name
of course it will be riddled with errors
because we don't have many of the
imports we need but we are going to
resolve that now.
Now I'm just going to go ahead right
here and I'm going to import image
credential type and use credentials by
type. So, all of these imports and I'm
going to add them to the anthropic
block.
Now, let's go ahead and go back inside
of the Gemini one and let's go ahead and
actually load credentials by type. So
I'm going to do the same thing here now.
There we go.
And a few things left to do is to modify
the schema. So copy credential ID.
Let's go ahead and paste it in anthropic
node. And already you can see no more
errors left. But there are a couple of
more things. The default values. So
let's just go ahead and add that.
Default values. Perfect.
And form reset. Perfect.
Let's check if we did this correctly
inside of workflows.
right here. I'm going to delete this
node and I'm going to add the Anthropic
node like this. Well, I can just open and
see. The problem is it's fetching Gemini
ones. So, let's go inside of the Anthropic
node and let's just make sure that when
we fetch them, we're using
CredentialType.ANTHROPIC this time. So now
it should only fetch Anthropic credentials,
which also means we have to modify this
logo to /logos/anthropic.svg and
Anthropic as the alt.
There we go. Now we can select our
anthropic API. Perfect.
We can now close both dialogues and
let's instead open both executors.
So let's go ahead and go inside of
Gemini executor and let's copy the
credential ID and let's add it to the
Anthropic data after variable name. Then
let's go ahead and throw an error if
credential ID does not exist. So after
variable name inside of Anthropic, go
ahead and throw the error. Just make
sure that you're throwing it for the
Anthropic channel. There we go. Change
this from Gemini node to Anthropic node.
And looks like I accidentally left over
open AI thingies here. So feel free to
fix that as well if you want to. We can
now remove this to-do. Then let's go
ahead inside of Gemini executor and
let's go ahead and
copy these two things. Basically an
option to fetch the credential. So we
can now remove this part. Paste it here.
We have to import Prisma from lib
database
Anthropic node: credential not found. You
can remove the credential value and you
can use credential dot value this time.
There we go. And I think that's it. Just
make sure you've imported Prisma. Make
sure you're using the Anthropic channel.
Great.
So this should now work. Uh, now I'm
going to go ahead and do the exact same
thing but for Open AI.
And looks like we forgot the title here.
So it still says Gemini credential. So,
dialog.tsx
in the anthropic folder: it should not
say Gemini credential, it should say
Anthropic credential. Great.
So, I'm going to go ahead and
do the same thing for OpenAI by going
inside of Node.tsx
and I'm simply going to add credential
ID
string. That's it. Then I'm going to
open the dialogue for OpenAI and I'm
going to open the dialogue for Gemini.
This time I'm going to start by adding
all the necessary imports. So, image
select credential type and use
credentials by type. And I'm just going
to go ahead and add all of them inside
of the OpenAI dialog.tsx.
Then I'm going to go ahead and modify
the form schema by adding the credential
ID after the variable name.
Then I'm going to go ahead and fetch the
credentials.
I'm going to now add this to the open AI
dialogue. There we go.
Now I'm going to go ahead and make sure
the default values load the credential
ID
and the reset form loads it as well.
There we go. And now I'm going to go
ahead and I'm going to copy the entire
form field which renders the
credentials. So I'm going to copy that
entirely. I'm going to find the form
field after variable name and I'm going
to just paste everything here. I'm going
to fix the indentation.
And since we already added all the
imports above, there are not many
errors. Let's just change this to OpenAI
credential, select a credential,
/logos/openai.svg, and OpenAI.
Perfect. So, let me go ahead and test
this out. I'm going to add an open AI
node.
There we go. Uh, but it's still fetching
Gemini ones. I forgot to change that.
Credential type open AI. Make sure
you're doing this inside of Open AI
dialogue inside of Open AI folder.
And you can see that now I have only one
result right here. So naturally, if you
didn't have any open AI credentials, you
would simply have an error. I mean, not
an error, you just wouldn't be able to
fetch any. So just make sure you have at
least one of each so you can test
properly.
And now what we have to do, the dialogue
is finished. Let's open the executor for
Open AI. And let's open the executor for
Gemini to wrap this chapter up. So I'm
going to close both dialogue components.
Now I'm going to start with the imports
here. So import Prisma, let me add that
here.
Then I'm going to go ahead and add the
credential ID to my Open AI data.
Then I'm going to go ahead and throw an
error if data credential ID is
unavailable.
There we go. I'm going to make sure that
I'm using the OpenAI channel to emit
that update to the front end. Then I'm
going to go ahead and fetch the
credential and throw if it is not found.
So I can now remove this to-do and paste
that here. I can remove the to-do here
because I'm throwing. Now let's go ahead
and change this to be open AI node
credential not found. Finally, I can
remove this and I can use credential
dot value.
There we go. I think that is it. That's
all we have to do. So, we can now go
ahead and close everything here. We
successfully implemented the entire
thing we outlined here. We added the
client, we added entity components for
pagination, for search, and for other
things. And we even added a credential
dropdown and we even added it to the
executor. So we have a proper bring your
own keys method. Now let's go ahead and
push this to GitHub. So 25 credentials.
I'm going to go ahead and create a new
branch. 25
credentials.
I'm going to go ahead and stage all of
my changes. 25
credentials.
I am going to commit and I am going to
publish this branch. And now let's go
ahead
and let's review our pull request. Since
this was a large one, I'm going to go
ahead and let Code Rabbit do it for us.
And here we have the summary by Code
Rabbit. New features. We added
credential management system for storing
API keys for OpenAI, Anthropic, and
Gemini. Users can create, view, edit,
and delete credentials through a new
management interface. We added search
and pagination for the credentials list. AI
nodes now support selecting stored
credentials instead of using environment
variables. So exactly what was the goal
of this chapter to allow users to bring
their own API keys.
Now in here I think the thing we should
focus on the most is all of these uh
comments that code rabbit left. So 14
actionable comments. Of course you can
pause the screen right here. I always
think these are super useful if you are
struggling to understand how our uh code
is working. So using these sequence
diagrams you can pause the screen and
you will uh figure out exactly what
happens when a credential is not found
when a credential is found etc.
Uh so let's go ahead and maybe we can
read through this here. So, executor
integrations: first verify that
credential fetching, validation, and
error handling are consistent across
OpenAI, Anthropic, and Gemini; ensure
credentialId is properly threaded from
dialogue, node, and executor. TRPC router
security: confirm that all endpoints
properly enforce the context auth user ID
to prevent users from accessing or
modifying credentials belonging to
others.
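One way to enforce that ownership rule at the data layer is to scope every lookup by both id and userId, so another user's credential simply comes back as not found. A sketch with an in-memory stand-in for the Prisma query (the userId field is an assumption about the schema):

```typescript
interface CredentialRow { id: string; userId: string; value: string }

// Stand-in for prisma.credential.findFirst({ where: { id, userId } }).
// Requiring both conditions is what blocks credential ID injection:
// knowing another user's credential id is not enough to fetch it.
function findOwnedCredential(
  rows: CredentialRow[],
  credentialId: string,
  userId: string,
): CredentialRow | null {
  return rows.find((r) => r.id === credentialId && r.userId === userId) ?? null;
}
```
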
Database schema integrity: verify foreign
key cascading behavior, cascade on user
deletion, set null on credential. Yes,
this is if you remember this is what we
exactly what we discussed when we
started developing. Uh that's why I
wasn't sure but even code rabbit says
that that's what would be the correct
choice here set null on cascade
otherwise you will not be able to delete
a credential which is assigned to a node
or vice versa. So because of that set
null would be the correct option. So yes
we could definitely add this in the next
chapter
and form validation and error handling
here. So let's take a look at the
comments here. So some things are not
true simply because uh there are newer
versions of zod than for what AIs were
trained on. For example native enum is
as far as I know deprecated and you
should use z.enum now. So this is correct
and it works correctly for us. So we don't
have to switch to native enum here at
all. In here it's telling us that we
have different types of uh different
invalidation patterns. Somewhere we're
using query filters, somewhere we're
using query options. Uh I think this is
because that's the demonstration I found
on TRPC website. So that's why in
invalidation I'm using query filter but
in fetching I'm using query options. I'm
not really sure that it matters. I think
most will work. And yes, we forgot to
throw any toast errors here on error. We
could add that. Sure. Like this.
Then let's go ahead about this. I think
this is the same thing. z.enum is fully
supported and it works. And of course
it is addressing the security
vulnerability. Our values are stored in
plain text. And in here it is suggesting
using a library like this to uh encrypt
it which is interesting. I have not
looked into this library right here.
Perhaps you could look into it and see
if that is something that would be a
good idea for your project.
Now let's go ahead and I think the same
thing is here, native enum. So
completely supported in our case. Same
security vulnerability API key stored in
plain text.
Now let's go ahead. Okay, again the
native enum thing here. Again, yes, it
suggests using value instead of default
value. I think I'm using the example from
the shadcn documentation, so it works, so
I'm not going to change it. And this
is important. I think I had a whole
monologue here about how we don't need
to check if the tenant is allowed to
access this credential, but what I forgot
is ID injection, basically using
someone else's credential. I completely
forgot about that. So yes, we should
definitely also add a way to validate if
this user is allowed to fetch that
credential. So I was only thinking about
one side being secure, but I forgot that
the user can be malicious as well. So
yes, very very good catch here. I did
not think about ID injection. We should
find a way to always verify what user is
triggering the workflow and then confirm
whether user is allowed to fetch the
credential or not. Then, in the Anthropic
executor, if I'm missing a credential, I
throw the non-retriable error but I
forgot to publish the status for the
error it seems. And I think I also
forget to do the same in Gemini. So same
problem in all three obviously user ID
context and I throw this without doing
publish. So yes I should probably update
all of that and then just repeated
comments for this case. Same thing maybe
some oh incorrect providers. It looks
I'm writing Gemini inside of Open AI. So
yes, small typos here and there. Uh
missing status publishes. Yeah, we
should definitely do this to give users
a better experience. Uh, and in here
yes, it's basically telling us that we
should consider encrypting our API keys.
So, what I did in my previous project
was I used AWS encryption. Actually,
you can also do it yourself using
database-level encryption. But using
AWS Secrets Manager is actually
surprisingly easy. The hardest part is
configuring the IAM profiles. So, yeah,
consider researching into AWS secrets
manager or the open source package it
recommended above. Great great comments.
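If you go the database-level route instead of a managed secrets service, the usual pattern is authenticated symmetric encryption. A minimal sketch using Node's built-in crypto module and AES-256-GCM — the key here is generated inline just for the example; in practice it would come from an environment variable or a secrets manager, never hardcoded:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Stand-in for a 32-byte key loaded from the environment or a secrets manager.
const key = randomBytes(32);

function encrypt(plaintext: string): string {
  const iv = randomBytes(12); // GCM standard nonce size
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store iv + auth tag + ciphertext together in one column.
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decrypt(stored: string): string {
  const buf = Buffer.from(stored, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

You would call encrypt before writing credential.value and decrypt inside the executor after fetching it; GCM's auth tag also means a tampered ciphertext fails to decrypt instead of silently returning garbage.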
I will mostly focus on fixing the ID
injection in the next chapter and fixing
the missing channel updates. So for
now let's go ahead and go back into the
main branch. Let's go ahead and click
synchronize changes and let's click
okay. Then let's go ahead and open our
graph to make sure we have merged
everything. Perfect. That seems to be
correct. A lot of new things we've
learned today. A lot of things we've
learned what we should do once we go
into production as well, right? You can
see what the sentiment around storing
plain-text API keys is: it is considered
a security risk, of course. Then again,
you will meet some people who will say
that keys can easily be rolled, so
encryption is optional. But I think that
since users are trusting you with their
data, you should find a way to encrypt
them. I will
try my best at the end of the tutorial
to give you some proper recommendations
about how to do this. And again you can
literally use my previous tutorial and
it shows exactly how to do it. Uh it
will depend on how much time we have for
this one. Amazing, amazing job and see
you in the next chapter.
In this chapter, we're going to
implement Discord and Slack nodes. So
far, we've learned how to implement
trigger nodes such as manual execution
Google form event, and Stripe event.
We've also learned how to create an
execution transformation node like
OpenAI, Anthropic, and Gemini. But one
type of node we haven't implemented yet
is messaging node. So in this chapter
I'm going to teach you how to implement
two of them, Discord and Slack. And this
will give you enough knowledge to add
any other messaging nodes that you
prefer like WhatsApp, Telegram or
anything similar because most of them
work in exactly the same way. But before
we do that, I do want to resolve some of
the issues Code Rabbit mentioned in the
previous pull request review.
The most important one of those is
credential ID injection. So what can
currently happen? Let's say the attacker
creates a workflow in our app and they
go ahead and select their Gemini
credential. While this manual execution
will, at the moment, really send the ID
that's written right here, this is just
front-end validation. It can very easily
be bypassed, because anyone who is handy
with the console can inject any ID they
want into our network request and
start a background job which would then
use someone else's uh credential ID. So
the only thing the attacker has to find
out is someone's credential ID. For
example, maybe they are making a video
and they open their credentials right
here. They click here and then you can
see I have this ID up here and there we
go. The attacker doesn't even have to
know the API key because all we actually
need is the credential ID. So if they
can somehow inject that right here
obviously this now doesn't make sense
because this is my credential ID. But
Imagine I have another account and I
just stole someone's credential ID. I
could very easily spend their tokens. So
let's go ahead and think of a way to fix
that. The problem is basically right
here. So I just open the Gemini executor
and in this step where we fetch the
credential. As you can see the only
thing we do is we just pass along the
credential ID. So this can be injected
use attacker can add anyone's credential
ID here and then we would simply spend
their API key. So let's do a quick fix
of that. The one that I think is the
simplest and easiest we can do is by
revisiting our functions inside of the
inngest folder. So: inngest folder,
functions.ts. Let's go ahead and do
const userId = await
step.run("find-user-id", async () => {
... }). Let's go ahead and fetch the
workflow inside of it, like this.
We can just repeat that, and the only
thing we actually want here is select: {
userId: true }. That is the only thing
we are interested in. And let's return
workflow.userId. We are using
findUniqueOrThrow, so this cannot be
undefined. There we go. Now we have user
ID which has to exist. And then what
we're going to do is very simply uh
anywhere just add user ID. You can add
it anywhere because this is an object
right? If we were just using a plain
params then the order would matter. But
in this case, the order does not matter.
You can just add it wherever you want.
So just pass in user ID here in the
executor. And now let's go ahead and
find out how this executor types can be
updated. So I'm going to commandclick
inside of getExecutor. That will take me
to features/executions/lib, executor
registry. And in here I find
NodeExecutor, and in here
NodeExecutorParams gives me what I need.
So I'm just going to extend it by adding
userId and making it a string. There we
go. So now
as you can see, I no longer have any
errors here and I can safely pass along
user ID to every single executor. So now
let's go back and let's revisit
something here. So inside of source
features executions
let's go inside of components, Gemini
executor.ts. We can now go ahead and
double-check that this is correct by
also passing the user ID. And let me
just check how... oh yes, very simple:
in the Gemini executor, not in data,
just userId. As simple as that.
So now, if the attacker somehow manages
to get a hold of someone's credential
ID, they will have an additional problem
they need to resolve: they also somehow
need to spoof the user ID. So
technically this isn't 100% protected
still. You could still somehow inject
the workflow ID, but at least it is no
longer given to you on a plate to just
enter any credential ID. At least we are
making the job a little bit harder right
now.
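The whole fix boils down to scoping the credential lookup by the workflow owner's user ID, which is resolved server-side rather than trusted from the client. A minimal sketch of the pattern, using an in-memory map as a stand-in for the Prisma client (the names `getCredentialForUser` and the shape of `Credential` are illustrative, not the tutorial's exact code):

```typescript
// Minimal stand-in for the relevant Prisma model.
type Credential = { id: string; userId: string; value: string };

// In-memory "database" used only for this sketch.
const credentials = new Map<string, Credential>([
  ["cred_1", { id: "cred_1", userId: "user_owner", value: "sk-secret" }],
]);

// Scope the lookup by BOTH the credential ID and the owner's user ID.
// If an attacker injects someone else's credential ID, the lookup fails,
// because the userId (resolved server-side from the workflow) won't match.
function getCredentialForUser(
  credentialId: string,
  userId: string,
): Credential {
  const credential = credentials.get(credentialId);
  if (!credential || credential.userId !== userId) {
    // Do not leak whether the credential exists for another user.
    throw new Error("Credential not found");
  }
  return credential;
}
```

With Prisma this is the same idea: put both `id` and `userId` in the `where` clause of `findUniqueOrThrow`, so an injected ID that belongs to someone else simply throws.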
So, what I would suggest after you do
this is just go ahead and try something.
Make sure you save this. Maybe do a
refresh. Of course, make sure you have
your app running here.
I'm going to refresh. And I'm just going
to try and run this just to make sure
it's still working, that I didn't
accidentally mess something up. So, this
is working. And let's see. That is
working. Beautiful. So what you would do
now is you would also go inside of Open
AI executor right here. And you can see
that it's super simple now because every
single one of these now has access to
user ID. Some of them might not need it
but in the ones where we do need it, it
is very useful. User ID and just user
ID. There we go. And yes, if you want
to, you can also go ahead and publish
errors like this whenever they happen.
And I think I forgot to do that
in Gemini here. If credential is not
found, make sure you publish the error.
Just make sure you're using the proper
channels. There we go.
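The pattern here is always the same: publish the error status on the channel first, then throw, so the UI leaves the loading state before the job dies. A small sketch with stand-ins for the real Inngest publish and channel APIs (the `StatusEvent` shape and `requireCredential` helper are illustrative):

```typescript
// Stand-ins for the real Inngest publish/channel APIs — illustrative only.
type StatusEvent = { nodeId: string; status: "loading" | "success" | "error" };
const published: StatusEvent[] = [];
const publish = async (event: StatusEvent) => { published.push(event); };

class NonRetriableError extends Error {}

// Publish the error status FIRST so the UI leaves the loading state,
// then throw so the background job stops without retrying.
async function requireCredential(
  nodeId: string,
  credentialId: string | undefined,
): Promise<string> {
  if (!credentialId) {
    await publish({ nodeId, status: "error" });
    throw new NonRetriableError("Credential is required");
  }
  return credentialId;
}
```

If you throw without publishing, the node in the editor stays stuck in the loading state even though the job already failed, which is exactly the bug the review caught.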
And, instead of OpenAI, did I
accidentally leave the Gemini node name
here? Let me check it. So: OpenAI node.
And then the same
thing in anthropic in the executor right
here.
You just add user ID
and then in the credential you pass
along user ID and let's just go ahead
and copy this right here and make sure
you are emitting error for this stage.
Great. Uh I think that's exactly what
I've outlined here. Let's check. So we
fixed missing channel events. I fixed
one invalid node name in the logs for
OpenAI, and we fixed the credential ID
injection. Great. Now, let's go ahead
and let's focus on creating uh some new
nodes. So, the first thing I want you to
do is go to my nodebase assets folder.
And in here, you can find Discord and
Slack. So, just go ahead and add that to
your public folder. I'm going to go
ahead and go here. Inside of our public
folder, we should have logos. So, I'm
just going to go ahead and copy Slack
and Discord and paste it here. So, you
should now have Slack and Discord.
Great. I'm going to start with the
Discord node. So, I would suggest that
before you do that, you at least create
an account on Discord so you can test it
out. It doesn't matter whether you're
going to keep using it or not; it will
kind of give you the idea of how this
whole messaging platform works. It's free,
it's simple, it's fast. So, just make
sure you have an account for Discord so
you can test this out properly.
So, I'm going to go ahead and copy one
of the existing executions here. Uh, let
me copy Gemini and paste it here. And
let me rename this to Discord like this.
And I just remembered, of course, we
also have to update our Prisma schema.
So, let's find our node types here.
Let's add Discord and let's add Slack.
Once we've added those, as always, npx
Prisma migrate dev. And let's give it a
name, Discord Slack nodes.
Discord Slack node. And that should
synchronize the database. Great. We can
now close that. And as always, I
recommend you restart your Next.js and
your Inngest server.
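For reference, the schema change is just two new values on the node-type enum (the enum name and existing values follow the pattern from earlier chapters, so yours may differ slightly):

```prisma
enum NodeType {
  // ...existing values from previous chapters...
  DISCORD
  SLACK
}
```

Then `npx prisma migrate dev` with a migration name like `discord-slack-nodes` applies the change and regenerates the client.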
All right. Now that we have them ready,
you can also refresh localhost:3000, as
you should every time you restart your
server.
Let's go back inside of this one.
Discord, we just copied it. So, I'm
going to start with node.tsx.
Make sure you are working in the Discord
folder. This will no longer be Gemini
node. This will now be discord node. So
we are just doing this whole rename
thing that we keep doing. Discord node
not data, just Discord node. Great. Now
let's go ahead and start by changing the
base execution node to use Discord SVG
and name Discord. There we go.
And I want to leave it like this for now
simply so we can see it being added
here. So now we have to visit our node
components located inside of source
config
and then inside of here let's add node
type discord discord node. You should be
able to import it from features
executions components discord node. Then
we have to go to our node selector
located in source components node
selector. Let's go ahead and copy this.
Give this a node type of Discord label
of Discord.
And let's go ahead and do send a message
to Discord. And make sure you're using
Discord. SVG.
And now if you go ahead and click on the
plus button, you should find Discord.
Here it is. Amazing. Obviously when you
double click on it, it will still use
the Gemini configuration. So let's go
ahead and fix that. I'm going to close
everything and I'm going to go back
inside of node right here. So, let's
start by changing the data right here.
Instead of all of these, we're going to
have web hook URL. We're going to have
content and username.
You can probably go with even less of
these, but I think these are sufficient
enough to make it fun and customizable.
Now let's go ahead and this is all good.
Um
for the description let's do the
following.
It will be node.data.content: if we have
the content, we're going to say send
with node.data.content.slice(...),
otherwise "Not configured". And
yes, obviously this is now throwing some
errors because uh well we're not using
the proper dialogue at all. So let's go
inside of the Discord dialogue.tsx and
let's start with the form
schema. So uh variable name will still
stay the same. Credential is not
required. None of these are actually
required. So let's start with a simple
one and an optional one. Username. So
username for the bot. The bot can be
named whatever you want. Then let's go
ahead and let's add content. So content
will be a string with a minimum and
maximum length here. This is an API
limitation: Discord caps message content
at 2,000 characters. So you can add a
limit like this. And now we need a web
hook URL. So you can be as lenient as
you want with this.
For example, you can just make this a
string and make it web hook URL is
required. And this is not how you do
that. My apologies. So why use a string
here? Why not at least do URL? Uh well
remember we have an option to use
templating language. So if you mark this
as a required URL, you will not be able
to use uh any templates, right? Maybe
you want to load the web hook URL from
the previous node. Right? So that's why
we're using string and not zod URL here.
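Shape-wise, the form rules are simple. Here's a plain-TypeScript stand-in for the zod schema (the real dialogue uses zod; this just makes the rules explicit, and the key point is that webhookUrl stays a free-form string so template expressions keep working):

```typescript
type DiscordFormValues = {
  variableName: string;
  webhookUrl: string;  // plain string, NOT a URL type, so templates work
  content: string;     // Discord caps message content at 2,000 characters
  username?: string;   // optional override for the webhook's default name
};

// Returns a list of validation errors; empty array means the form is valid.
function validateDiscordForm(values: DiscordFormValues): string[] {
  const errors: string[] = [];
  if (!values.variableName.trim()) errors.push("Variable name is required");
  if (!values.webhookUrl.trim()) errors.push("Webhook URL is required");
  if (values.content.length < 1) errors.push("Message content is required");
  if (values.content.length > 2000) errors.push("Message content is too long");
  return errors;
}
```

Note that a templated value like `{{trigger.url}}` passes validation here, which is exactly why we avoid a strict URL check.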
Great. So we now have that and let's go
ahead and fix these. So this will be
username. This will be content and this
will be web hook URL.
Let's copy them. Let's add them here.
There we go.
We can leave the batch watch variable
name.
Let's change this to be discord
configuration.
And let's go ahead and change the
description to be configure the Discord
web hook settings for this node.
Now let's go ahead and change each
field. So variable name should be my
Discord. That's good.
Let's go ahead inside of the... oh,
also, we should probably change it here:
yes, my Discord.
Now, in here, it's no longer going to be
credential ID. It will be web hook URL.
So, web hook URL. And this should be
much simpler.
So, we can remove select entirely. We
don't need it. Inside of form item, open
form control like this. And just render
a normal input. So, very simple. and add
a placeholder and spread field property.
So if you want, you can give a
placeholder just to give your user the
idea of what they're supposed to add
here. Then let's do form description.
And in here, you would basically explain
to your user how to do this. So get this
from Discord channel settings
integrations, web hooks, right? Whatever
you prefer, however you prefer to convey
your user this information.
Uh again, I have no idea how to make
this. Uh I just copied it from
somewhere.
Now let's go ahead and do the content
one. And this one will be a text area.
So let's go ahead and demonstrate to our
users how they can for example use
variables here: a summary and then maybe
the AI response, right? Just to remind
them that they can do that, for example
{{myGemini.text}}. Something like that.
And
in the form description again we're just
you know explaining things. The message
to send use variables for simple values
or JSON variable to stringify objects.
Uh and then let's go ahead and let's add
let's copy the input one here because
it's the most similar one form field.
And let's replace the last one here.
And instead of web hook URL, this will
be username bot username optional.
There we go.
Let's go ahead and add a placeholder
here.
Workflow bot.
And then we can make the description be
well descriptive. So this will be used
to override the web hook's default
username. Uh, and of course, let's
rename this from Gemini dialogue to
Discord dialogue. And now we can remove
the use credentials by type hook. We no
longer need that. Which means that we
can remove image. We can remove select.
We can remove credential type and use
credentials by type. There we go. Much
simpler. Now
now let's go back to the node and we can
now import Discord dialogue from dot /
dialogue. And we can remove the Gemini
dialogue. Looks like I still have this
called Gemini. So I'm just going to go
here,
change this to be discord form values.
And let's go ahead and add them here.
And let's use them here. There we go.
And this is how it should look like now.
Variable name, then web hook URL. Uh oh
this is still called system prompt.
Whoops.
That should be called content. So inside
of dialogue, let's just make sure to
rename that.
It's not optional. It is content. Uh
let's be more descriptive. Message
content. There we go. I think that makes
it clear what you're supposed to write
here. Basically, the message that will
be sent, right? Uh very, very good. Now
that we have that, let's go ahead and
create the channel so we can create the
real time execution thingy here. So I'm
going to go inside of source
inngest channels. I will copy Gemini,
paste it here, rename it to Discord.
I will rename this to Discord channel
name, Discord execution. And this will
be called Discord channel. Now that we
have that, we can go back. My apologies.
We can go inside of inngest functions.ts
and we can register the new Discord
channel.
So, just make sure you import that.
Great. Now, let's go ahead inside of
features. Let's go inside of executions
components, Discord, actions.ts. Let's
go ahead and rename these to Discord.
So, Discord token and fetch Discord
real-time token and all instances to be
discord channel from channels Discord.
There we go. Then we can go ahead and go
back inside of node in the Discord
folder.
And we can fix this to be discord
channel name and fetch Discord realtime
token and remove fetch Gemini realtime
token from actions. And then we can
remove the channels Gemini import.
Perfect.
Now that we have that, we can finally go
inside of the executor right here.
So let's go ahead and let's start with
renaming uh the data here.
Instead of Gemini data, this will be
discord data. It will still have
variable name, but alongside that, it
will have three new items. This is a web
hook URL content and username. Perfect.
And let's rename this to Discord
executor. I like this. This time I don't
think we will need user ID. So we can
remove it because it will be unused.
Let's go ahead and start by uh changing
the inest channel to use Discord one
discord channel.
So replacing all instances
to use Discord
channel. Basically just replace every
single instance to use Discord channel.
And maybe some of them are unnecessary;
we're going to fix them
now. So that's what I did. I just
basically replaced eight instances of
what was previously Gemini channel to
Discord channel. So we start with
loading. We throw an error if variable
name does not exist. And then we can go
ahead and instead of throwing errors if
credential ID doesn't exist or if user
prompt doesn't exist with something
simpler.
We're going to check if web hook URL is
missing. So we throw an error like this.
And we're going to check if content is
missing. So message content is also
required. Make sure to emit an error.
And now let's go ahead and remove this
because we're not going to need that.
And let's do const rawContent =
handlebars.compile(data.content)(context).
Then const content. Let's go ahead and,
yeah, so the problem is
that the way Handlebars will compile
this will make it non-compatible with
Discord, because it HTML-escapes the
output. So I found that you can install
a package called html-entities, and then
you can decode it using that package;
that should improve how messages arrive.
So just import html-entities.
Let's go back here. So now that we have
content, let's do decode(rawContent).
And let's go ahead and set the username.
So let's check: did the user pass this?
If they did, let's
decode(handlebars.compile(data.username)(context));
otherwise undefined, and it will just use
the default from Discord. So basically,
we are allowing the user to use
variables to
set their system prompt. Uh oh, am I
still calling this system prompt? I
think I just haven't refreshed.
Oh no, it's actually the Gemini one.
Sorry. So let me add a Discord one. Here
it is. Yes. So basically you can use
variables here, here, and even here.
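Here's a tiny stand-in for the Handlebars-compile-plus-decode pair (the real code uses the handlebars and html-entities packages; this minimal interpolator and decoder only illustrate the flow, and the `context` value with pre-escaped entities just simulates what escaped output looks like):

```typescript
// Minimal {{path.to.value}} interpolator standing in for Handlebars.
function compileTemplate(
  template: string,
  context: Record<string, unknown>,
): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match, path: string) => {
    const value = path
      .split(".")
      .reduce((obj: any, key: string) => (obj == null ? undefined : obj[key]), context as any);
    return value === undefined ? "" : String(value);
  });
}

// Minimal decoder standing in for the html-entities package: Handlebars
// HTML-escapes its output, which Discord would otherwise render literally.
function decodeEntities(text: string): string {
  return text
    .replace(/&quot;/g, '"')
    .replace(/&#39;/g, "'")
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&amp;/g, "&");
}

const context = { myGemini: { text: "He said &quot;hi&quot;" } };
const raw = compileTemplate("Summary: {{myGemini.text}}", context);
const content = decodeEntities(raw);
// content is now: Summary: He said "hi"
```

Without the decode step, the message that lands in Discord would contain literal `&quot;` sequences instead of quotes.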
So now let's go ahead and remove
credential because we don't need it. We
don't need any of these. In fact
let's just empty the try entirely. Uh
actually empty it until publish success
like this. And let's just do const
result = await
step.run("discord-webhook", async () =>
{ ... }), like this.
Let's go ahead and do await
ky. Make sure to import ky from KY. So
the same thing we used in our HTTP
request executor if you remember KY. So
you should have it installed.
And now in the Discord executor, let's
do await ky.post(data.webhookUrl). Let's
go ahead and pass json: { content:
content.slice(0, 2000) }, because 2,000
characters is Discord's max message
length, and the username if we pass it.
Great. And then we're just going to go
ahead and return the context:
data[variableName] with
discordMessageSent set to true. Or, if
you want to, you can just say
messageContent and then pass in
content.slice(0, 2000), because that's
exactly what we are going to send.
Whatever you think is more useful to
see.
And then we can go ahead and publish
this as success.
And we can just return the result
because we are setting this here.
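Putting that step together, here is a sketch of the payload-building half of the executor (I'm using the global fetch instead of ky to keep the sketch dependency-free; the function names and the assumption that an omitted username falls back to the webhook's default are illustrative, but the JSON shape is what Discord's webhook endpoint expects):

```typescript
const DISCORD_MAX_LENGTH = 2000; // Discord's max message content length

type DiscordPayload = { content: string; username?: string };

// Build the JSON body for the Discord webhook endpoint.
function buildDiscordPayload(
  content: string,
  username?: string,
): DiscordPayload {
  const payload: DiscordPayload = {
    content: content.slice(0, DISCORD_MAX_LENGTH),
  };
  // Omit username entirely to fall back to the webhook's configured name.
  if (username) payload.username = username;
  return payload;
}

// Hedged sketch of the call itself (the tutorial uses ky.post(url, { json })).
async function sendDiscordMessage(webhookUrl: string, payload: DiscordPayload) {
  await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```

In the real executor this all runs inside the step.run callback, with the success publish and the returned context following it, as described above.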
Yeah, let's do it like that. Okay. And
now I think what we have to do
here is just change this check and move
it from here and just do it here.
basically inside of step.run, because
this is a new function scope, so
TypeScript's narrowing will not carry
over from the check we did above. Same
thing for this web
hook URL here. If you don't want to use
that, since we check it, let's go ahead
and move it right here.
There we go.
Perfect. So, that should now actually be
ready. And let's get rid of this Gemini
node. Discord node variable name is
missing.
Discord node web hook URL is required.
Discord node. Perfect. We can remove
generate text and create Google
generative AI. We can remove Prisma.
Great. So this is now ready to make a
call to the Discord webhook URL, to send
some message content using ky, and to
actually update the context with what
happened.
Now let's not forget to pass this to the
executor registry. So instead of
executions lib executor registry let's
add node type. Discord and add Discord
executor here. And we can copy and paste
this and just add it for Slack just so
we get rid of the error here. But make
sure you've imported the Discord executor
and feel free to use it twice for now.
We're going to change it later. Uh
great. So, I think this should work just
fine. Yes, so one thing we maybe don't
have to do is the return here, like
this; I think we can do it here. It
really does not matter where you do it,
but if you want to do it here, you also
have to do const result. Basically, you
can leave it exactly as it was. But I
was thinking
maybe it's simpler to move it outside of
the scope. Just make sure that you
understand what is the scope of this
function. Right? So you can see this is
where the function starts. Then we check
if we don't have this then we do the
post call. We check if we don't have
variable name and we throw the error.
And then we return this context data
variable name. But this doesn't end the
background job. We then publish the
success and we return the result
variable which is essentially just this.
So this is on line 87
where this ends. So just make sure that
you don't accidentally write this
outside of the scope because it might
return early or you might do something
incorrectly here. Great. So once we have
this ready, we have added it to our
executor. I think that's all we have to
do. So, let's just call this my Discord.
Let's go ahead and do
https://codewithantonio.com.
Hello world
nodebase bot. Let's click save. Let's go
ahead and remove this. Let's go ahead
and add this instead.
Let's click save and let's watch this
node fail obviously because it should
not be able to make a post request to
that uh thing. So let's go ahead and
wait for this. This is good. And then
this obviously fails. And if we take a
look at localhost
8288,
we can see that it failed. Fetch failed.
Yes, because this is an invalid URL.
Great. So how do we create the proper
URL? So let's try and find the exact way
to do it. So get this from Discord
channel settings, integrations, web
hooks. So in here I have a brand new
Discord account and I have a brand new
server. So let me try and find this.
Honestly, I'm learning this
as I go just as you are. So let me find
this is user settings
server settings.
Right click
maybe edit channel, integrations, web
hooks. Here they are. Create web hook.
New web hook. Oh, okay, it already
created one. So just choose this
one.
You can name it whatever you want. For
example, Spideybot channel is general.
Copy web hook URL. And then we should
just add that here. Let's click save.
Let's click save up there. Let me go
ahead and open my general channel. So I
have no messages here. Let me execute
the workflow again.
And
there we go. We successfully sent a
message from our uh Nodebase project to
here. Amazing amazing job. And you can
see that it's actually super simple. It
just needs a web hook URL. We
didn't even have to install any Discord
SDK or anything. It was super simple.
The hardest part was just repeating all
the code that we already had and uh
customizing it to be Discord related.
And the integration for Slack is exactly
the same and I'm like 90% sure it's the
same for WhatsApp, Telegram, Signal
whatever you might want to use. Uh, and
it also kind of gives you the idea of
how you would do it for something more
advanced like how to add it to Google
Sheets. Well, very similarly, I
suggest, right, you would just ping some
web hook that they offer or maybe if
it's a bit more complicated, you would
install some SDK and then the user will
have to add like their API keys or
something like that. But nothing you
haven't done before, right? That's why I
try to choose uh not like every single
node in the world, but enough of them
for it to be useful for you so that you
can add your own nodes because honestly
this video can go on forever. I can add
a billion nodes here. So let's go ahead
and now do the same for Slack.
And of course we can also check in here
in the completed tab how does the
finalization look like? And there we go.
So the variable is called my Discord and
message content was this.
So just to confirm that that is also
working as expected. Great. So let's go
ahead and do the exact same thing for
Slack. Shouldn't be too hard given that
it's almost exactly the same code. So
inside of features executions
components,
let's go ahead and copy Discord. Paste
it in components.
Rename this to Slack. And then I'm going
to go ahead and start with node.tsx.
And I will change these instances to
slack. And of course, I will change the
display name here. Then in the base
execution node, I will use slack image
and slack title. And this will actually
be exactly the same. So that's cool. And
now let's go ahead and do node
components inside of source config. So
source config folder node components.
Let's go ahead and immediately add slack
slack node. Make sure you import it. And
then node selector. Node selector is
inside of source components node
selector. So I'm going to go ahead and
copy this. I'm doing this inside of my
execution nodes.
Change this to slack. Send a message to
Slack.
Label Slack and Slack right here. And I
think that should be enough. We can now
click the plus button. If you scroll
down, you will find Slack right here.
But obviously, it uses a configuration
for Discord. So, let's go ahead and
resolve that.
Let's go inside of Discord
and let's go inside of dialogue.tsx,
and let's start by checking if anything
needs to change in the form schema.
So this will actually be a little bit
simpler, because we're just not going to
have username. Okay, so we have content,
and
in here I don't know what Slack's
maximum limit is, so you can remove it
if you want it to be longer than 2,000.
Then, discordFormValues: let's of course
change uh these three instances to
Slack. So Slack form values Slack
dialogue. And in here just remove
username. We don't need it. And here as
well. And then you can also remove the
input for the username here. We don't
need username.
And I think that is it. We only need to
obviously change how to do this. Um, so
there are a couple of ways that you can
implement web hooks within Slack. You
can do it by creating a new app or you
can also do it by creating uh web hooks.
My apologies, not web hooks, workflows.
I'm not sure what the difference is
between the two, but I did find that if you
want to use workflows it is almost as
simple as with Discord, which was just
right-clicking on the channel: edit
channel, integrations, add new. So
I'm going to go ahead and change this to
get this from Slack channel settings and
let's go ahead and do workflows web
hooks. Maybe this would be workspace
settings. Actually
I think that's kind of what it would be
here. Uh, we can change the placeholder
here
just to tell the user like kind of what
we would expect to have. And let's go
ahead and change the title to be Slack
configuration and configure the Slack
web hook settings for this node. Change
this to be my Slack.
and in here to my Slack. There we go.
Now we can go back instead of node.tsx
we can change Discord dialogue to be
Slack dialogue and we can change Discord
form values to be Slack form values. Now
let's modify Discord node data to not
have username because we don't need it.
There we go. And now let's go ahead and
close this. Let's double click here. Uh
let's save this actually so I don't lose
this Slack node. Then let me refresh.
And then let me double click here.
Uh it still says
ah did I just change the entire thing
here?
Yes. Again I have modified Discord
dialogue instead of Slack dialogue. Make
sure the same thing is not happening to
you here. Uh, so yes, looks like I've
been doing the entire modification
inside of a wrong dialogue. My apologies
for that. I hope you understood what I
was supposed to be doing. So, I'm just
going to revert my changes. Uh, my
apologies. This is already a long
chapter, so I'm getting confused. So
instead of Discord dialogue, what I'm
going to do is I'm just going to control
Z and just return things here.
There we go. like nothing ever changed
and bring back username here.
As simple as that.
And then I'm going to go back inside of
my node, which is my obviously my
Discord node.
And I'm just going to replace Slack
dialogue back with Discord dialogue and
Slack form values with Discord form
values. My apologies for that. I think
this is twice that it happened. uh these
components are so similar that I'm
getting confused. Now I'm going to go
back inside of source features
executions components Slack and let's go
inside of dialogue of Slack. Let's
remove the username here. Let's remove
the maximum here. Uh let's add a comma
here. Let's rename Discord form values.
Actually all instances of Discord to be
Slack. Make sure you are doing it in the
Slack folder. Don't make the same
mistake that I did. You can remove
username from here. From here, you can
change this to be my Slack.
Change this to be Slack configuration.
Configure the Slack web hook. Change the
placeholder to be my Slack.
You can go ahead and you can remove
form field for the username.
There we go.
Now that we have that
uh let's also change uh how you get
this. So get this from Slack workspace
settings
workflows web hooks.
Then let's go ahead inside of node of
the slack folder let's go ahead and
replace instance of discord dialogue
with slack dialogue and discord form
values slack form values. There we go.
All right. So, sorry about that. And now
you can see it says Slack configuration
right here. And it doesn't have uh the
username field. So, just to clarify, no
you don't have to modify anything in the
Discord folder anymore. We finished with
it. It's working. If you did, so sorry.
Just bring back the username here.
Rename this back to Discord. Uh, make
sure you're using username here. Make
sure you're using it here. This should
be called my Discord. These two
instances as well. This as well. Uh, you
also need the form field for the
username. Here it is entirely if you've
accidentally deleted it. And the
instructions here should be for Discord.
Again, my apologies if you've changed
the dialogue in the Discord folder
because you were following me. It was
supposed to be changing the ones in the
Slack folder. Excellent.
Now, let's go ahead and let's do the
channel for Slack. So, I'm going to go
ahead inside of inngest channels. I will
copy Discord and I will paste it here. I
will rename it Slack. I will go inside
of that Slack. I will change this to be
Slack channel name. Slack channel. Let's
call this Slack execution. That's it.
Then let's go ahead and set up inngest
functions.ts
Slack channel
and make sure you've imported it.
Now that we have that, let's go ahead
inside of source features executions
components Slack actions.
Replace all instances of Discord with
Slack. Replace three instances of
Discord channel with Slack channel.
Change the import to Slack. There we go.
Then in the Slack folder, go inside of
node.tsx.
Change this to be Slack channel name.
Use fetch Slack realtime token, and
remove fetch Discord realtime token and
the Discord channel name import. There
we go.
Now that we have that, let's go inside
of executor for the Slack component.
So, first things first, let's quickly
modify the Discord data and remove the
username. We're not going to need it.
Then, let's replace all seven instances
of Discord channel with Slack channel.
So, just replace all of them and fix
this to use the Slack import. and then
you should just be using Slack channel
everywhere.
Let's rename this from Discord data to
Slack data. Let's change this from
Discord executor to Slack executor.
And now let's go ahead and change all
instances of Discord node text with
Slack node.
And I think that's all of it. Yes, make
sure you're not logging Discord node
anywhere.
Now, let's go ahead and let's remove
username since we are not going to need
it. This will be Slack webhook,
and let's go ahead and do this:
ky.post with data.webhookUrl, and in the
JSON let's just use a text key and send
the content as its value. So yes, this
one will accept text, but you will see
why in a moment, so I'm going to add a
comment: the key depends on the
workflow config. You should probably
add these instructions somewhere in the
dialog for the user. You're going to
see what I'm talking about. For now,
make this text, but this can actually
be anything. It will depend on how the
user sets up their webhook.
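To make the "key depends on workflow config" point concrete, here is a minimal sketch of the payload shape. `buildSlackPayload` is a hypothetical helper for illustration; in the actual executor the tutorial posts the resulting object with ky.

```typescript
// Hypothetical helper illustrating the payload shape: the JSON key must
// match the variable name configured in the Slack workflow webhook step.
type SlackWebhookPayload = Record<string, string>;

function buildSlackPayload(key: string, message: string): SlackWebhookPayload {
  // e.g. { text: "Hello world" } when the workflow variable is named "text"
  return { [key]: message };
}

// In the executor, something like: ky.post(data.webhookUrl, { json: payload })
const payload = buildSlackPayload("text", "Hello world from Slack");
```

If the user configures their Slack workflow variable as `content` instead, the key here has to be `content` too, which is exactly the tricky part discussed below.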
So I think this is okay now. Now let's
go inside of executions lib executor
registry node type. Oh, we already have
slack. Perfect. Slack executor. There we
go. So now that we have that working
let's go ahead and remove this one.
Let's go ahead and connect this My
Slack node. For the webhook URL, let's
do code with Antonio again, and hello
world Slack for the message. Click
save, save up there, and let's just see
this fail, because that's what it's
supposed to do, right? We have added an
invalid webhook URL.
Uh, looks like this is not working, but
it could be because I'm missing a
refresh. Let's just try again because
this one just failed, which is correct.
So, let's see. Yeah, it just needed a
refresh. Perfect. And it failed for the
exact same reason. It cannot post that.
So, now let's go ahead and let's test
Slack.
So, just as with Discord, I've created
a new account here, a new channel, new
everything, and I've created a brand
new channel called general. So as I
said, there are multiple ways you can
send a message to Slack using webhooks,
but the one I found actually the
simplest is, again, you'll have to
excuse me, I am not that familiar with
Slack: here in More you can find Tools,
where you create and find workflows and
apps. Inside of Workflows you can click
on New, Build Workflow,
and for an event here you should be
able to select From a webhook, which
starts from a third-party event. So, if
you ever become a big shot with this
app, you could probably collab with
Slack and then have Nodebase here.
Obviously, that's why all of these
other companies have it so much easier
to add integrations: they have
partnerships between these apps to make
it easier. But in our case, we have to
click From a webhook. And this is what
I was talking about: you have to set up
variables
here. So, for example, you can set this
to be content, and then inside of your
executor this would be content, right?
If you set the key to be text, this
would be text, right? So, that's kind
of the tricky part. So, let me just
check in our executor what it's called.
It's content, so let's go ahead and
make it content here. Content, and
leave this key, so the key depends on
the workflow config. This is what I'm
talking about. So make sure the key
here is content,
like that. And data type should be text.
Click done. And there we go. You can see
now example HTTP body. Exactly what we
are doing. And let's go ahead and click
continue.
And let me go ahead and try Add Steps.
Send a message to a channel. Search all
channels, select your channel, and in
here you can insert a variable. Oh, you
can also insert all data, or you can
just do content. Okay, I see. Yeah, you
can play around with this. So, let's
just add content. Save. There we go.
Finish up with the button up there:
Nodebase workflow, like this, and let's
click publish.
Uh and now somewhere here we should find
let's click add to channel.
Okay. Nodebase workflow. Right click. Uh
we have copy workflow link. Let me
check. Is that what I think it is?
Let's see if I go here and if I add that
here.
I'm not too sure that's what it is. As
I said, I'm not too familiar with this;
I discovered this myself. So, let me
try and find it. Copy workflow, share
workflow... maybe it is this. I'm not
even sure myself anymore. Workflows,
here it is: copy workflow link. Starts
with a webhook. Oh, here it is. You
have to go and find "starts with a
webhook," click on the edit button, and
down here find the web request URL.
Okay. And add it here.
And click save. Click save up there. And
let's see if this will now work. So
execute workflow.
Home general.
And let's see if it will work. There we
go. Hello world from Slack. Both nodes
successfully ran. Amazing, amazing job.
And in here, here we have my Slack
message content. Hello world from Slack.
So, I've just shown you two different
messaging platforms that you can use
and integrate, and you can now use this
as a guide on how you would add a
billion others: WhatsApp, Telegram,
Signal, whatever you prefer. 90% of
them will work the exact same way,
right? Just some kind of webhook.
Again, this is kind of a tricky part
with the Slack one. So, I would suggest
that you go inside of your dialog for
Slack and maybe add multiple form
descriptions, like "make sure the key
is content," so your users then know
how they are supposed to configure the
workflow, because it's not exactly
perfect: if inside of this webhook they
don't set up the content variable, it
will not work. So make sure you have
the content variable. Another way you
can do it is by exploring Slack apps.
So, that's this, and then you would
have to create your app, and then you
can send a webhook event as well. I
personally find the workflow approach
just a little bit easier, because apps
have this weird interface that I find
confusing to use. This isn't perfect
either, but it's fast to do. It wasn't
too difficult.
Uh, amazing. So, that's it. Uh, again
so sorry if I misled you with the uh
editing of the Discord folder and caused
you some problems there. I meant to edit
the Slack folder. They are so similar. I
don't even know which one I'm modifying
anymore. So, let's go ahead and check
what we were supposed to do here. We
added Discord node dialogue executor
channel and we tested it and we did the
exact same thing for the Slack node.
Amazing. Let's go ahead and push this to
GitHub. So 26 Discord and Slack nodes.
I'm going to go ahead and create a new
branch. 26 Discord Slack nodes.
Then I'm going to go ahead and commit
all of my changes here. So stage all
changes
26 discord slack nodes commit and let's
go ahead and publish this branch. Once
this branch has been published
let's go ahead and open a pull request
and let's have a CodeRabbit review for
any security issues, so you can see
what you could improve, or if we did
something critically wrong, so that we
can fix it in the next chapter.
So I've actually realized that 90% of
this code was copied from the Gemini
node, and the Gemini node is something
that we reviewed in the previous
chapter. So I'm not sure how much sense
it makes to let CodeRabbit review the
exact same code twice. We already know
the potential caveats it's going to
give us. For example, most of the time
we're not wrapping things in full
try/catch blocks, and things like that;
basically what we already saw. So what
I suggest is we read the summary and
merge this, and then we use a CodeRabbit
review for the next chapter, which will
be a completely new feature, instead of
this one, which is pretty much
identical to the previous one, just
some slight differences with naming and
the executor. So: we added a Discord
webhook
execution node to automate message
sending to Discord channels. We added
a Slack webhook execution node to
automate message sending to Slack
channels. We added the html-entities
dependency; this improves the
formatting of the message when it
arrives in Slack or Discord. And we
updated the database schema to support
new execution type nodes. Amazing. So
now let's go ahead and let's just merge
this pull request. And once it has been
merged,
we can go ahead and go back inside of
our main branch. We can click on
synchronize changes.
And let's go ahead and double check
right here with our graph 26 Discord
Slack nodes that we have merged it.
Amazing. I believe that marks the end
of this chapter. Not many things left
to do; we are finally nearing the end
of this tutorial. Thank you so much for
coming this far along with me. Amazing
job, and see you in the next chapter.
In this chapter, we're going to
implement executions history. So far
we've been able to run executions with
either success or failure states, but
the only way we've been able to look at
the result of those executions is using
the Inngest dev server. So, what
we're going to do now is implement a
page where user will be able to look at
all of their current running or
previously run executions and track
whether they failed or whether they
succeeded. Let's start by adding the
schema for that.
So let's go ahead and open
schema.prisma, go all the way down, and
create a model Execution. Let's give it
an id with default cuid(). Now let's
give it some timestamps: each execution
will have a startedAt DateTime, which
will be automatically populated. Then
we're going to have completedAt, which
will be optional, because it doesn't
have to complete; it can fail. We are
then going to have inngestEventId,
which will be unique, and this one is
not optional. So, like that. And we're
going to have output, which will be an
optional Json.
Now each workflow will have executions.
So let's make sure that we go inside of
model Workflow here and add executions
Execution[], like this. And in order to
fix the error, we need to create a
proper relation.
So let's go ahead and do workflowId and
make this a String. And then let's do
workflow Workflow @relation with fields
workflowId, references id, and onDelete
Cascade. So if the workflow gets
deleted, the execution history will get
deleted as well. Of course, you can
decide for yourself whether you want
that or not; that will be the behavior
in my app. onDelete has other options
like SetDefault and SetNull; in this
case I'm using Cascade.
Perfect. Now let's create an enum so we
can define exactly what kind of status
an execution can have: enum
ExecutionStatus can be RUNNING,
SUCCESS, or FAILED. And now let's very
simply add status ExecutionStatus with
a default of RUNNING. So besides
status, timestamps, inngestEventId, and
output, let's also make sure that we
have an option to track error, which
can be an optional String. And in order
to increase the character length, we
can add the @db.Text attribute. And
let's do the same for errorStack,
because these two can be quite lengthy.
So that's why we are adding this
attribute right here.
Great.
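Putting the dictated fields together, the Execution model can be sketched like this; the field names follow the narration above, but the exact spelling and ordering in the tutorial's repo may differ slightly.

```prisma
enum ExecutionStatus {
  RUNNING
  SUCCESS
  FAILED
}

model Execution {
  id             String          @id @default(cuid())
  workflowId     String
  workflow       Workflow        @relation(fields: [workflowId], references: [id], onDelete: Cascade)
  status         ExecutionStatus @default(RUNNING)
  startedAt      DateTime        @default(now())
  completedAt    DateTime?
  inngestEventId String          @unique
  output         Json?
  error          String?         @db.Text
  errorStack     String?         @db.Text

  @@index([workflowId])
}
```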
Now that we have this, let me see: we
have an index here automatically added
because of the foreign key, and I think
this should be enough. So now let's go
ahead and
migrate that: npx prisma migrate dev.
Let's give it a name, something like
executions schema. So after you've
given it a name like this, go ahead and
press enter, and that should
synchronize your database. You can now
close this and, as always, restart
Next.js and Inngest. If you have it
running, go ahead and refresh
localhost. So now that we have this, we
are
ready to create this feature. Let's go
inside of the features folder and we can
actually copy credentials. So copy
credentials and paste it inside of
features. Go ahead and rename it to
executions.
Uh oh, it looks like executions is
already taken. So this should then be
let's see uh well we can actually keep
it inside of executions. That's right.
No need to copy it. Let's actually use
our executions folder. That makes
perfect sense to do. So let's go ahead
and start by creating server, and then
let's copy routers.ts from credentials
and paste it inside of executions
server.
So make sure you open the routers.ts in
your executions folder, and close all
the other ones. Go ahead and rename
this to executions router. And then
let's go inside of trpc/routers/_app
and add executions: executions router.
You can import it from features
executions, server, routers and save the
file. Great. So this will be quite a
bit simpler now. Executions will not be
able to be created via the API, nor
will they be able to be removed. They
cannot be updated either, so none of
this makes sense. These are like trace
logs, right? You can only see them; you
can't do anything with them. And you
can also remove get by type. So only
two of them: get one and get many. Get
one will be a protected procedure which
will very simply use
prisma.execution.findUniqueOrThrow. The
only difference here will be that it
will not use the user ID directly like
this; it will simply filter on its
related workflow, like that. So yes,
you can now do this with Prisma.
Previously you were not able to, and it
was a bit more complicated if you
wanted to achieve this. But this is
great. This is super simple, and you
can now very easily do a permission
check on a nested relation like
workflow, which is really, really cool.
So now that we have that, let's remove
credential type and remove premium
procedure; we don't need any of that.
That's it for get one. Now let's work
on get many. Get many is quite similar,
except we don't need search, so remove
search from here and from here. You can
remove the name property entirely here,
and in the count as well. Now let's
change this to be
prisma.execution.findMany. The where
will have to be modified to look within
the workflow for the user ID as well,
the same thing we did previously. And
for order by here, let's use startedAt
instead of createdAt. And let's add
include workflow with select id true
and name true, so we have some more
information to show on the user side.
And for the count, let's also change
this to execution and, very simply,
look within the workflow to make sure
we are fetching only that user's
executions.
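The two ownership checks just described can be sketched as plain objects; the helper names here are illustrative, and in the router the shapes are what gets passed to prisma.execution.findUniqueOrThrow and findMany.

```typescript
// Sketch of the where clauses described above (shapes only, no Prisma client).
// Filtering on the related workflow's userId scopes queries to the current user.
function getOneWhere(executionId: string, userId: string) {
  // passed as { where } to prisma.execution.findUniqueOrThrow
  return { id: executionId, workflow: { userId } };
}

function getManyWhere(userId: string) {
  // passed as { where } to prisma.execution.findMany,
  // alongside orderBy: { startedAt: "desc" } and the workflow include
  return { workflow: { userId } };
}
```

Relation filters inside a unique lookup are what make the nested permission check work without a separate query.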
Great. Everything else can stay exactly
the same. That was easy, wasn't it? So
now let's go ahead and copy everything
else that we have here uh in credentials
server. So that will include params
loader uh and a prefetch and let's paste
it here. So make sure you open prefetch
and params loader
from the new executions folder
and let's also go inside of credentials
and let's copy params file and paste it
inside of executions.
So now let's go ahead and first start
from the params file. So just make sure
you are inside of executions folder
params file. Let's go ahead and change
this to be executions params. And we can
go ahead and remove search
because search will no longer be
available.
But let's go ahead and add workflow ID.
Uh actually we don't need that. We can
just use page and page size.
Then let's go inside of server in the
executions folder and do the params
loader here. So this will now be
executions. Did I rename it executions
params? I did. So, executions params.
And to fix this, let's just retype it.
And then this will be executions params
loader using the executions params.
There we go. Now let's go inside of
prefetch.ts and fix this as well. So
this will be trpc.executions.getMany to
prefetch all executions in prefetch
executions, and for a single execution,
prefetch execution will use
trpc.executions.getOne. There we go.
So we have all of the server parts ready
including the params. So now let's go
ahead and do hooks.
So I'm going to go ahead and copy use
credential params and use credentials.
I'm going to go instead of executions in
the hooks here and I'm going to paste
those two inside alongside use node
status. Uh let's start with use
credentials params. And let's go ahead
and rename it to use executions params.
If it asks to update imports, you can
select yes. And looks like the only
place it's going to update it is this
one, use credentials from our executions
folder. So this is the unsaved file and
this is where it updated it. So we will
get to that. For now, you can save that
file. Let's focus on our just renamed
use executions params. And let's go
ahead and just use executions params.
And this will be use executions
params. There we go. Now let's go ahead
and rename this to use executions.ts.
Let's go inside of use executions and
let's change this to be use executions
params.
Now this will also be a little bit
simpler. So we will have this hook to
fetch all executions using suspense:
use suspense executions, with use
executions params and
trpc.executions.getMany. We will not
have a hook to create a new one, so we
can get rid of that. Same is true for
removing one. But we will have a hook
to fetch a single execution using
suspense: use suspense execution, again
with trpc.executions.getOne. We will
not have a hook to update any
executions, so we can remove that, and
we will also not have anything to fetch
by type, so we can remove that. So we
only have two of these. Let's remove
all the unused imports here.
But let me see: we have use suspense
executions and use suspense execution.
I'm thinking whether we are going to
need any non-suspense hooks here, but I
think this should be fine for now.
So now let's go ahead and create the
page loader. So, I'm going to go ahead
inside of source app dashboard rest
credentials and copy page.tsx. I'm
going to go inside of executions here.
And, well, it might be easier to copy
the content of page.tsx and then open
the executions page and paste it in
here. There we go. So, let's go ahead
and change things up from credentials
to executions. Just make sure you are
modifying the executions page.tsx.
So require auth stays the same, but
params should be loaded using the
executions params loader. So make sure
you change that import to use
executions here and remove the
credentials params loader. And then for
prefetching, we're going to use
prefetch executions with said params.
So you can then remove prefetch
credentials and make sure you have
prefetch executions from features
executions server prefetch.
Uh we can go ahead and replace the
credentials container with just an empty
fragment because we don't have an
alternative for this just yet. Uh and
for the credentials error, we can just
go ahead and do the same here. We will
add both of these later.
And same is true for this to-do list for
executions.
So let's go ahead and remove all the
unused things. There we go.
Now that we have that, let's go ahead
and check it out. So I'm just going to
make sure it's working. So when I click
on executions right here
it should load to-do list for
executions. Great. Now let's go ahead
and work on the client side. So I'm
going to go ahead and close the app
folder and go inside of features
executions and open the components
folder. And then I'm going to go inside
of credentials components and I will
copy credentials.tsx and paste it in
here in the components. Let's rename it
to executions.tsx, like that. And then
we're going to slowly fix all of these
errors and names. So double-check you
are inside of features executions
components executions.tsx. Let's start
by changing this from hooks use
credentials to hooks use executions.
And you can go ahead and remove use
remove credential, because we don't
have it, and instead you can use use
suspense executions. Instead of use
credentials params, use executions
params, and import use executions
params.
Let's go ahead and see what we need and
what we don't need. For example, uh we
don't need this search component at all.
So, we can just remove it.
Instead of credentials list, it's going
to be executions list. It's not going to
be credentials. It's going to be
executions and it will be use suspense
executions.
So to the entity list, let's make sure
we are passing executions. Let's make
sure we are using
execution
everywhere.
And for now, let's just leave this as
is; we're going to replace these two
later. When it comes to the executions
header, it's going to be quite a bit
simpler than this. The executions
header is not going to have any props,
because it doesn't need them. It's
simply going to have a title,
executions, like that, and a
description, view your workflow
execution history, and it will not have
either of those two props. So it looks
like this is now throwing an error
because it expects to have one of them.
So yes, it's expecting something here.
Can I maybe add... huh? Okay, I don't
know. I will look into it. But for now,
yes, just ignore this error right here.
So, now let's
go ahead and see what's up with
pagination here. So, let's rename it to
executions pagination. This will be use
suspense executions, executions, use
executions params, and then replace all
three of these instances to use the
executions constant. Perfect. Let's
rename this to executions container.
Let's use executions header, remove
executions search, which doesn't exist
and we don't need, and use executions
pagination here.
There we go. Perfect. So now we have
this.
Let's go ahead and change these three
to be executions: loading executions,
and error loading executions. In handle
empty (we don't need handle create at
all): you haven't created any
executions; get started by running your
first workflow. I'm not sure if
"created" is the correct term to use
here, but for now let's use it like
that. Instead of credential item, it
will be execution item, data will be
execution, and you should be able to
import this from generated Prisma:
import type Execution. And instead of
credential type, let's have execution
status here, because we're going to
need that. You can remove entity search
from the import, you can remove use
router and use entity search, but keep
execution status even though we don't
use it yet. So let's go back to the
execution item here. First of all,
remove this remove credential; we
cannot remove anything from here.
And so, what should the icon be here?
Let's remove it for now and just focus
on rendering. So href should lead to
executions, title should be
data.status, and for the subtitle,
let's just make it an empty string for
now. For the image here, let's just do
this. Let's remove on remove and is
removing. Okay.
Now, let's go ahead and create the
subtitle and the image. So, in order to
create the subtitle, we need to add
duration: how long did it take to run
this? So let's get the completed at,
and if it exists, let's call Math.round
of new Date(data.completedAt).getTime()
minus new
Date(data.startedAt).getTime(), and
then divide that by a thousand;
otherwise just set it to null. And then
const subtitle: let's make it a
fragment with data.workflow.
Okay, so this will be Execution and
workflow with id, which is a string,
and name, which is a string. So why am
I adding this? I basically extended the
type of execution to include two
properties from its related workflow.
How do I know I can do that? Well, if
you go inside of routers in the
executions server and look at get many,
you will see that that's exactly what
we do: we include workflow with id and
name. So that's what I'm doing here, so
that type safety knows that I can
access data.workflow.name.
Let's go ahead and add a bullet point
here: started, then a space, then
format distance to now, which we have
from date-fns, with data.startedAt and
add suffix true. If duration is not
null, in that case let's open a
fragment, render a bullet point again,
and then: took, duration, seconds, like
that. So now we have the subtitle,
which will give the user some useful
information about how long it took to
complete this execution, if it
completed.
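The duration math above can be sketched as a small standalone helper (the function name is illustrative; in the component it's inline):

```typescript
// Duration in whole seconds, or null while the execution is still running
// (completedAt is optional in the schema, so it may be absent).
function getDurationSeconds(
  startedAt: Date | string,
  completedAt: Date | string | null,
): number | null {
  if (!completedAt) return null;
  return Math.round(
    (new Date(completedAt).getTime() - new Date(startedAt).getTime()) / 1000,
  );
}
```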
But for the image, we're going to need
to create a little map here, similar to
this one. So instead of this, it's
going to be a function: const get
status icon, where status is not a
string; it should be of type execution
status. Let's switch based on the
status. In case we get execution status
success, let's return a CheckCircle2Icon
from Lucide React with size five and
text green 600, like that. And then
let's do the same for the other cases.
In case it fails, let's use an
XCircleIcon from Lucide React, and in
case it's still running, let's use a
Loader2Icon; just make sure that
besides the color and the size, you
also give it animate spin. And in case
we cannot find the status for whatever
reason, let's give it a default of a
ClockIcon from Lucide React. So just
make sure you have added all of these
icons.
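Here is a JSX-free sketch of that switch, mapping each status to an icon name and classes instead of rendering the element. The green-600 and animate-spin classes come from the narration; the red and blue colors for the failed and running cases are assumptions.

```typescript
// Sketch of the getStatusIcon mapping (icon names and Tailwind classes only;
// the real component returns the corresponding Lucide React elements).
type ExecutionStatus = "RUNNING" | "SUCCESS" | "FAILED";

function getStatusIconInfo(status: ExecutionStatus) {
  switch (status) {
    case "SUCCESS":
      return { icon: "CheckCircle2Icon", className: "size-5 text-green-600" };
    case "FAILED":
      // color is an assumption
      return { icon: "XCircleIcon", className: "size-5 text-red-600" };
    case "RUNNING":
      // animate-spin makes the loader rotate; color is an assumption
      return { icon: "Loader2Icon", className: "size-5 text-blue-600 animate-spin" };
    default:
      // fallback when the status is unrecognized
      return { icon: "ClockIcon", className: "size-5 text-muted-foreground" };
  }
}
```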
You can now remove Image from next
image here. So that's it for the
execution item. Oh, we actually have to
render that here: so let's do get
status icon with data.status. There we
go, and now that will render one of
these. Now we can scroll all the way up
here. Inside of executions list, change
this to be execution item, and
executions empty. There we go. Great.
So the only thing left to fix is the
entity header, because right now it
expects either on new or new button
href, but not the option to have
neither of them. And I think the fix is
actually quite easy: inside of entity
header you just have to add a question
mark here and save it. And that's it.
Now it works as expected. That was
actually a bug.
Great. So, we now have all of those
components, and we can now go back
inside of the app folder, dashboard,
rest, executions, page.tsx, and add all
of them. So, let's add executions
container here to encapsulate the whole
thing, from features executions
components executions. Then let's add
executions error, then executions
loading, and finally let's add the
executions list. And once you do that,
you should see no items, obviously,
because even though we have run some
executions, we never kept track of
them. So in order to see this happen,
we have to revisit our functions.ts.
So I'm going to go inside of the source
inngest folder, functions.ts. So how do
we make sure that every time this
execute workflow fails, we create a new
execution record? Well, we do it by
first making sure that every single
time it runs, we start by creating an
execution.
So because of that, we are also going
to need to always have an Inngest event
ID. But there is one problem with this:
I don't like that event ID can be
undefined. I discussed this with the
Inngest team, and they did tell me that
this will always be created for every
workflow. But if you want, here is what
I did, which kind of gave me peace of
mind. I went inside of
inngest/utils.ts, and since I'm using
send workflow execution everywhere I
need to execute this, I can very easily
make sure that every single one of my
Inngest jobs has an ID by simply using
cuid2. I'm just not sure if we've used
this before; I think we did: createId
from @paralleldrive/cuid2. And then,
very simply, you can see it accepts an
id property, so we can just do this.
And now we can say that 100% of our
Inngest executions will have the
Inngest event ID, using event.id.
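A sketch of the idea, assuming a send-workflow-execution-style helper: the tutorial uses createId from @paralleldrive/cuid2, but randomUUID stands in here so the snippet has no external dependency, and the event name is an assumption.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative event shape: by always setting id ourselves, event.id can
// never be undefined downstream in the Inngest function.
type WorkflowEvent = {
  name: string;
  id: string;
  data: { workflowId: string };
};

function buildWorkflowExecutionEvent(workflowId: string): WorkflowEvent {
  return {
    name: "workflows/execute.workflow", // assumed event name
    id: randomUUID(), // the tutorial uses createId() from cuid2 here
    data: { workflowId },
  };
}
```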
So let's also check: if there is no
Inngest event ID, let's say event ID or
workflow ID is missing. If you want to
be more specific, you could obviously
separate those two errors. And then,
before we sort our nodes, let's just do
a super simple create execution step:
an asynchronous function that just
returns prisma.execution.create with
data containing the workflow ID and the
Inngest event ID. That's it. We don't
have to pass the status, because we
have a default status of running.
Again, this is my kind of architectural
choice; I'm not sure if you'll like it.
If you want to, you can make it
optional and not have a default, but
since I am always going to create the
execution when it's literally starting
to run, I'm always going to use running
as the default. So in my case it makes
sense; in yours maybe it won't. Just
think about how you would like this to
behave; I like it this way. So now this
will happen for every single one of our
executions. So what do we do if it's
successful? Well, that's simple. We
just have to go all the way after this
for loop and await step.run, update
execution, an async function that
returns prisma.execution.update where
the Inngest event ID is matching, with
data: status ExecutionStatus.SUCCESS,
which you have to import from generated
Prisma, completed at new Date(), and
output will be the context. Basically,
everything that was created after we
ran all the nodes, after all the
variables have been added to the
context, we're going to store here. And
I think we can also make it a bit more
specific by also adding the workflow
ID.
We don't have to, I think, but we can
do it. So this is for the success case.
But what if it fails? How do we handle
that? Well, you can choose how granular
you want to be with this. You could go
into every single individual executor
here and create failures from there.
But there is a way to catch a general
failure of this function, and you do it
here: on failure, like this. It's an
asynchronous function, and you have
access to event and step. Let's do
return prisma.execution.update where
the Inngest event ID is event data,
and yes, now you have to be a little
more specific: you have to access event
again and then id. And in here, let's
add the status and make it failed, and
we have to add some useful error
information. So let's populate error
using event data error message, and
error stack using event data
error.stack. There we go.
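The success and failure update payloads described above can be sketched as plain objects; these are shapes only, and in functions.ts they are what gets passed to prisma.execution.update inside step.run and onFailure.

```typescript
// Sketch of the two update payloads (shapes only, no Prisma client).
// ExecutionStatus mirrors the Prisma enum; `context` is the accumulated
// output passed between nodes during the workflow run.
type ExecutionStatus = "RUNNING" | "SUCCESS" | "FAILED";

function successUpdate(inngestEventId: string, context: Record<string, unknown>) {
  return {
    where: { inngestEventId },
    data: {
      status: "SUCCESS" as ExecutionStatus,
      completedAt: new Date(),
      output: context, // everything the nodes added to the context
    },
  };
}

function failureUpdate(inngestEventId: string, error: { message: string; stack?: string }) {
  return {
    where: { inngestEventId },
    data: {
      status: "FAILED" as ExecutionStatus,
      error: error.message,
      errorStack: error.stack ?? null,
    },
  };
}
```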
So, we now have that ready. Let's close
all of these, go inside of workflows
here, and try running some workflows.
They can either fail or they can be
successful, whatever you want. So I'm
going to execute this one, and I'm
going to go inside of my executions
right here and refresh. And there we
go: I have one which is running, and
then it is a success. Amazing. So let's
go ahead and just fix this. I'm going
to go inside of my executions.tsx, and
I'm going to go inside of my execution
item.
So, what I don't like is that the title
is just a large yelling status. There
is a way to format that very easily: we
can create a function. Let's do it
here: const format status, where status
is of type execution status, and let's
return status.charAt(0) plus
status.slice(1).toLowerCase(). So
basically we're just going to
capitalize it. Maybe you can do that
with CSS; I'm not sure. I'm just used
to doing this. So let's just wrap this.
And this should give us... there we go.
That looks better; that gives us a
readable label. So now let me go ahead
and fail this on purpose. I'm going to
open this and make sure I have a
completely invalid URL here. Save, save
up there, and then execute that. So now
I should have one execution which is a
success, and one execution which is a
failure. There we go. Both of these are
now working, and I'm fairly certain
that both of these runs should now also
show, yes, you can now see a new step:
create execution.
Uh and let's see finally is there. Okay
it looks like you can't see u uh those
steps where it created the where it
updated the execution to the failed
status. That's what I was trying to say.
But you can see this step which is
update execution when it succeeds. So
you should have the that extra step now.
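By the way, the status formatter from a moment ago, written out as a plain function (assuming the Prisma enum values are uppercase strings like RUNNING, SUCCESS, and FAILED):

```typescript
// "SUCCESS" -> "Success": keep the first character, lowercase the rest.
type ExecutionStatus = "RUNNING" | "SUCCESS" | "FAILED";

export const formatStatus = (status: ExecutionStatus): string =>
  status.charAt(0) + status.slice(1).toLowerCase();
```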
Great. So what's not working yet is the
ability to look uh at the deeper view of
the execution. So let's go ahead and do
that and finish our executions.
So let's create execution.tsx
inside of features executions components
right here. We actually don't have to
copy it from credentials; let's just
create a plain new one, execution.tsx.
So a single one. Then let's go
inside of uh executions right here. And
let's just copy
formatStatus and getStatusIcon. In
order to do that, we have to add all of
these icons from lucide-react. We have
to add the status type. And I think
that's it.
So now what else are we going to need
here? So uh let's finish the imports
while we are here.
We're going to need formatDistanceToNow
from date-fns.
We're going to need Link to redirect to
the actual workflow.
We're going to need useParams.
We're going to need useState from React.
We are going to need Button from
components/ui/button. We're going to
need the entire Card: CardContent,
CardDescription, CardHeader, and
CardTitle. Then we're going to need the
entire Collapsible, which I believe is
the first time we're using this
component; it's from shadcn/ui. And we
are going to need useSuspenseExecution
from the use-executions hooks file in
features/executions.
So just alongside useSuspenseExecutions,
we are using this one to
fetch a single execution.
Great. So now that we have all the
imports we need, we are ready to export
const Execution. Let me just check in
credentials whether I had a better name
for that component. I didn't; it's just
Credential. Okay, then this will be just
Execution.
Let's go ahead and use the useParams
hook, and extract executionId as string.
And in fact, I remember this is the same
mistake from before: we can just accept
executionId here as a prop, which is
simpler if you ask me. Then you don't
need params nor the extracted
executionId; we're just going to pass
this from the server component as a
prop. So you can go ahead and remove
useParams. Perfect.
Then let's go ahead and fetch this. So
use suspense execution using the
execution ID and alias data to
execution.
Then let's go ahead and create a simple
state total show stack trace and set
show stack trace. Then let's go ahead
and go inside of executions.tsx and
just copy the duration logic.
So, to compute the duration: since we
aliased data to execution here, check
execution.completedAt, and if it exists,
do Math.round on completedAt minus
startedAt, or fall back to null.
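Isolated as a helper, the duration logic looks roughly like this (field names mirror our Prisma model; the divide-by-1000 converts milliseconds to the seconds value the UI shows):

```typescript
// Seconds between startedAt and completedAt, or null while still running.
export function getDurationSeconds(execution: {
  startedAt: Date;
  completedAt: Date | null;
}): number | null {
  return execution.completedAt
    ? Math.round(
        (execution.completedAt.getTime() - execution.startedAt.getTime()) / 1000,
      )
    : null;
}
```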
Now let's go ahead and let's compose
this. So we're using card. Let's give it
class name
shadow none.
Then let's go ahead and add card header.
Let's add a div here with a class name
flex items center gap 3. Inside let's
render get status icon execution
dot status.
Then let's go ahead uh inside of this
and create a new div. And this div will
have a card title
which will format status
execution status and a card description
execution for
execution.workflow.name. But now we have
a type problem here.
So let's go ahead and see h how can we
fix this? Well, this actually isn't just
a type problem. This is an actual
problem. Let's go inside of use suspense
execution and let's go inside of TRPC
executions get one. And besides this
where
let's go ahead and also add include
here.
So I'm going to add include: workflow,
id: true -- not include, my apologies,
select -- and name. Let me just check
what I'm doing wrong here. Ah, I see:
the select needs to be inside of
include, and then this goes inside.
There we go.
So include workflow but only select ID
and name. So the exact same thing I
could have just looked down here. The
exact same thing we're doing in get
many. I forgot to do in get one. So that
should automatically fix any type errors
here. We can now access this in card
description. Perfect. So that's it for
card description.
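So the getOne arguments end up shaped roughly like this (a sketch of just the Prisma args; the real procedure also scopes the query to the signed-in user):

```typescript
// Fetch one execution plus only the id and name of its workflow --
// the two fields the detail card actually renders.
export function getOneArgs(executionId: string) {
  return {
    where: { id: executionId },
    include: {
      workflow: {
        select: { id: true, name: true },
      },
    },
  };
}
```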
Now let's go ahead outside of the card
header. Let's open card content.
card content will have a class name of
space Y4.
And in here, let's go ahead and let's
create a div with a class name grid grid
columns to gap 4.
Let's open a div, a paragraph workflow.
Let's go ahead and write a class name
text small font medium text muted
foreground.
Then let's add a link.
Inside of the link, we're going to refer
to execution.workflow.name
and href will go to workflows
execution.workflow
ID. There we go. Make sure you're using
backticks and don't misspell workflows.
Let's go ahead and add a few more
attributes to the link property so it
will be prefetch so it's faster. And
class name text small hover underline
and text-primary. And when I say let's
use prefetch so it will be faster, what
I mean is that Next.js is going to
prefetch the route automatically, even
if users never click on the link. So it
is a compromise; it's not just a magical
speed-up. That's why it's opt-in. For
this exact scenario it's okay, because
it's just this single link. But you
should be careful when adding prefetch
to, say, a generated list of a billion
results, because that will definitely
populate your network request tab. So be
careful about using prefetch.
All right. So now we have a link and I
think at this point it might be easier
if we render this so we can actually see
what we are doing.
So, I'm going to go ahead inside of my
app folder, dashboard, rest, executions,
[executionId], page.tsx. And let's go
ahead and do the following: in here I'm
going to do a div with class name p-4,
px-10 on medium devices, py-6, h-full.
And let's actually just copy what we
have inside of credentials for
[credentialId] here, since it is
identical. So just copy it and paste it.
There we go.
And now in here
let's add hydrate client from the RPC
server. Let's go ahead and add error
boundary from react error boundary
suspense
from react like this. And let's go ahead
and add execution view.
And executionId will be
params.executionId. Oh, we have to await
params. Oh, we have it right here.
Whoops. So, executionId.
Great. Let's add fallback here.
I think we can just reuse
ExecutionsError for the error boundary
and ExecutionsLoading for the suspense
fallback. So I'm borrowing the ones
created inside of executions for the
list, because they're the same; I'm not
creative enough to create different
loading and error states for the single
execution view. So yes, the Execution
component is imported from the execution
file, and the loading and error
components from the executions file.
Great. So an error is happening here
because inside of execution.tsx we
forgot to mark this as a client
component. So yes, your execution.tsx
should have "use client" at the top. And
there we go. You can now see how this
looks. Uh but we are not prefetching
this. So let's just make sure we are
doing that. So inside of page execution
ID here
uh we can just prefetch. Yes.
prefetch execution execution ID. So make
sure you imported prefetch execution
from features execution server prefetch.
There we go. So now it should leverage
the server component and the client
component at the same time. Now we can
focus exclusively on execution view and
actually see what we are developing. So
in here we have the status icon, the
status, the name of the workflow, uh
with the link to go to that workflow. If
you click on it, it should redirect you
to that workflow. And now we're going to
add stack trace down here uh right after
we add the duration or when it started.
So let's go ahead uh still inside of
card content here after we end the link
and after we end this div
open a new div add a paragraph here
status and another paragraph format
status execution status.
There we go. Let's go ahead and give
this paragraph a class name. text small
font medium
text muted foreground and let's give
this one a class name of text small.
Now let's go ahead and duplicate this.
This one will be started
and this will be formatDistanceToNow
using execution.startedAt, with
addSuffix: true.
So if you take a look, you will see the
status here, when it started, the
workflow, right? Just a grid of
information for this execution.
Let's duplicate this again, but this one
will be conditional. So if execution
completed at exists
only then go ahead and render this
otherwise
render null.
So this will be Completed, and this will
be execution.completedAt. For example,
this one will not have it visible, but
if I go inside of my successful
execution, it should be visible right
here: completed 17 minutes ago. And you
can also see how it says that it took 6
seconds to complete, so that's a cool
thing to look at, in my opinion.
Now let's go ahead and copy this again.
This one will also be conditional: let's
check if duration is not equal to null,
and in that case render the duration and
append "s" as in seconds, and change the
label to Duration. So this will still be
visible, simply because we selected the
success one, right? It lasted for 6
seconds. But in the failed one, you are
not able to see that information.
And now let's go ahead and copy this,
and do: if execution.inngestEventId,
like this, then simply render
execution.inngestEventId. So this is
some debugging information if you need
it. Here it is: Event ID. But I think
the event ID will always exist, so we
can just safely render it. Yeah,
inngestEventId is not optional. And now
let's check if we have execution.error.
In that case, let's open up a div here
with a class name. Margin top of six
padding of four, background red 50
dark. Actually, no need for this. Let's
just do rounded medium space Y three.
Then open a new div inside. Then open a
paragraph, which will uh render the text
error. Let's give this paragraph a class
name of text small font medium text red
900 margin bottom of two.
Below that another paragraph with
execution error rendered inside and then
a class name text small text red 800 and
font mono like that. Great. But now
we're going to go ahead and make it a
little bit uh more fun. Let's also give
this Okay, it already has rounded MD.
Great. So outside of this div, but still
inside of the whole error container, I'm
going to check if we have execution
dot error stack.
If I do, I'm going to render a
collapsible
and I'm going to use open show stack
trace on open change set show stack
trace. I'm going to add collapsible
trigger here and I'm going to render a
button inside.
And if show stack trace is active
I'm going to render
hide stack trace
otherwise show stack trace.
I'm going to give this button a variant
of ghost.
Size of small
class name text red 900
hover bg red 100.
Great. Let's give this collapsible
trigger as child property.
Outside of the collapsible trigger, add
a collapsible content. And inside
let's add a pre-tag
and let's render execution
dot error stack. Let's go ahead and give
it a class name. Text extra small font
mono text red 800
overflow auto
margin top of two padding of two bg red
100 and rounded.
There we go. So let's go ahead and check
this out. If I click this, it will show
me the error. One thing I don't like is
that this is not taking uh enough space
in my opinion. So, let me see. Maybe I
put it in an invalid container. So, just
a second here. Uh card we're doing
status. So, this is the grid thing.
Perhaps this should be
outside of this. Let me just check if
I'm correct. If I maybe end this div
here and then go to the end and remove
one div.
Yes, I think that's what I wanted to do
basically. Let me revert this.
Go ahead and find this div
which starts the grid. Right now, this
div ends all the way here, right before
the card content. So remove that div
and instead close it just before you
start doing conditional execution error.
There we go.
And this is just a stale TypeScript
server error.
There we go. This looks better now. And
you can see more details inside. But now
let's go ahead and just do the same
thing for
output. So if we have execution output
let's add a div with a class name margin
top of six padding four background muted
rounded medium
paragraph
with the text output.
Let's go ahead and give this a class
name
text small font medium margin bottom of
two a pre-tag
JSON stringify execution
output, and then null and 2, which are
the parameters that make this JSON more
readable, plus class name text-xs, font
mono, overflow auto. There we go.
So, we can only test this in a
successful node. So, I'm going to go
back here. Success.
And there we go. Output. My Slack
message content. Hello world. Slack.
Amazing. You can now see the history of
your executions.
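As an aside, the null and 2 we passed to JSON.stringify are its replacer and indentation parameters; the two-space indent is what turns the output blob into something readable inside the pre tag:

```typescript
// Pretty-print an output object the way the <pre> tag renders it:
// no replacer (null), two spaces of indentation.
export const prettyOutput = (output: unknown): string =>
  JSON.stringify(output, null, 2);
```

For example, prettyOutput({ message: "Hello world" }) spreads the object across indented lines instead of one long string.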
I believe that is it. That's all we have
to do here. one thing that I like to
check but this time I'm not sure I will
be able to check simply because there's
so many things inside of this executions
folder. Uh I think I made a like a
mistake with adding these nodes in here.
I think I should have a separate uh
feature called nodes and then just have
all of them inside and also keep their
channels with them because this is kind
of neither here nor there. And what
I'm referring to as executions is very
inconsistent, right? Because I also have
triggers for some reason, but they are
technically just nodes, right? So it
might be worth giving yourself a
challenge: I would improve this
structure at the end of this whole
project by creating a new feature called
nodes, and
I would keep both triggers and what I
call executions in that place and then I
would no longer refer to them as
executions. Executions would just be
what we just defined in the schema
right? The result, success or failure
right? And everything else would be
nodes. And then some nodes will be used
as triggers. Yes. And other nodes will
be used as executors. So yes, the words
are kind of confusing. The terminology
is not that simple. But yeah, I think
most of you feel like something is off
here by having these nodes in here. And
also inside of the executions folder, I
have this lib where there's the executor
registry. Yeah, there could definitely
be a better place for this. Uh it's not
you, it's me. I made an invalid
architectural decision here. You can
give yourself a task to refactor that. I
would highly suggest it to get even more
familiar with the code, but do it at the
end of the tutorial so you don't run
into any bugs. Excellent. So, uh let's
go ahead and check if we maybe forgot
something from our task here.
We added the schema, the router, the
hooks, page loaders, client entity
components, pagination, loading, error,
empty, and we added execution records in
inest if they fail, if they succeed, and
when they start. So, let's push this to
GitHub. 27 executions history. And then
we're going to see what Code Rabbit has
to say. So, new branch, 27 executions
history.
I'm going to go ahead and go inside of
my source control. I'm going to stage
all of my changes.
27 execution
history commit. Let's go ahead and let's
publish branch. Once the branch has been
published,
we can go ahead create a new pull
request here
and review it using code rabbit.
And here we have the summary by code
rabbit. Release notes. New execution
dashboard with workflow run history with
pagination. Track execution status with
visual indicators running successful or
failed. View detailed execution
information including timing, duration
and output, access error messages and
stack traces for debugging failed
executions.
And let's go ahead and take a look at
the diagram. Uh so what is up here is
essentially just the prefetching and how
it works. We've already seen that a
couple of times. So this is the
interesting one. When we trigger a
workflow using the send workflow
execution util, we go ahead and
immediately create an execution with a
default status of running. After we
process all the workflow nodes
successfully, we update the execution
with a status of success completed at
and we pass along the output which was
transformed through all the nodes. But
in case the workflow fails, we update
execution to failed with error and error
stack. Let's take a look at the comments
here. In schema.prisma, it is
recommending adding a composite index on
workflowId and startedAt with descending
sort. That's a good idea to add,
actually, simply because we are ordering
by startedAt inside of the getMany
query. So yes, it could definitely
improve performance if this database
table grows large.
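In Prisma schema syntax, the suggested index would look something like this (the field names here are assumptions based on our model):

```prisma
model Execution {
  // ...existing fields...

  // Composite index matching the getMany query:
  // filter by workflowId, order by startedAt descending.
  @@index([workflowId, startedAt(sort: Desc)])
}
```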
This next one is actually not correct:
we do not need to await
prefetchExecutions. There is nothing
that this returns; it is void, so there
is no need to await it. Prefetching is a
relatively new concept to LLMs, so a lot
of them get this wrong. But no, you do
not need to await prefetching. In here,
I think we already had this comment
once, the first time we implemented it:
basically an improvement
of our zod rules for pageSize and page.
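For reference, the rule Code Rabbit wants amounts to bounding both inputs, e.g. something like z.coerce.number().int().min(1) for page and a capped maximum for pageSize. As plain logic, the effective clamp is roughly this (the bounds are illustrative, not values from the project):

```typescript
// Clamp pagination inputs to safe bounds: a 1-based page number and a
// capped page size. The cap of 100 here is an illustrative choice.
export function clampPagination(page: number, pageSize: number) {
  return {
    page: Math.max(1, Math.floor(page)),
    pageSize: Math.min(100, Math.max(1, Math.floor(pageSize))),
  };
}
```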
Other than that, we are golden. Let's go
ahead and merge this pull request. Very
good suggestion to add the index to
speed up those queries later on. We can
now go back to our main branch. Go ahead
and click synchronize changes. And once
that is complete, as always, I like to
double check by clicking on the graph.
There we go. 27. Amazing. Let's go ahead
and mark this as completed. Amazing.
Amazing job and see you in the next
chapter.
In the next few chapters, we're going to
go over what's left to do in our project
before we can deploy. And the way we can
do that is by easily searching for the
word to-do. And everywhere where we have
a to-do is probably something we should
take a look before deploying. And one
that's quite obvious is the credential
value which is currently stored as a
plain text both in the create procedure
and in the update procedure right here.
I still stand by with what I said when
we developed this. The best solution to
encrypt this would be by using a
third-party service such as AWS secrets
manager.
But there is a thing we can do which
isn't a third-party service. It's way
simpler than AWS Secrets Manager and it
is marginally better than just storing
the value as plain text in your
database. That being said, it's also not
perfect. So that's what I want to do in
this chapter. I want to not store plain
text strings when it comes to users
credentials. And then we're going to go
ahead and look for other to-dos that we
have. And no, I did not forget about
Google and GitHub signin. That's also in
our to-dos. So for this chapter, I want
to focus on encrypting our credentials.
The way we're going to do that is by
using cryptr, an npm package. So let's
do npm install cryptr.
Let's go ahead and open package JSON.
And you can see the version 6.4.0.
You don't have to use the same version.
I just want to make sure you are aware
of my versions.
Let's go ahead inside of environment
file.
Let's create encryption section here.
And let's add encryption key.
Now in here you would use something to
generate this key. You can use one
password last pass or a million services
on Google when you search for uh
encryption key generator for
development. You can just put my secure
key in production. Please do not use
that. Please put something secure here
because if someone else can guess this
key, they can easily decrypt all of the
values in your database.
So once you have encryption key ready
let's go ahead and do the following.
We're going to go inside of source lib
and in here we're going to create
encryption.ts
like this. Inside of the encryption.ts
file, let's go ahead and add the
following code: import Cryptr from
cryptr. const cryptr will be new Cryptr
with process.env and then the
encryption key that you've
added here. I always recommend that you
copy and paste from your environment
variables so you don't accidentally
misspell any words.
Then let's go ahead and export const
encrypt by using a function which
accepts a string and simply returns the
result of cryptr.encrypt, passing in the
string. And the exact same pattern works
for decrypt. Let me just go ahead and
fix this: decrypt
and cryptr.decrypt. There we go.
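Under the hood, cryptr is a thin wrapper around Node's built-in AES-256-GCM. If you're curious what that looks like, here is a rough equivalent using only the crypto module (a sketch: cryptr additionally derives the key with pbkdf2 and uses its own output layout, so the two are not interchangeable):

```typescript
import * as crypto from "node:crypto";

// Derive a 32-byte key from the secret. In the app this would come from
// process.env.ENCRYPTION_KEY; the literal here is a dev placeholder only.
const KEY = crypto.createHash("sha256").update("my-secure-key").digest();

export function encrypt(value: string): string {
  const iv = crypto.randomBytes(12); // fresh IV per message
  const cipher = crypto.createCipheriv("aes-256-gcm", KEY, iv);
  const encrypted = Buffer.concat([cipher.update(value, "utf8"), cipher.final()]);
  // Store iv + auth tag + ciphertext together as one hex string.
  return Buffer.concat([iv, cipher.getAuthTag(), encrypted]).toString("hex");
}

export function decrypt(payload: string): string {
  const buf = Buffer.from(payload, "hex");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = crypto.createDecipheriv("aes-256-gcm", KEY, iv);
  decipher.setAuthTag(tag); // tampered data fails at final()
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```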
Now that we have this ready, let's go
ahead and find our credentials routers.
So inside of our credentials router
here, let's find create which is a
premium procedure, and let's remove the
to-do and wrap the value with encrypt
from lib/encryption: encrypt(value).
There we go. So just make sure you've
added encrypt here. And the same thing
is true in update: remove the to-do
there and wrap it, encrypt(value). There
we go. So that's it for
storing into our database. What we have
to do now is we have to revisit all the
places where we use this value. So that
is inside of execution components. So
let's open the Anthropic executor. Let's go
ahead and open Gemini executor and let's
go ahead and open open AI executor.
So right now all of these would fail,
because when we store a new credential --
actually, let me show you how it looks.
So here is the cryptr npm package: if
you encrypt the word bacon, this huge
hash is what will be stored in our
database, and later when we decrypt it,
it's going to be back to bacon. So
if we tried using our API keys right
now, they would all fail because they
are encrypted in our database. So we now
have to decrypt them. Let's go ahead and
once we find the credential. It doesn't
matter which executor you start in; you
have to do all three of them. So let me
start with the Gemini one: gemini folder,
executor.ts.
And here it is. Once we add credential
value, let's simply go ahead and do
decrypt
like this
and import decrypt from lib encryption.
That's it for Gemini. Then let's go
inside of open AI folder executor
and let's do the same thing here.
Decrypt from lib encryption.
Make sure you've imported it.
and Anthropic is left. So again, decrypt
like that. Perfect.
And I think that is all we need really.
So if I search for to-do now -- there we
go -- I have this one, which is inside
of my HTTP request dialog.tsx, inside of
source features, executions, components,
http-request, dialog. We actually don't
need to do this one. I used to think it
would be a good idea to validate whether
the user's JSON is correct, but that
would defeat the purpose of our
templating capability. It's the same
problem we had with endpoints being
validated as z.url(), which would break
as soon as you want to add a variable.
So because of that, I'm going to remove
this to-do; we're not going to solve it,
because we don't need it. So the
only one that's actually left here
besides GitHub and Google login is
retries which is still very useful for
us in development at the moment. So
let's leave it like this. All right. So
what to do now when this is uh done?
Well, first things first uh your
credentials will no longer work because
these are now completely broken. So what
I suggest is you go ahead and delete
these
And let's go ahead and create a new
credential this time. And let's go ahead
and do npx prisma studio.
So this should fire up the studio. And
let's go ahead and find the credential
here. So no credentials. Great. And now
I'm going to do encrypted
Gemini.
Gemini like this. And since I still have
it in my environment here, I'm just
going to copy it. This is why I told you
to leave it here because you will still
need it in the tutorial. So, let me just
add it here. Okay. And I will click
create.
Looks like it was created successfully.
And now inside of Prisma Studio, I'm
going to refresh here. And let's see if
this is working or not. So what I'm
expecting here is to see a completely
different value. Let me just do a
refresh one more time. It's fetching
rows in this table. Basically my Google
generative AI API key starts with AI. I
think that's an accident. Uh but you can
see that my value is something
completely different. So if someone were
to break into my database, they would
not be able to see users API keys. they
would just be able to see this. Again
this is not the most perfect solution in
the world. There's a lot of things
missing like key rotation. And I would
recommend solving this by adding a third
party library like AWS Secrets Manager
but this is insanely better than just
storing plain text inside of here
right? Uh just make sure you're using a
good secret and that you never leak it
because this is a type of encryption
that can obviously be decrypted, which
makes it dangerous if someone gets a
hold of your encryption key.
Great. So you can see the value is
completely different than what I've
entered here. And now uh we have to test
if that works. So I'm going to go inside
of my workflows. I'm going to create a
completely new one. And I'm just going
to add a manual executor here.
And I'm going to connect it with a
Gemini like this. Let me open this my
Gemini. Select a credential. Encrypted
Gemini. Test. Hello world. Let's click
save. Let's click save up there. And
let's click execute workflow.
And let's see if this will work. That
succeeds. And let's see. This seems to
succeed as well. Let's go inside of
executions right here.
Uh less than a minute ago. So that
should be the one. There we go. Hello
world. How can I help you today?
Amazing. So it is successfully working.
We have encrypted our values. Amazing
job. So uh since this was super simple
change, uh we don't have to really
review this code. Instead we can go
ahead and focus on the next chapter
where we are going to be adding GitHub
and Google signin which is also going to
be super simple. So just for you know
keeping track of everything I'm going to
create a new branch 28 encrypting
credentials.
Then I'm going to go ahead and add all
of my changes here. Let me just expand
this so I can expand this. I'm staging
all of my changes. 28 encrypting
credentials.
I'm going to commit and I'm going to
publish the branch.
And as I said, since this one is
particularly simple, I'm just going to
open a pull request. And I am
immediately going to
merge it. So, let's go ahead and confirm
merge.
After I've done that, I'm going to go
ahead and go back inside of my main
branch. I'm going to click down here
synchronize changes, and I'm going to
click okay. I'm going to open my graph
and just confirm that I have a new
merged pull request, 28 encrypting
credentials. Great. I
believe that marks the end of this
chapter and we just improved our app by
a lot, right? So, it went from being
barely uh recommended by anyone
security-wise to at least not storing uh
uh plain text values. But again, please
explore AWS Secrets Manager or look at
my previous project if you are
interested in uh key rotation and how
this would look with a very good
security system in place. But this is
very very good. And again, API keys are
not exactly passwords. They can always
be rotated. But you should provide your
users who are trusting you with their
API keys with maximum security that you
can afford. Uh, excellent. And we pushed
to GitHub. Amazing, amazing job. And see
you in the next chapter.
In this chapter, we're going to add
GitHub and Google auth to our project.
So, let's go ahead and make sure we are
logged out. This way, we can visit the
login screen. There we go. So we have
buttons continue with GitHub and Google
but right now they're not doing
anything. So let's go ahead and open
the Better Auth documentation, and under
authentication here you can find GitHub.
So let's go ahead and first grab our
GitHub credentials. So I recommend
looking at the documentation because
it's really good. It's up to date and it
will help you navigate through all of
the links you have to visit. So go ahead
and click GitHub developer portal.
So in my case, I have a bunch of OAuth
apps here because I create these tokens
all the time. But for you, it might be
completely empty, depending on how often
you use this. So to use GitHub sign-in,
you need a client ID and a client
secret. So let's go ahead, inside of
OAuth Apps, click New OAuth App. And in
here, let's go ahead and call this
Nodebase development. And, my apologies:
the homepage URL is just
http://localhost:3000, and the
authorization callback URL is
http://localhost:3000/api/auth/callback/github.
For production, you should set these to
the URL of your application; exactly,
that is why I'm calling this one
Nodebase development. And now click
Register application.
And in here you have the client ID. So
you can immediately go ahead and add
GITHUB_CLIENT_ID in your environment
here. So I'm going to add
GITHUB_CLIENT_ID, and then I will
prepare GITHUB_CLIENT_SECRET -- I wasn't
sure for a moment what it's called; all
right, client secret.
In order to generate the client secret
we need to click on a button generate a
new client secret. And most of the time
you will need to do two-factor
authentication here.
And once you approve two factor
authentication, you can copy this your
client secret. Make sure to copy it
immediately and add it here. There we
go. So now we have GitHub client ID and
we have GitHub client secret. Great. Now
let's go ahead and go inside of our
source lib auth.ts. In here we have
email and password. And just below it,
let's add socialProviders.
Let's add GitHub client ID. And we can
go ahead and copy this.
And we can do the same thing for client
secret.
And just copy GitHub client secret. I
always recommend that you double triple
check. Copy from here, paste it here.
Copy from here, paste it here. because
people often miss typos and the errors
are very cryptic and then you have no
idea what's wrong. 99% chance it's a
typo. So just make sure you're doing it
correctly. Uh great. And now we have to
add the function here to sign in with a
GitHub provider.
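The relevant slice of the better-auth config ends up looking roughly like this (a fragment, not the whole file; the non-null assertions on the env vars are a stylistic choice):

```typescript
// src/lib/auth.ts (fragment)
export const auth = betterAuth({
  emailAndPassword: {
    enabled: true,
  },
  socialProviders: {
    github: {
      clientId: process.env.GITHUB_CLIENT_ID!,
      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
    },
    // google will follow the same shape once we configure it
  },
});
```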
So we have to do that in uh well let me
see inside of source features out okay
we have login form and register form
let's go inside of a login form first
and besides onSubmit I'm going to do
const signInGithub, like this, an
asynchronous function: const data =
await authClient.signIn.social with
provider "github". And I think we don't
actually need the data. And I'm not sure
if I can also pass onSuccess here -- I
can, great. onSuccess: router.push to
forward slash; onError: toast.error,
"Something went wrong". All right,
let's go ahead and just copy this and
already prepare it for Google as well.
signInGoogle, provider "google". That's
it; that's all we need. Then we're going
to go here to Continue with GitHub and
give it an onClick of signInGoogle --
my apologies, signInGithub, obviously.
And let's give this one onClick
signInGoogle.
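Written out, the two handlers we just dictated look roughly like this (a fragment; authClient, router, and toast come from the surrounding login form, the callbacks follow better-auth's fetchOptions shape, and the "/" redirect target is my reading of the spoken "forward slash"):

```typescript
// login-form.tsx (fragment) - social sign-in handlers
const signInGithub = async () => {
  await authClient.signIn.social(
    { provider: "github" },
    {
      onSuccess: () => router.push("/"), // redirect target is an assumption
      onError: () => toast.error("Something went wrong"),
    },
  );
};

const signInGoogle = async () => {
  await authClient.signIn.social(
    { provider: "google" },
    {
      onSuccess: () => router.push("/"),
      onError: () => toast.error("Something went wrong"),
    },
  );
};
```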
So make sure you are on the login screen
because that's where we just added this.
So, I'm going to click continue with
GitHub right here. And there we go. I
now have to authorize my GitHub profile.
And let's see once I authorize it if I
will be uh logged in. And I am logged
in. Amazing. And I think that if I try
to do new workflow, I have a prompt to
upgrade to pro. Amazing. So now I'm
going to go ahead and sign out. And if I
try login with Google, something went
wrong because we haven't set that up.
But just before we do it, let's make
sure that we copy sign in with Google
and sign in with GitHub here.
And I'm not sure if you noticed: yes, we
just used the login page to create a new
account. So you don't need separate
functions; you can just copy them
exactly as they are. So you can
have your usual sign in with GitHub and
sign in with Google and you can just add
them to the register form. It really
does not matter. So if it has an
account, it's going to log in. If it
doesn't, it's going to uh create an
account. So that's kind of a cool
feature. You don't have to create
anything different for register.
Great. So Google is notoriously more
complicated uh than this, unfortunately.
But let's go ahead and click on Google
and let's start by visiting the Google
Cloud Console.
So inside of here, let me just zoom in.
I think the best way to start actually
is by creating a new project. So click
on your projects right here and click
new project and call this nodebase and
click create. Then wait a second for
this to be created and then select that
project. This way you won't add any new
services or API keys to your previous
projects and it will all be stored in
here. And now let's follow what we have
to do. So we need to go to credentials
and authorize the redirect URIs. Okay.
So I can find credentials here. Maybe
they've improved. Maybe it's easy to do
now. And basically this is what happens:
you click on credentials and get ready
to create them, and then you see that
you also have to do the consent screen.
So I guess let's first click on
configure consent screen, or just go
inside of OAuth consent screen,
and then that takes you here: Google Auth
Platform not configured. Get started
with configuring your application's
identity and manage credentials. So then
you click get started. This is what I
was telling you about. It feels like
they're in the middle of a refactor and
they take you from one place to another.
I have no idea how I even got here, to
be honest, but it looks like I'm inside
of Google Auth Platform, overview, create
branding. All
right, let's call this nodebase. Let's
go ahead and select an email. Next. For
the audience, make sure you select
external. So everyone will be able to
use this, not just your test users.
Click on next.
Add your email address here. Finish. And:
I agree to the Google API services.
Click continue and click create.
Uh, the reason I don't like Google Cloud
is because it's very hard to create
tutorials this way, right? I have no
idea how I got to this place. I just
clicked on a bunch of different warnings
and pop-ups, right?
Uh, okay. So, I guess we now have
branding. That's good. Do not add app
logo. If you add app logo, you will need
to verify your app and that can take a
long time. Do not add your app logo.
Just do it exactly like this.
Okay. So now I'm just figuring
this out along with you. I think that
maybe I can even uh now go back here and
maybe
if I go inside of APIs and Services, I
can now go inside of Credentials. There
we go. I can now go inside of
Credentials. Can I go inside of OAuth
consent screen? So OAuth consent screen
takes me here.
Let me click create OAuth client to see
what that is. Okay, I think that might
be a credential. You see what I'm talking
about? It's everywhere. I don't
know what I'm doing at this point, but
let's go ahead and create the OAuth
client ID, because it sounds like
something we need. So web application,
let's call it
Nodebase
For authorized JavaScript origins
let's add localhost, and for authorized
redirect URIs... there we go. So we are
in the correct place; this is the
credential, it seems:
http://localhost:3000/api/auth/callback/google.
So, make sure you're using the proper
protocol here, HTTP, and click
create.
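For reference, the two values entered on this screen for local development (the callback path follows Better Auth's standard /api/auth/callback/[provider] pattern):

```text
Authorized JavaScript origins:  http://localhost:3000
Authorized redirect URIs:       http://localhost:3000/api/auth/callback/google
```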
There we go. Okay. Uh now we have the
client ID. So let's immediately add that
to our environment.
GOOGLE_CLIENT_ID,
GOOGLE_SECRET...
my apologies, GOOGLE_CLIENT_SECRET.
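In .env, the two new variables end up looking something like this (the values here are placeholders, not real credentials):

```shell
# Google OAuth credentials from the Google Cloud Console (placeholder values)
GOOGLE_CLIENT_ID="xxxxxxxx.apps.googleusercontent.com"
GOOGLE_CLIENT_SECRET="GOCSPX-xxxxxxxx"
```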
And just below we can find the client
secret. Here it is.
There we go. And we can click okay. And
we're still not done yet. There is one
thing we also have to do, which I almost
always forget to do, but this will
create problems in production. I
have no idea where to find it. Maybe in
Audience. Here it is: Audience. Go
inside of audience and click publish
app. Your app will be available to any
user with a Google account. And click
confirm. I have no idea why they don't
tell you how to even do this right
there. There is no flow to follow. There
are no steps to follow. So you just
kind of have to figure it out. If you
don't do it, people will not be able to
use your app. I have no idea why they
made it like that. I have no idea why
this Google Auth Platform thing is
now called like that. Everything feels
like it's all over the place, but I
think we got what we need. Okay. And now
what we have to do is we have to go back
inside of auth.ts,
lib/auth.ts.
And let's copy this,
add Google,
and make sure to replace these two with
Google. And as always, please double
check your copy and paste. So: GOOGLE_CLIENT_ID,
GOOGLE_CLIENT_SECRET. There we go.
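The resulting provider section in lib/auth.ts looks roughly like this. This is a sketch of the shape only: the clientId/clientSecret option names follow better-auth's socialProviders config, and the surrounding betterAuth() call from part one is omitted.

```typescript
// Sketch of the socialProviders section passed to betterAuth() in lib/auth.ts.
// The GitHub entry already existed; the Google entry is the copied-and-renamed one.
const socialProviders = {
  github: {
    clientId: process.env.GITHUB_CLIENT_ID as string,
    clientSecret: process.env.GITHUB_CLIENT_SECRET as string,
  },
  google: {
    clientId: process.env.GOOGLE_CLIENT_ID as string,
    clientSecret: process.env.GOOGLE_CLIENT_SECRET as string,
  },
};
```

better-auth picks up both providers from this map, which is why a single copy-paste with the names swapped is all it takes.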
Let's go ahead and try it now. So,
maybe this actually won't work as I need
it to work, for one specific reason.
So if I go ahead inside of... how do I
access Prisma Studio? localhost:5555. Yes.
So it might be a good idea to reset
your entire database whenever you do
this, simply because...
I have three users, it seems, here. So
I'm just going to go ahead and delete my
users for now. Yeah, I think the problem
is:
if you have the same email. So I've
hidden my email here. Usually you would
see the email here. I've hidden it,
because it's my personal email.
If you have the same email for your
Google account and GitHub account, I
think it won't create a completely new
account. It will just link to your
existing account. So because of that
try and either use a completely new
Google account when testing Google or
just delete all users in your database
and that should cascade everything. Uh
but I just remembered when I click
continue with Google, I think I have an
account that's completely unused here.
So, let me try. There we go. Tutorial
mailing John Doe. Continue and sign into
Nodebase. Looks good so far. No errors
being thrown. Let's see. There we go. It
works. And if I go inside of my Prisma
Studio and just refresh, we should now
see a new user called John Doe. There it
is. And you can see that when I use
OAuth it gives a name to the user, but
since in our normal register form we
don't have a name field, we just use the
email. And you can see that we also have
the icon for the user. Amazing,
amazing job. So that is now working. We
can now officially log in and register
with Google and GitHub. So you can play
around with this. Uh both should be
working.
Let me see what it did. Okay, that
seems to work too. Great. Everything
works perfectly. Amazing. So let's see:
we added GitHub auth, we configured the
token and the secret, and we did the
same with Google auth. And we added the
functions to the login screens.
Another very simple pull request here.
The most complex part was figuring
out Google auth, as usual. So let's
create a new branch, 29-github-google-auth.
I'm going to go ahead. Whoops.
And stage all of these changes, all
three of them. Okay: 29 github
google auth. Let's commit and let's
publish the branch. Again, super simple.
No need to
review in depth, really.
So, I'm just going to open a pull
request, create a pull request, and then
I am immediately going to merge it.
There we go. And once it is merged, I'm
going to go back inside of my main right
here, and I'm going to click synchronize
changes. And I'm going to click on okay.
And I'm going to open my graph just to
confirm that I can see 29 GitHub Google.
Great. So that marks the end of this
chapter and the only thing we have left
is to deploy the app. Amazing amazing
job and see you in the next chapter.
In this chapter, we're going to finally
deploy our project to Vercel. Let's
start by preparing our code for
production. This will be quite easy as
we only need to change a few things.
What I want to do first is do the
following.
Let's go ahead and make sure nothing is
running.
Then let's go ahead inside of
functions.ts.
This file is located inside of the
inngest folder: src/inngest/functions.ts.
So this is the last to-do that we have:
remove in production. What we can do
instead is check whether
process.env.NODE_ENV is equal to "production".
If so, let's use three. Otherwise, let's use
zero.
As simple as that.
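The change boils down to one conditional. This is a sketch: the function id and event name in the comment are placeholders, not necessarily the project's real ones.

```typescript
// Retry zero times locally so failures surface immediately, and three times
// in production, where transient errors (rate limits, network blips) are common.
const retries = process.env.NODE_ENV === "production" ? 3 : 0;

// In src/inngest/functions.ts this value feeds Inngest's createFunction config,
// roughly (id/event are placeholders):
// inngest.createFunction(
//   { id: "execute-workflow", retries },
//   { event: "workflow/execute" },
//   handler,
// );
```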
Now, let's go ahead and do one more
thing.
Let's go inside of our terminal and
let's run npm run build. This is the
same command that will be run once
we add our project to Vercel, but it
is way easier to debug a failed build
in your local environment than on
Vercel. That's why I recommend that you
don't add anything to Vercel until you
can get a successful build locally.
Why should a build fail and how can it
fail?
Most likely because of this step right
here: linting and checking validity of
types. So basically what this just
did is validate my entire code for
any type errors. If you have any type
errors in your code, this will fail, and
it's perfectly normal. It happened to me
a million times. You just have to take a
look at the exact error it is telling
you and then you have to go ahead and
fix that. Now, there is a way to skip
that part, if type errors are not
important to you:
by going inside of next.config,
and inside of here,
typescript:
ignoreBuildErrors, set it to true,
and save that file. And once you do
that, you should see the same result as
me. Basically, I would recommend you
don't do that. Just go ahead and fix
your type errors. You've come this far
you can certainly fix a few bugs here
and there. And once you have this
successfully running, you are ready to
deploy. So one last thing I have to do
here is push these changes. So I'm going
to do that this time without any pull
request. I'm just going to push it
directly to my main branch. So I'm going
to stage this file, 30 deployment,
commit, and synchronize changes.
And once that is pushed to GitHub, we're
going to go ahead and head to
vercel.com.
Go ahead and create an account here.
Click add new project and it should
automatically
connect to your GitHub. So here it is
Nodebase. I'm going to click import
right here. Uh okay, this is my uh
specific situation. So I am deploying
from a private GitHub organization. You
are probably not doing that. So very
easy fix for me. I just have to switch
uh my account here. Just a second.
All right., I, am, now, in, a, new, account.
Again, for you this will not be a
problem at all. The problem is I have as
you can see an organization. So it's not
the private repository is not a problem.
The problem is I'm using an a a p a
private organization. So that's a
premium feature on Versell. But for you
it will be as simple as this. You will
see this page. Everything should be
working for you. Uh so make sure next.js
is select. You don't have to modify
anything here except environment
variables. And thankfully, there's a
super easy way to add all of them at
once. Just go ahead and copy everything.
Go ahead and click paste. That's it.
So, obviously, we're going to have to
change some of these things. But for
now, let's just go ahead and deploy. And
the one thing we don't need actually is
the ngrok URL. You can get rid of that.
As I said, the encryption key should be
something different. You shouldn't call
this "my production key"; you should
make it a real production key.
There are a million services online
that can help you with that.
Uh so in here, it looks like I have some
warnings, but that should not break the
app. So I'm just going to pause the
screen and we're going to see the result
of the build.
And here we go. After a successful
deployment, you're going to see
congratulations. You just deployed a new
project. Uh and then to your
organization name or to your profile
name. So what you should do now is click
continue to dashboard right here. And
this is important. This is your domain.
So don't confuse it with these domains.
These are something else. These are
specific preview domains for that
deployment. But your main uh domain is
this one, this shorter one. So you can
go ahead and visit it. Feel free to do
that. It should work just fine. Uh but
it does need a few changes. What you
should do is you should copy the URL and
then you should go inside of settings.
So inside of your project... right, click
on... let me just refresh here. Whoops.
Click on nodebase. Go inside of
settings. Environment variables right
here. And then we have to change some
things. So database URL is correct.
BETTER_AUTH_URL should be changed. So
let's go ahead and edit BETTER_AUTH_URL to
use this, and just remove the trailing
slash, and click save. And then you have
to do the same thing for all other
places. So let's see: I think the
POLAR_SUCCESS_URL is a localhost, and
NEXT_PUBLIC_APP_URL is also localhost.
So we should change both of them to use
our new app URL here. So let's click save
here, and same thing for
NEXT_PUBLIC_APP_URL. So, edit that
and save it here. There we go. But we're
not done just yet. We also have to
update our keys: client ID and client
secret for GitHub, and for Google. And
obviously, we should also use a new
database URL, because everything that we
just had was for development. So what
I'm going to do is just show you how to
change the GitHub and Google client IDs
so that they work with your production.
So head back to GitHub developer
settings, new OAuth app, nodebase.
You can just do nodebase, or
nodebase-prod, whatever you prefer. This
is now your homepage URL. And for your
authorization callback URL... I already
forgot what it is. Let me just go ahead
and go to Better Auth,
Authentication, GitHub. Here it is. So:
/api/auth/callback/github.
There we go. This is the authorization
callback URL for production. Go ahead
and click register application. Copy the
client ID. Then go inside of here and
find GitHub client ID. Go ahead and
change it.
Let me just check, did I copy it?
Okay. GITHUB_CLIENT_ID,
and click save.
Then you're going to have to generate a
new client secret. Copy that,
and find GITHUB_CLIENT_SECRET here,
and change it, and click save. Great,
so that is GitHub taken care of. Now we
have to do the same for Google Cloud
Console. Maybe it would be a better
idea to create a new project. Maybe
not. I'm not even sure. But yeah, I
think personally I would create a new
project. So I'm just going to go and
call this Nodebase prod.
And I will click create right here. Once
it's been uh created, I'm going to go
ahead and select it. And then we're just
going to go through the entire process
again. So let's this time start with the
OAuth consent screen, which will redirect
us to Google Auth Platform. Let's click
get started right here nodebase.
This, next. External,
next. Let's add an email.
Next, I agree, continue, create.
And once it's created, let's go ahead
and go inside of
clients, I think. Yes, create client.
Let's go ahead and select web
application. Name will be Nodebase.
Well, it can just be Nodebase. And now
for authorized JavaScript origins, well,
this will be your real URL now. And for
authorized redirect URIs, you can visit
Better Auth. So this will be
/api/auth/callback/google here. There we go.
Let's go ahead and click create right
here. And now you have a new client ID.
So go ahead and find GOOGLE_CLIENT_ID.
Edit it. Paste it here. Save it. And you
should have the new client secret. So
find GOOGLE_CLIENT_SECRET, edit...
is it exactly the same, or maybe some
slight change I didn't even notice?
Okay, just make sure you add it here and
click save. And you should also edit
your encryption key, obviously, to
something secure: "my secure production
key". Please don't write this. Just
search for an encryption key generator
and paste it here.
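If you'd rather not trust a random website, one local option is OpenSSL (assuming you have it installed; the variable name is whatever your project used in the encryption chapter):

```shell
# Print a random 32-byte key as 64 hex characters, suitable for an encryption key
openssl rand -hex 32
```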
And great, once you have all of that
changed... so we changed BETTER_AUTH_URL,
POLAR_SUCCESS_URL... okay, the
database URL should also be changed, but
fine, it can stay the same for now.
BETTER_AUTH_URL, POLAR_SUCCESS_URL,
NEXT_PUBLIC_APP_URL, GITHUB_CLIENT_ID,
GITHUB_CLIENT_SECRET, GOOGLE_CLIENT_ID...
everything was changed for production.
And you can use either
the redeploy button from here and you
can go inside of deployments here. Click
here redeploy
and just... I can't dismiss the button.
Okay, just
redeploy. Okay,
there we go. So, I'm going to pause the
screen and after redeployment, you
should be able to use GitHub and Google
and it should be uh in a much better
state for production than what it was.
And here we have the redeployment
successfully working. So we can now
again visit this application. Nothing
much should change now, but we should
have working GitHub and Google. Now
you will again see the authorize screen,
because we just changed the client
tokens, right? Both Google and GitHub
should work here.
And if you want to change your database,
I don't know why I didn't show you this,
because it's super simple to do uh using
Neon, you have branches. So you can just
go ahead inside of your project or you
can just create a new project if you
want to, like nodebase-production. But
they have branches here, and you can
just go ahead and click create a new
branch and maybe call this, I don't
know, production, even though they
already have production. We've just been
using it here. But you can see they even
prepared development for me. I didn't
even see that. And the cool thing is
you can even expire the branch. You can
choose what data to include. A lot of
very cool things. So I think what might
be the best solution is to actually use
the development branch which we already
have here and just use the Prisma
connection string here
this one, and add it to your project
here. And then you will use the
development branch when you develop
locally, and you already have the
correct database URL for production,
since that is the default
one here. Amazing. So that's it. That is
the entire project developed. Amazing.
Amazing job. Thank you so much for going
through the entire project with me. I
think it's almost 24 hours long.
Amazing. Amazing job. Thank you so much.
And see you in the next one.
š» Source Code: https://cwa.run/nodebase
šØ Free Assets: https://cwa.run/node-assets
š„ Part 1: https://youtu.be/ED2H_y6dmC8?si=CM7e7wtAVl25nw25

š Resources:
Try Inngest: https://cwa.run/node-inngest
Try Polar: https://cwa.run/node-polar
Try Better Auth: https://cwa.run/node-auth
Try Sentry: https://cwa.run/sentry
Try CodeRabbit: https://cwa.run/node-rabbit
Try Neon: https://cwa.run/node-neon

In Part 2 of this tutorial, we're completing Nodebase by building the execution engine and all remaining integrations. You'll learn how to implement workflow execution with variables and templating, build trigger nodes that respond to real-world events, integrate multiple AI providers with encrypted credential management, and create messaging integrations. We'll also cover execution history with error tracking, additional authentication providers, and deploying the entire platform to production.

Key features:
š Visual workflow builder
šÆ Trigger nodes (Webhook, Google Form, Stripe, Manual)
š¤ AI integrations (OpenAI, Claude, Gemini)
š¬ Messaging nodes (Discord, Slack)
š HTTP request node
ā” Background job execution with Inngest
š³ Polar payments & subscriptions
š Better Auth authentication
šØ React Flow canvas
šļø Prisma ORM + Neon Postgres
š Type safety with TypeScript + tRPC
š Sentry error tracking + AI monitoring
š§āš» CodeRabbit PR reviews
š Next.js 15 App Router
š± Production-ready SaaS

Timestamps
00:00 Intro & Demo
01:50 18 Node Execution
01:06:58 19 Node Variables
01:29:53 20 Node Templating
02:01:36 21 Node Realtime
02:46:27 22 Google Form Trigger
03:57:27 23 Stripe Trigger
04:30:54 24 AI Nodes
05:49:42 25 Credentials
07:24:48 26 Discord Slack Nodes
08:18:41 27 Executions History
09:19:06 28 Encrypting Credentials
09:31:44 29 GitHub Google Auth
09:47:36 30 Deployment