Learn to build AI agents with Langbase,
one of the most powerful serverless AI
clouds. This hands-on course will teach
you how to create context-engineered agents that use memory and AI primitives to take action and deliver accurate, production-ready results using Langbase. Maham from Scrimba developed this course.
Hello everyone and welcome to this
course on building serverless AI agents
with Langbase. Over the next few scrims, we're going to build something powerful: AI agents without frameworks. These agents are going to be context-aware, meaning the AI agent can dynamically use relevant information to generate accurate, meaningful, task-focused responses. We're living through an
incredible shift in software. Large
language models opened the door, and now the focus has moved to AI agents built on top of LLMs; everyone's talking about agents. But most platforms make it
way harder than it should be with
bloated frameworks, messy deployments,
endless YAML configurations, and slow
feedback loops that actually kill your
momentum. That's why this course breaks
it down to the simplest, most
straightforward way to build AI agents
with Langbase. Now, Langbase is not a
framework. It's a serverless AI cloud
platform specifically designed for
building, deploying, and scaling AI
agents easily. By the end of this course, you will be able to build serverless AI agents with memory and agentic RAG that enable real-time, context-aware, autonomous behavior. You will be able to use Langbase memory agents, which use their proprietary reasoning models to handle terabytes of data without you needing to train anything. You can easily deploy your agents with just one click and scale effortlessly from hobby projects all the way to production without changing your code. Before you start, make sure you're comfortable with JavaScript and TypeScript. If not, I recommend completing Scrimba's Learn JavaScript course. Also, if you feel you need more background knowledge on AI, Intro to AI Engineering and Learn RAG would be good courses to help you brush up. You'll also need to sign up
on Langbase by clicking the link on the
slide and get your API key. We'll need
it for the upcoming lessons. When you
sign up, you land in the Langbase AI
Studio, where you can manage your AI
agents, memories, and your API key. Add
your LLM API keys in your Langbase AI
Studio account; Langbase offers a wide range of LLMs, so you can pick the one that best fits your needs. Before we
dive deeper, let me quickly explain to
you what exactly is an AI agent. You can
think of an AI agent as an autonomous
software powered by LLMs that can
perceive, reason, decide, and act. It's
not just answering questions. It
performs tasks, uses tools, handles
workflows, and adapts based on context
and memory. Building and scaling such
agents has historically been a
challenge, and using full frameworks
makes things slow and rigid. Instead,
Langbase uses a primitives-based
approach. Now, AI primitives are small, composable building blocks, like Lego pieces, that let you focus on building your agent while Langbase handles the infrastructure for you. These AI primitives include pipe agents, memory agents, tools, workflows, threads, chunker, agent runtime, parser, and embed. Don't worry if you don't know what
those terms mean. We'll be explaining
each of these AI primitives in the later
part of the course. I'm Maham Codes,
your instructor and an AI content
developer. You can connect with me on X
and LinkedIn. Click the links on the
slides and you'll be directed to my
profiles. I'm excited to guide you on
this journey of building AI agents. In
the next lesson, we'll dive into the
core concepts you'll need to build your
first context-engineered AI agent with
Langbase. So, let's get started.
Hey folks, before we start coding AI
agents, let's cover some essential
concepts that will guide you throughout
this course, as we'll be building an agentic RAG system. Let me explain what that means. Agentic RAG combines two powerful ideas: agents and RAG. Agentic basically means an AI agent, that is, an autonomous program that can understand what you ask, make decisions, and take actions based on context and memory. RAG is retrieval-augmented generation: instead of just guessing answers, the system first retrieves relevant information from a large set of documents or data, then uses that information to generate accurate, informed responses. Context-engineered AI agents use context and tools to perform tasks autonomously, and in an agentic RAG setup, a retrieval step is added to bring in relevant information that the agent reasons over to generate accurate, context-aware answers. To build this context-aware agentic RAG system, Langbase offers
everything we need. We'll be using Langbase Pipes. Now, pipes are serverless AI agents that run online. They can automate tasks, analyze information, carry out research, or help users by answering questions. Langbase pipes are available as serverless APIs: you write the logic, and Langbase takes care of the infrastructure, deployment, and scaling. This is the easiest way to build, deploy, and scale AI agents without worrying about servers or maintenance. You can build AI agent pipes using either the Langbase SDK to interact with the Langbase APIs (the code method), or the Langbase AI Studio (the UI) to build, deploy, and collaborate on AI agents. In this course, we'll focus on coding with the SDK, but feel free to explore the documentation if you'd like to build AI agents using the Langbase AI Studio. Click the links on the slides to explore both. Next, we'll be using
memory agents from Langbase to build a
contextually aware RAG agent. These are AI agents with human-like long-term memory. You can train them on your data and knowledge bases without managing vector stores or servers yourself. Langbase memory agents make semantic retrieval-augmented generation much easier. And just like with pipes, you can use either the SDK or the Langbase AI Studio to build memory agents. But in this course, we'll be using the Langbase SDK, as I mentioned before. So let's take
a look at the Langbase SDK. As we know, Langbase is an API-first platform, and the Langbase API key you get is used to authenticate SDK calls.
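For reference, here's a minimal sketch of how that authentication looks in TypeScript. The environment variable name LANGBASE_API_KEY is the convention we'll follow in this course; treat this as a sketch and check the Langbase docs for the current API.

```ts
// Authenticate the Langbase SDK with your API key.
// Assumes the `langbase` package is installed and LANGBASE_API_KEY is set.
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});
```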
The SDK supports Python, TypeScript, Node.js, React, and Next.js, and you can learn more in the official Langbase docs, which have comprehensive guides to help you learn about it. As we progress, we'll dive deeper into agentic RAG and how to build these powerful agents in TypeScript using the Langbase SDK, pipes, and memory agents. Before jumping into code, let me quickly show you what you'll be building: a memory-based, context-aware AI agent that answers questions based on documents you upload.
Here's how it works. You create a memory
instance to store your documents and
then you upload data into that memory
and it converts it into a memory agent.
When a user asks a question, relevant
chunks are fetched from that memory.
Then we're going to create an AI agent
pipe that uses those chunks as context
to generate an answer. And finally, the
AI agent pipe will return the response
based on the retrieved data and the
query. To give you a fast preview of the agent we'll build throughout this course, let's use Command.new by Langbase, which can vibe-code this entire agent for you. You just type something like "chat with PDF" and it generates the code using the Langbase SDK and the AI primitives. We'll be building the backend step by step so you can understand exactly what's happening behind the scenes. That way, you'll be able to easily scale your AI agents. In the next lessons, we'll start coding our first AI agent with memory using TypeScript and the Langbase SDK. So let's go.
Before we continue with coding, let's
set up something you'll use throughout
the course: environment variables. Now, environment variables allow you to store API keys securely and use them in your code without typing them into every file. This will make your exercises and challenges much easier to follow. And we'll be using the Langbase API key to build AI agents, so you need to add that in Scrimba. Now, how do you add environment variables in Scrimba? For
that, go to the Scrimba homepage, click on your name or avatar, and open the settings. Inside the settings, click the scrim environment and you'll be directed to an interface where you can add your API keys in the panel. Once added, you can access them in your code using process.env.LANGBASE_API_KEY. We'll use this syntax in the
upcoming lessons. Now, if you're working outside of Scrimba in another editor, for instance VS Code, you need to create a .env file in the root of your project and add your API keys there. For practice on Scrimba, always use Scrimba's environment system. If you want to learn more about it, click the link in the slides and you'll be directed to a scrim teaching you more about the Scrimba environment variable system. One more thing you should know before you jump into coding the AI agents is how you're going to run code in Scrimba. In the upcoming lessons, you will run files with the npx tsx filename.ts command in your
terminal. Before running this command,
make sure to install your dependencies
by running npm install command. And
that's all you need to prepare. With
environment variables set up, you're
ready to continue building agents. So,
let's jump back into coding and put this
into practice.
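If you're working outside Scrimba, a minimal local setup might look like the sketch below. It assumes you use the dotenv package to load the .env file, and that the key is named LANGBASE_API_KEY as described above.

```ts
// Load variables from a local .env file (outside Scrimba) before anything else.
// The .env file would contain a single line like: LANGBASE_API_KEY=your-key-here
import 'dotenv/config';

// After loading, the key is available anywhere in your code:
console.log(Boolean(process.env.LANGBASE_API_KEY)); // true if the key was found
```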
Welcome back. In the last video, you got
a sneak peek at the memory agent on
command new, the one we'll be building
step by step throughout this course.
Now, it's time to actually create it.
We'll build a simple Node.js app in TypeScript using the Langbase SDK to create an agentic RAG system. In this lesson, we'll just focus on creating an agentic memory using the Langbase SDK, the foundation for a contextually aware system, or agentic RAG. By the end of this lesson, you'll have a memory created directly through the Langbase SDK, results printed to your console, and the memory automatically showing up in your Langbase AI Studio account. So, let's get started. First things first: create a new directory and initialize a Node.js project by running the command npm init -y. This will create a package.json
file. Then, with the command npm i langbase dotenv, you will install the dependencies. We'll use the Langbase SDK to create the memory agents and dotenv to manage environment variables. So let's install these dependencies. As I mentioned before, you'll need a Langbase API key to create the agent, so create a .env file with your Langbase API key in it. If
you've set up LLM API keys in your
Langbase Studio profile, the AI memory
and the agent pipe will automatically
use them. Once this is done, create a new file named create-memory.ts, and in this file import dotenv to load the environment variables. Then import the Langbase class from the langbase package. After that, create a new instance of the Langbase class with const langbase = new Langbase(...). This will contain your API key. Now, if we look at the Langbase
documentation, it says that to create
memory on Langbase, we need to use the langbase.memories.create function. So let's do that. In an async function main, we'll create a new AI memory: add const memory = await langbase.memories.create(...). This will create a new AI memory on Langbase. Inside this langbase.memories.create call, we'll be defining the name of the memory, its description, and the embedding LLM model you'll use to create memory embeddings. And finally, we call main so the script actually executes. This is how you're going to create memories on Langbase. Now here's a challenge for you: create a memory named knowledge-base using the memories.create method and use OpenAI's text-embedding-3-large model for embeddings.
Finally, log the created memory to the
console. I'll pause here so you can do
it on your own. Give it a try. Don't
worry if you're not able to do it. We'll
solve this together once you're back.
All right, let's do it together. We're going to create an async function main with const memory = await langbase.memories.create(...) inside it, defining the name of the memory, which is going to be knowledge-base, and giving it a description. Then, since we'll be using OpenAI's text-embedding-3-large model for embeddings, we define that as the embedding LLM model. The final part of the challenge is to log the created memory to the console; for that, we add a console.log with the AI memory in it. And lastly, we're going to call main to execute the script. This is done. Let's save the file.
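Here's roughly what create-memory.ts looks like at this point. This is a sketch based on the Langbase SDK docs as I recall them; parameter names such as embedding_model, and the dotenv import, are worth double-checking against the current documentation.

```ts
// create-memory.ts — create an agentic memory on Langbase.
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // Create a new AI memory named "knowledge-base" using OpenAI's
  // text-embedding-3-large model for the embeddings.
  const memory = await langbase.memories.create({
    name: 'knowledge-base',
    description: 'An AI memory for the Langbase FAQ docs',
    embedding_model: 'openai:text-embedding-3-large',
  });

  // Log the created memory so you can see its details in the console.
  console.log('AI memory:', memory);
}

main();
```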
Now, to create this agentic memory, run this command in your terminal: npx tsx create-memory.ts.
It's asking me to install the following
packages. Let's proceed with yes. This
will create an AI memory and log the
memory details to the console. Like at
the moment, I'm getting this indication
that the AI memory has been created with
the name knowledge base. This is the
description. The owner login is
mahamedev. That's the username of my
Langbase AI studio account. This is the
embedding model that I used to create
this AI memory. Chunk size of 10,000 and
a chunk overlap of48
with the URL where you can access your
memory. If you visit your language AI
studio account and inside the memory
tab, you'll see the memory has been
created for you. Good job. In the next
scrim, we'll upload documents to this
agentic memory. So stay with me.
Hey folks, welcome to this lesson on
uploading documents to memory. In the
last scrim, you learned how to create an
agentic memory in Langbase. Let's take
it a step further. We're going to upload
documents to that memory. By the end of
this lesson, you'll have a document
uploaded into the agentic memory and the
document will be showing up in your
Langbase AI Studio account. First things first: create a new folder in the directory named docs. This is where we'll store all the documents we want to upload. For this demo, I'll add a mock text file named langbase-faq.txt with some FAQ content in it. Let's paste that content and save the file. This mock text file is going to be inside the docs folder. Now, Langbase supports multiple file formats like text, PDF, markdown, and CSV, so you can use whatever makes sense for your project. Next, in the root of your project, create another file named upload-docs.ts.
Then we're going to import dotenv to load the environment variables. After that, we're going to import the Langbase class from the Langbase SDK. Then we're going to import readFile from fs/promises to read our FAQ file asynchronously. We'll also import the path module to help us safely build file paths. The relevant imports are done. The next step is to create a new instance of the Langbase class with const langbase = new Langbase(...), with your API key in it. Next,
we're going to define an async function main. This lets us use await inside, which makes working with asynchronous operations like reading files or uploading them much easier. Then, inside this async main function, we're going to create a const for the current working directory using process.cwd(). process.cwd() returns the folder where your Node.js script is running, and we'll use this as
a base path when locating files. Inside
the same async function, we'll be setting the memory name with a const memoryName, and the name of the memory we created earlier is knowledge-base, so we give it that. This is where we'll upload and store the documents. Now, to read the FAQ file, we're going to create a const langbaseFaq with await readFile(path.join(...)), joining the current working directory, the docs folder, and the langbase-faq.txt file inside it. So we give it that file path. readFile actually reads the file from the disk, and path.join safely builds the
done, we'll be uploading the document to
Langbase memory. Now this is the most
important step. If we look at the
Langbase SDK documentation, it states that to upload documents to a memory, we need to use the langbase.memories.documents.upload function. So let's do that. I'll be creating a const faqResult with await langbase.memories.documents.upload(...), which will be responsible for uploading the document to the Langbase AI memory. Inside this call, we'll be defining the memory name, which tells Langbase which memory to put this file in. Then we'll also be defining the content type and the document name, which will be the same name as the file we created inside the docs folder. We'll also be adding the document with the actual file contents we are going to read. And lastly, inside the meta object, we'll be defining a category and topic. These are extra tags and are completely optional, but useful later for filtering or organizing documents. This is how you're going to upload a document to Langbase memory. Now, here's a challenge
Langbas memory. Now, here's a challenge
for you. Upload the demo FAQ document to
the memory and print the success upload
document message to the console. I'll
pause here so you can do it on your own.
Give it a try. Don't worry if you're not
able to do it. We'll solve this together
once you're back.
All right, let's do it together. As we've created this langbase.memories.documents.upload call, inside it I'm going to define the content type for the FAQ document, which is going to be text/plain, since langbase-faq is a text file. Inside the document name, I'm going to add langbase-faq.txt; that's the name of the document inside the docs folder, and we want this document to be uploaded to our AI memory on Langbase. Then, inside the document field, I'll be adding the actual file content we just read, which in this case is langbaseFaq. So let's do that. The category will be support and the topic will be Langbase FAQs. This is done. The last part of the challenge is to print the success message to the console. For that, I'm going to create a console.log with a message: if the upload succeeded, print a check mark with an "FAQ doc uploaded" message; otherwise, we print an error message. Lastly, we're going to call main so the script actually executes. Let's save the file.
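Here's roughly what the finished upload-docs.ts looks like. Again, this is a sketch: field names like memoryName, documentName, contentType, and the ok flag on the result follow the Langbase SDK docs as I recall them, so verify them against the current documentation.

```ts
// upload-docs.ts — upload the FAQ document into the knowledge-base memory.
import 'dotenv/config';
import { Langbase } from 'langbase';
import { readFile } from 'fs/promises';
import path from 'path';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const cwd = process.cwd();
  const memoryName = 'knowledge-base';

  // Read the mock FAQ file from the docs folder.
  const langbaseFaq = await readFile(path.join(cwd, 'docs', 'langbase-faq.txt'));

  // Upload the document into the memory, with optional meta tags for filtering.
  const faqResult = await langbase.memories.documents.upload({
    memoryName,
    contentType: 'text/plain',
    documentName: 'langbase-faq.txt',
    document: langbaseFaq,
    meta: { category: 'support', topic: 'langbase-faq' },
  });

  // The upload call resolves to a response-like object with an `ok` flag.
  if (faqResult.ok) {
    console.log('✓ FAQ doc uploaded');
  } else {
    console.error('✗ FAQ doc upload failed');
  }
}

main();
```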
To upload this document to the memory, run this command in your terminal: npx tsx upload-docs.ts. When you run this command, you'll see in your Langbase AI Studio account, inside the memory tab and the knowledge-base memory you created earlier, that the document has been uploaded with a ready status. Great job. In the next scrim, we'll see what memory agents
on Langbase do after you upload the
document for context engineering. So
stay with me.
Welcome back. So far you've created a
memory and uploaded documents into it.
But what actually happens after you
upload a document to Langbase? That's
where memory agents come in. A memory
agent isn't just storing raw text. It's
running through a whole pipeline of
processes to make your data reusable,
searchable, and contextually aware for
your AI agents. Let's walk through it
using this simple diagram. The moment
you upload a document, Langbase parses it. That means it breaks down the structure and extracts meaning, not just the text. Next, the content is split
into smaller meaningful chunks. And
instead of dealing with one giant file,
the system works with manageable
sections that keep context intact. Once
your document is split into smaller
chunks, each chunk needs to be
translated into something a computer can
actually work with. That's where
embeddings come in. The chunks are sent
to the embedding LLM model that
generates embeddings. Now, an embedding is basically a numerical representation of meaning. Words and sentences are converted into long lists of numbers, vectors, and these numbers capture semantic meaning. So two pieces of text that are similar in meaning will have embeddings that are close to each other in this high-dimensional space. By converting your chunks into embeddings, Langbase makes it possible for the system to later compare your query against all stored chunks and instantly find the most relevant ones, not just by keywords but by meaning; that is semantic search. So embeddings are a bridge that lets AI understand your text at a deeper, semantic level.
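To make "close to each other in this high-dimensional space" concrete, here's a tiny, self-contained sketch of how similarity between two embedding vectors is typically measured with cosine similarity. This is not Langbase code, and the toy vectors are made up purely for illustration.

```ts
// Cosine similarity: 1 means the vectors point the same way (very similar
// meaning); values near 0 mean the texts are unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings" — real ones have thousands of dimensions.
const upgradePlan = [0.9, 0.1, 0.2];
const changePlan = [0.85, 0.15, 0.25];
const weatherToday = [0.1, 0.9, 0.05];

console.log(cosineSimilarity(upgradePlan, changePlan));   // high — similar meaning
console.log(cosineSimilarity(upgradePlan, weatherToday)); // lower — different meaning
```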
The embeddings are stored in a vector store and indexed for faster retrieval. Once indexing is done, when you query the memory later, the system can instantly look up the most relevant chunks. All these steps happen within minutes, and all of the data is automatically prepared, that is, indexed and stored in a vector store, so you can quickly ask questions of that data. And here's the best part: on Langbase, you don't just get the pipeline as one black box. You get separate AI primitives like memory, workflow, threads, parser, chunker, embed, tools, and so on. That means you
have the freedom to build scalable AI
agents by composing exactly the AI
primitives you need instead of being
locked into one fixed framework. And
using these AI primitives, there are reference agent architectures that leverage Langbase to build, deploy, and scale autonomous agents. With these agent architectures, you can define how your agents use LLMs, tools, memory, and durable workflows to process input, make
decisions, and achieve goals. You can
learn about them by clicking the link in
the slide. Now, let's talk about what
happens when you ask a question. For
that, Langbase can rewrite or refine
your query to improve the entire
retrieval process, making sure the
system looks for the right information.
Then it fetches the most relevant chunks from the index; that is, it retrieves those relevant chunks. Those chunks then get re-ranked, prioritizing the most contextually accurate results. And with the right context in hand, the agent passes it to the LLM to generate a useful, grounded answer. Finally, the system evaluates the response for coherence and alignment with the source memory. So to recap: when you upload a document or data to a Langbase memory, which is an agentic memory or memory agent, it transforms it into something your AI agents can actually reason with. This is
how we move from raw data to
contextually aware responses. In the
next lesson, we'll see how to actually
query the memory and retrieve results.
So follow along.
Hey folks, welcome back. In this lesson, you'll learn how to perform retrieval against a query using Langbase. If you've been following along, here's the journey so far: we created a contextually aware memory agent and uploaded data into that memory. The memory agent parsed, chunked, embedded, and indexed the data in a vector store. Now, we're going to move on to the third step of building a contextually aware RAG system: retrieving relevant data against a query. By the end of this lesson, you'll see the retrieved memory chunks printed to your console. Now, how does retrieval work? Since we uploaded a mock Langbase FAQ document to the AI memory in the previous scrim,
if a user now asks a question like, "How
do I upgrade my individual plan?" Here's
what happens behind the scenes. The
question is embedded into a vector representation. That embedding is compared with all the stored embeddings in your memory index, and a semantic vector search is run to find the most relevant chunks. In the Langbase AI Studio, you can test this easily: click the memory tab in the sidebar menu and open your memory agent. Click retrieval testing, then enter your query into the input box, like "how do I upgrade my individual plan", and you will see the system return chunks along with similarity percentages. Now,
here's a pro tip. You can adjust chunk
size and overlap under the settings to
improve retrieval accuracy. At this
stage, you now have the relevant chunks.
And if you pass these chunks into an
agent's context and ask the same
question, the agent will generate an
answer based on this context. This is
retrieval augmented generation that is
rag cycle in action. But before
generation, let's take a look at the
retrieval step in code. As in the
previous step, we uploaded the document
to the AI memory. Now, we're going to
create a new file in the project
directory by the name agent.ts. And
inside this file, I'm going to import dotenv to load the environment variables from your .env file. This is how we securely store the Langbase API key. Next, again, I'm going to import the Langbase class from the langbase package. This brings in the Langbase SDK so we can interact with the Langbase APIs. Then again, we're going to create a new Langbase client instance using our Langbase API key from the .env file. After this, we're
going to define a reusable function to
run retrieval against a query. For that, I'm writing export async function runMemoryAgent, where the query parameter is a string. This function will run the retrieval process against a user query. Now, if we head over to the Langbase SDK documentation, to retrieve memories on Langbase we need to use the langbase.memories.retrieve function. So let's do that. In a const chunks declaration, I'm going to call langbase.memories.retrieve. Inside this call, the important work happens. We're going to add a query, which is going to be the user's question. Then we set topK to four, which tells the retrieval to fetch the top four most relevant
chunks. After that, we're going to
specify which memory to search in. As in the previous lessons, we created a knowledge-base memory, so we're specifying that. And finally, we're going to return the retrieved chunks so we can use them in our agent. This is done. Let's save the file.
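Here's roughly what the retrieval part of agent.ts looks like at this point. It's a sketch: the memory option passed as an array of { name } objects follows the Langbase SDK docs as I recall them, so double-check the current reference.

```ts
// agent.ts — retrieve the most relevant chunks from the knowledge-base memory.
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

export async function runMemoryAgent(query: string) {
  // Run a semantic retrieval against the knowledge-base memory and
  // return the top 4 most relevant chunks for the query.
  const chunks = await langbase.memories.retrieve({
    query,
    topK: 4,
    memory: [{ name: 'knowledge-base' }],
  });

  return chunks;
}
```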
Now, to run the retrieval process, you need to hook this runMemoryAgent up. For that, I'm going to create a new file in the project directory named index.ts. At the start, I'm going to import the function we wrote in the agent.ts file. So let's do that: import runMemoryAgent from the agent.ts file. After
that, I'm going to define an async function main to run our code. Inside it, we create a const chunks with await runMemoryAgent, passing the user query: "How do I upgrade my individual plan?" And finally, with console.log, we print the retrieved chunks so you can see what came back, and close by calling main so the script actually executes. This is done. Let's save the file.
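And here's a minimal sketch of what index.ts can look like at this stage, using the runMemoryAgent function from the earlier sketch:

```ts
// index.ts — run the retrieval step and print the chunks that come back.
import { runMemoryAgent } from './agent';

async function main() {
  const chunks = await runMemoryAgent('How do I upgrade my individual plan?');
  console.log('Memory chunks:', chunks);
}

main();
```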
Now it's time to run the script in your terminal. For that, I'm going to enter the command npx tsx index.ts; that's the name of the file. And you'll see the retrieved memory chunks printed to your console. These are the most relevant sections of your data for the query, and that's retrieval in action.
In the next lesson, we'll see how to
pass these chunks into an agent for generation, completing the entire agentic RAG cycle. So stay with me.
Welcome back. Up till now, we created a memory agent, uploaded data into that memory, and upon a user query, relevant chunks are fetched from that memory. Now we're moving to the fourth step of building a contextually aware agentic RAG system: agent processing. In
this lesson, we'll be creating an AI
agent pipe on Langbase that uses the
data chunks as context to generate an
answer. And by the end of this lesson,
you'll have an AI agent pipe set up in
your Langbase AI Studio account. To begin with, create a new file named create-pipe.ts in the root of your project. Then, inside this file, import dotenv to load the environment variables, and import the Langbase class from the langbase package. Then, again, create a new Langbase client instance with your Langbase API key in it. Now, to create the pipe agent using the Langbase SDK, if we head over to the documentation, it states that we need to use the langbase.pipes.create function. So let's do that. We'll
be creating an async function main and
inside this async function main, we'll
be creating our AI agent pipe. As we
uploaded the langbase FAQ into the AI
memory, so the AI agent pipe we'll be
creating would be a support agent. For
that creating const support agent using
await
and langbase dotpipes dotcreate function
and inside this function I'm going to
give the AI agent pipe a name
and description. Now name and
description are like human friendly
metadata for the pipe and that will be
shown in the language AI studio account
and is very useful for organization.
When you create a pipe agent, you can
give it initial messages. So for that
we're going to define a messages array.
Inside this messages array, we'll be
defining the pipe agents role and its
content.
These work just like a conversation
history. you'd send an LLM or an agent.
Now, inside the role, you can either
define a system or a user prompt. A
system prompt is like giving the AI its
job description before the conversation
starts, and it defines how the AI should
behave, its role, tone, and boundaries.
For instance, you could give this system
prompt to the agent that tells it that
you're a helpful support agent that
always answers briefly and accurately.
Whereas a user prompt is the actual
input or question from the user that the
AI needs to respond to. For instance, "how do I upgrade my individual plan?" is an example of a user prompt. Now, here's a challenge for you: define the support AI agent pipe by giving it a name, description, and a system prompt. I'll stop here so you can try it on your own. Don't worry if you're not able to do it; we'll do this together once you're back.
All right, let's do it together. Since we're creating a support agent, I'm going to name the agent "AI support agent" and give it a description saying that this agent is here to support users with their queries. The next part is adding a system prompt, so the role is going to be system instead of user, and this is the system prompt: you're a helpful AI assistant; you will assist users with their queries. And lastly, we're going to log the created pipe agent to the console with a console.log, and end with a call to main so the script executes. Let's save the file.
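Here's roughly what create-pipe.ts looks like. It's a sketch: the slug-style pipe name ai-support-agent is my guess for the spoken "AI support agent", and the parameter shapes follow the Langbase SDK docs as I recall them.

```ts
// create-pipe.ts — create the support pipe agent on Langbase.
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const supportAgent = await langbase.pipes.create({
    name: 'ai-support-agent',
    description: 'An AI agent to support users with their queries',
    messages: [
      {
        role: 'system',
        content:
          'You are a helpful AI assistant. You will assist users with their queries.',
      },
    ],
  });

  // Log the created pipe so you can confirm it in the console.
  console.log('Support agent:', supportAgent);
}

main();
```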
And to create this AI pipe agent, run this command in your terminal: npx tsx create-pipe.ts. This will create the support pipe agent.
And if we head over to the Langbase AI Studio account, inside the pipes tab, you'll see that the AI support agent has
been created. If you click that agent,
you'll see the system instructions you
gave to the agent. That's all for now,
folks. In the next lesson, you will generate comprehensive RAG responses by
connecting this agent to the memory. And that's the last step in creating a contextually engineered agentic RAG system. So, let's go.
Hey folks, in this lesson we're going to look at the last step of creating a contextually aware agentic RAG system: generating RAG responses. In this step, we'll be connecting the AI agent pipe we created in the previous scrim to the knowledge-base AI memory, with the Langbase FAQ docs uploaded into that memory, to generate contextually aware responses. In this lesson, the goal is to take the chunks we retrieved from memory, build a system prompt with that context, call the Langbase AI agent pipe, that is, the AI support agent we created in the previous scrim, and return the LLM completion. So, let's get started. In the agent.ts
file we created at the time of
performing retrieval, at the moment, it
is retrieving relevant chunks from memory; there's a runMemoryAgent helper to do that. Now, in the same file, we'll add code to generate a grounded answer using those chunks. We've already imported Langbase, the SDK client used to call the Langbase APIs. We'll also import MemoryRetrieveResponse, which is the TypeScript type for the objects returned by the memories.retrieve function from the Langbase SDK. Then, to generate a response from the chunks and query, we'll be defining an export async function runAiSupportAgent. This runAiSupportAgent function takes chunks and query as input: it receives chunks, an array of retrieved memory chunks typed with MemoryRetrieveResponse, and it receives the query, which is going to be the user question or prompt string. Then we create a const systemPrompt with await getSystemPrompt(chunks) to generate a system prompt for the LLM. Now, getSystemPrompt builds a single system prompt string that includes instructions and the collected chunk text. That prompt sets the agent's behavior and provides the context it must use. Next, we'll run
the agent with a const completion using the await langbase.pipes.run function. In it, we're setting stream to false, which means the call returns a full completion rather than streaming partial tokens. As we're running the agent, we're also defining the name of the agent we created earlier, that is, AI support agent. Then, inside this langbase.pipes.run function, we'll be defining a messages array with a system message whose content is the system prompt. Other than the system prompt, we'll also be defining a user message, which is going to be the actual user question. And finally, the AI returns a
completion that is the answer. Next,
we'll create the system prompt to build the context. For that, we define an async function getSystemPrompt. This function builds the instructions for the AI, and it takes all the chunks, pieces of text from memory, and glues them together into one big string. Then we're going to define the system prompt so the LLM or agent generates accurate RAG responses. These are like rules asking the LLM to be helpful and accurate, only use the given context, and always cite sources. This ensures the AI doesn't hallucinate or make up information. And finally, as we defined in the previous scrim, the agent.ts file already contains code to retrieve chunks from memory. It searches for the four most relevant chunks because we gave topK the value of four, and the chunks it retrieves will be related to the question. Let's save this file.
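Here's a sketch of the generation side of agent.ts. The chunk field names, the pipes.run return shape, and the object-style parameters of runAiSupportAgent follow the Langbase SDK docs as I recall them, so double-check them there; the client setup is repeated so the snippet stands alone, even though in practice it lives at the top of the same agent.ts file as the retrieval code.

```ts
// agent.ts — generation: turn retrieved chunks into a grounded answer.
import 'dotenv/config';
import { Langbase, MemoryRetrieveResponse } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

export async function runAiSupportAgent({
  chunks,
  query,
}: {
  chunks: MemoryRetrieveResponse[];
  query: string;
}) {
  // Build a system prompt that embeds the retrieved chunks as context.
  const systemPrompt = await getSystemPrompt(chunks);

  // Run the support pipe without streaming so we get one full completion back.
  const { completion } = await langbase.pipes.run({
    stream: false,
    name: 'ai-support-agent',
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: query },
    ],
  });

  return completion;
}

async function getSystemPrompt(chunks: MemoryRetrieveResponse[]) {
  // Glue all retrieved chunk texts into one context string.
  let chunksText = '';
  for (const chunk of chunks) {
    chunksText += chunk.text + '\n\n';
  }

  // Rules for the LLM: be helpful and accurate, only use the given context,
  // and always cite sources, so it doesn't make things up.
  return `You are a helpful AI assistant. Answer the user's question using ONLY
the context below. If the answer is not in the context, say you don't know.
Always cite the source of the information you use.

CONTEXT:
${chunksText}`;
}
```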
To run the support agent with the AI memory chunks, open the index.ts file we created in the previous scrim. Other than the runMemoryAgent that we're already importing, we'll add runAiSupportAgent, which we defined in the agent.ts file. We're going to remove the earlier logging for now and create an async function main with a user query: "How do I upgrade to individual plan?"
Then, with a const chunks declaration, we call the runMemoryAgent(query) function, which gets the best context chunks, and we pass those chunks, with a const completion, to the runAiSupportAgent function that takes chunks and query as input. With this function, the AI generates the final response, and finally we log the result to the console, ending with a call to main so the script executes.
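Putting it together, index.ts ends up looking roughly like this, using the function signatures assumed in the sketches above:

```ts
// index.ts — full agentic RAG cycle: retrieve chunks, then generate an answer.
import { runMemoryAgent, runAiSupportAgent } from './agent';

async function main() {
  const query = 'How do I upgrade to individual plan?';

  // 1. Retrieval: fetch the most relevant chunks from the memory.
  const chunks = await runMemoryAgent(query);

  // 2. Generation: pass the chunks and the query to the support agent pipe.
  const completion = await runAiSupportAgent({ chunks, query });

  console.log('Completion:', completion);
}

main();
```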
Now, to run the code, I'm going to run this command in my terminal: npx tsx index.ts. This runs the program for you, and you will see the AI response in the console with the sources cited. Good job. You just built a retrieval-augmented generation (RAG) agent: it first retrieves relevant info from memory.
Then it uses that info to answer
questions accurately and it always cites
its sources. Stay with me as we dig
deeper into the context engineering part
of AI agents.
Welcome back. So far, we built a context-aware agentic RAG system. Here
is what we did, step by step: we created an
AI memory and uploaded data into it
which wasn't just stored as raw text but
processed through a pipeline that made
it searchable and contextually aware.
After uploading the data to that memory, it was converted into a memory agent, and as we discussed earlier, these are the processes in that pipeline. Then we
retrieved relevant chunks when a user
asked a question like "how do I upgrade my individual plan", passed those chunks into an AI agent pipe, and got back an accurate response grounded in both the query and the retrieved data. In this entire process, we used two AI primitives: Langbase memory and AI agent pipes. But Langbase offers many more AI primitives you can combine to engineer powerful, scalable AI agents. So let's quickly go through them. Workflow is one of the AI primitives by Langbase that helps you
build multi-step AI applications and
supports sequential and parallel
execution. It lets you add conditions, retries, and timeouts, and gives you detailed step-by-step logging. You can
think of it as orchestration for your AI
processes. Next up, threads is another AI primitive that manages conversation history and your context. This primitive is essential for chat-based applications where the AI needs to remember what was said before. The parser AI primitive extracts text from different document formats like PDFs and CSVs, and it is useful for pre-processing documents before feeding them into your AI pipeline. Next up, we have the chunker AI primitive that
splits text into smaller, manageable pieces. This AI primitive is useful for building RAG pipelines and lets you focus only on relevant sections of large documents.
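To make the chunking idea concrete, here's a small, generic sketch of splitting text with overlap. This is just the concept, not the Langbase chunker API.

```ts
// Split a long text into overlapping chunks so that context at the
// boundaries isn't lost between neighbouring chunks.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // step forward, keeping `overlap` characters shared
  }
  return chunks;
}

// Example: 1,000-character chunks that share 200 characters with their neighbours.
const pieces = chunkText('...a very long document...', 1000, 200);
console.log(pieces.length);
```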
The embed AI primitive converts text into vector embeddings. This one enables semantic search and similarity comparisons, making it easier to find relevant information based on context rather than just keywords. Now, tools is another important AI primitive; it allows you to extend the capabilities of your AI applications.
They give your AI agents extra powers like web search, API calls, or running code. If you know about MCP, the Model Context Protocol, then adding tools to your MCP server makes for highly versatile
AI workflows where agents can fetch live
data, automate tasks, and stay context
aware. And if you don't know about MCP, then you should definitely take the Intro to Model Context Protocol course on Scrimba. Click the link on the slide and you'll be directed to the course.
Next up, we have the agent AI primitive
that works as a runtime LLM agent and
you can specify all parameters at
runtime and get the response from that
agent. Now, all these primitives are documented in the Langbase documentation. Click the link in the slides to dive into the examples and learn how to use each one in code. Using these AI primitives, Langbase also provides eight reference agent architectures: augmented LLM, prompt chaining, agentic routing, agent parallelization, orchestration workers, evaluator-optimizer, augmented LLM with tools, and memory agent. You don't need any framework to build AI agents; all you need are composable AI primitives. Again, check the link in the slides to explore each architecture in detail. If you're into vibe coding agents with
an agent app, don't miss the next
lesson.
Hey everyone, congratulations on making
it to the final lesson of the course. So far, we've built a full context-aware agentic RAG system step by step. We started by creating an AI memory on Langbase and uploading documents into that memory, which converted it into a memory agent. We saw how memory agents processed that data, retrieved relevant chunks, built a pipe AI agent, generated RAG responses, and explored Langbase AI primitives. Now, let's look at a faster way to build all of this using Command. Now, what is Command? Command, by Langbase, is like having an on-demand AI engineer. You
just describe your idea and it builds a production-ready AI agent for you. All you need to do is prompt your AI agent idea, and Command builds a fully functional agent, complete with its API and an agent app, deployed on Langbase, which is the most powerful serverless AI platform. Now, inside Command you get an agent IDE, which is a powerful code editor for editing, debugging, and observing the agent. You even get an agent app: every agent you build on Command has a production-ready, shareable app. You get a ready-to-use API for your agent with code snippets, and Command supports scalable, production-ready deployments. Now, inside the input box of
Command.new, you can prompt the agent and the agent app, switching between these two modes for specific updates. And with version control, you can even track changes and revert to previous versions. Command also supports forking agents, that is, you can copy other agents and make them your own. You even get live deployed URLs to share your agents with the world. And with the agent flow diagram, you get visualized flows for understanding complex agent logic. Inside Command, you also get memory agents that are ready-to-use RAG pipelines, and you even get automatically generated documentation for your agent through the agent readme. Now, let's create your first agent with Command. All you need to do is prompt Command to vibe-code AI agents. Just describe what you want to create in the prompt box; the more specific you are, the better the results. Enter an initial prompt for your agent idea, and Command will continue from there. You can keep refining and adjusting your agent as you go. Let's use this prompt for the demo: build an AI support agent that uses my docs as memory for autonomous RAG.
Now, when you enter a prompt, Command begins the agent creation process: it lays out the foundational structure of your agent and starts generating the necessary code to bring it to life. This includes the agent code and the agent app's code. The agent.ts file contains the main logic of your agent and its workflow, whereas the app folder inside the directory contains the app and the front-end code, the React components for your agent. Command generates the agent code in real time in the agent IDE, where all the code generation and editing takes place. You can toggle between files and edit them manually, or prompt Command to make those changes. It intelligently detects when an agent requires access to private or extended data, that is, RAG. In such cases, it automatically creates memory agents; in this case, for the prompt we gave, Command created a support docs memory for us. It will store the company documentation and provide it to the support agent when needed. Once this
memory agent has been created, click the
memory and then you can upload documents
to that memory agent. Once uploaded, the
documents are parsed, chunked, and embedded, making them searchable and retrievable by the agent, as we discussed in the previous scrims while we were working on the code. After you've uploaded a document and Command has generated all the necessary code,
the next step is to deploy your agent.
If your agent uses specific LLMs or tools, you may need to add API keys in the environment variables section. And if you are a Langbase user and have LLM keys saved in your profile key set, they'll be automatically imported here.
Once the agent has been deployed, you will have access to the agent app, which is a production-ready app to interact with and share the agent; the agent API, which is a ready-to-use, scalable serverless endpoint for your agent; and the agent flow diagram, which is a visual, diagrammatic representation of the agent's logic to understand how it works. You can also edit the agent's code or download it if you prefer to self-host it. Now, alongside the agent, Command automatically generates a fully fledged application for your agent: that's the agent app, and it's accessible after you deploy the agent. These agent apps are production-ready, they auto-update when the agent changes, and they're fully hosted, instantly shareable, mobile and desktop ready, and customizable using the app prompt mode.
You can test the agent with a prompt
like how do I upgrade to individual plan
and it should respond with an answer
based on the documentation you uploaded.
Other than the agent app, you can use
this AI agent through the agent API as
well. Go to the API tab and retrieve
your API base URL and API key that you
can use in your applications, websites
or literally anywhere you want. After
deploying the agent, Command also
automatically generates a visual agent
flow to help you understand how your
agent works. By clicking this icon right
here on the top right corner of the
agent IDE, you get access to the agent's flow diagram. Agents can quickly become
complex with multiple decision paths,
tools, and branching conditions. The
agent flow diagram provides a clear view of the agent's logic, including its decision paths, tools used, and
branching conditions. That was all for now, folks. If you want to learn more, I highly recommend visiting the Langbase documentation by clicking the link in the slide to learn how you can build, scale, and deploy any type of AI agent. I hope you had a wonderful time learning how to build serverless, context-engineered AI agents using Langbase. So,
what are you waiting for? Go ahead, sign
up on Langbase and reach out to me on my
social profiles if you have any more
questions.
✏️ Study this course interactively on Scrimba: https://scrimba.com/build-serverless-ai-agents-with-langbase-c0cg73hgmh?utm_source=youtube&utm_medium=video&utm_campaign=fcc-langbase

⭐️ Contents ⭐️
- 00:00 Intro to Building Serverless AI Agents
- 03:57 Core Concepts
- 08:11 Setting Up Environment Variables in Scrimba
- 10:00 Create a Memory with Langbase SDK
- 15:07 Upload Documents to AI Memory
- 21:43 Memory Agents on Langbase
- 25:32 Perform RAG Retrieval
- 30:48 Create an AI Agent Pipe
- 35:14 Generate RAG Responses
- 40:43 AI Primitives for Context Engineering
- 44:23 Vibe Coding AI Agents with Command.new