Some areas, again, I think, are just
totally gone.
>> There will come a point where no job is
needed. You can have a job if you want
to have a job for sort of personal
satisfaction but the AI will be able to
do everything.
>> Everybody in the world is now a
programmer. This is the miracle. This is
the miracle of artificial intelligence.
>> Will we still need humans?
>> Not for most things. You know, we'll
decide.
>> In the next year, probably, I don't
know, maybe half the development is
going to be done by AI, as opposed to
people, and then that will just kind of
increase from there.
>> Look, despite what Elon and his Silicon
Valley fan club keep saying to sell
their toys, AI won't steal your job. But
the developer next to you who integrates
AI into their workflow while you're
ignoring it just might. The industry is
evolving fast. Writing code isn't enough
anymore. You need to understand the
future and master the tools shaping it.
Because the best devs aren't rejecting AI,
they're using it to improve their
skills. So if you ever feel everything's
moving too quickly and worry about being
left behind, you're not alone. And I've
got you covered. I'm Adrian and welcome
to my first ever AI-assisted development
masterclass, where we'll build a Nike-level
full stack e-commerce platform from
scratch with the help of Devin AI, the
smartest AI engineer out there.
Throughout this build, you'll master the
best prompting practices for optimal
results, creating clean, production-ready
code, identifying where AI excels and
where it falls short, and combining your
skills with AI to code faster, smarter,
and better. In the next few hours,
you'll build and deploy a billion-dollar
e-commerce site at lightning speed,
including full authentication with
social login and email verification, a
sleek, modern landing page with promo
features, dynamic product listings with
advanced filters, sorting, and variants,
beautiful yet streamlined detailed
product pages, a persistent shopping cart
for guests, Stripe-powered checkout, and
of course, all of it will be fully
responsive, with a scalable codebase.
This isn't
just another copy-paste tutorial. We're
building robust database schemas, secure
authentication flows powered by Better
Auth, and scalable architecture prepared
for multi-brand stores. The stack:
Next.js, TypeScript, Drizzle ORM with
Postgres, Tailwind CSS, Zustand, and
Devin AI as our co-engineer. You've
heard the hype, the skepticism, and even
the outrage.
>> Currently, the only thing that Devin can
do is bang out crappy little demos.
>> I will pile drive you if you mention AI
again.
>> But enough talk because now is the time
to use it. In the next couple of hours,
you, me, and Devin will build what
normally takes weeks, maybe even months.
You'll witness firsthand where AI flies
through tasks and also where it flops
and how a skilled developer, that's you,
can fill those gaps. This isn't just
some AI magic show. It's a coding
masterclass with strategy, speed, and
real-world skills. You have my promise that
by the end, you'll be able to ship
faster, build bigger, and outpace those
developers that are still coding like
it's 2020. Let's master it together.
So, can AI really build this entire
application on its own? You already know
the answer, but let me demonstrate. I'll
prompt the AI to build a full e-commerce
application.
Let's say something like develop a
complete full stack Nike e-commerce
website using Next.js, Drizzle ORM,
Postgres, Better Auth, TypeScript, and
Tailwind CSS. And as you can see, it's
doing a lot of thinking. And at the end
of the day, it actually provided me with
a valid zip folder. But hey, let's open
it up and see what's within it. There
are some files and folders that resemble
a real application, but if you try to go
ahead and run it, you'll run into a lot
of errors, error after error after
error. So, if you're a vibe coder who
hasn't learned how to properly code and
fix those bugs, it's not going to be
easy for you to just build something
using AI. And hey, if you're a real
developer, it also might take some of
your time to go through all of these
bugs. But I should be fair. We need to
give it another shot and ask AI to build
just the landing page because the
initial prompt was far too open-ended.
So let's say something like build a Nike
e-commerce landing page. And as you can
see, this is pretty funny. Nothing to be
scared of. These landing pages aren't
looking that good. And this is exactly
where developers still have the edge.
All you have to understand from this is
that AI can code, but it can't architect
a solution unless you tell it exactly
what to build. If you skip that step,
you end up with spaghetti code, random
folders, and an app you can't even run,
let alone maintain. So, let's start with
sketching the system first. And you
might wonder, why are we using Devin and
not some other AI tool? Well, if you
heard about Devin before, you already
understand the hype.
It is the first fully autonomous
software engineer companion built by
Cognition Labs. Most tools out there are
just helpful assistants, but they wait
for you to guide them. But Devin, Devin
is independent. It can plan, write code,
debug, and even deploy on real-world
projects. Usually, when you run into a
problem or need help, you turn to an AI
tool, explain the issue, give it the
context, even give it a code snippet,
and then paste it into your project to
test. And if it doesn't work, you tweak
it and try again and again and again.
But with Devin, it's a whole different
game. Just tell it what needs to be
done, and it'll read and
understand your entire codebase, build a
step-by-step plan, write clean and
structured code by following linting and
coding standards. It'll even run the
build, fix the issues it encounters
along the way, write tests if needed,
and finally push the code and open up a
pull request for you to review. So,
Devin isn't just a smart chatbot. It is
a full stack AI engineer companion with
a built-in Linux terminal, code editor,
web browser, and a memory of what it's
learned. It also reads docs, installs
packages, and it's always up to date, so
you won't be working with old packages.
Now, another thing that I found super
interesting when exploring Devin is that
it isn't just here to automate tasks. It
also teaches you how to collaborate.
See, when you explain to an interviewer
how you co-built an application with
Devin, you're actually showcasing
many strengths: your architectural
decision-making, your ability to work with AI
tools, and your teamwork mindset. And
those are exactly the qualities that future
employers care about. And don't just
take my word for it. This is already
happening in the real world. Goldman
Sachs, for example, has employed
hundreds of Devin agents across their
teams to boost productivity and speed up
their workflows. And that's why I
personally like Devin. It integrates
just as well with the traditional
industry workflows. And of course, like
any serious AI tool, Devin isn't free,
and it really can't be. Someone's got to
keep those monster servers running
because your old laptop definitely
can't. But for what you get, it's worth
it. And I'll also leave the link down in
the description with possibly some
discount. But here's the thing. If you
don't have a credit card or don't want
to spend any money right away, that's
totally okay and you don't have to do
that to be able to follow along with the
video. The concepts I'll be talking
about mostly concern code
architecture. So there's going to be a
lot of stuff that we'll explore and go
deep into architecting the system, not
just getting AI to do it. So without
even coding or following along, you can
sit down behind your laptop or a smart
TV and just watch me use Devin so you
can see what the future of development
will look like. And honestly,
the future of Cognition, the company behind
Devin, looks even brighter after acquiring
Windsurf. They're building a full
ecosystem consisting of an AI editor, AI
autocompletion, AI code reviews, and
more. So Devin will integrate right
within it. I'll try to get you free
access once that actually drops. So
yeah, it's worth it now, but it's going
to become even more exciting later. So
click the link down in the description,
head over to Devin, and create your
account. But before we touch even a
single line of code, we need a plan. Not
just for the app itself, but for how
we're going to work with Devin AI.
Because building with AI isn't just
throwing a vague prompt and praying for
production-ready code. That's like
telling an intern "just build me
Amazon.com" and then wondering why you
got half a broken website. The real
skill is planning the architecture and
delegation. That is knowing which parts
AI can handle reliably and which parts
need human ownership. Think of Devin as
a developer who works fast but needs
very detailed instructions. Your job is
breaking down the project into clear
bite-sized tasks and writing the kind of
briefs you'd give to a teammate. So,
let's architect this e-commerce
application. Because it's not just a
bunch of random pages, it's a system.
And to make sure Devin doesn't get lost
along the way, you need a blueprint. At
the start, you might be tempted to split
everything into dozens of microservices
on day one. Don't. That's like buying 10
AI tools for writing one email. It's
overkill. So start with a modular
monolithic architecture: one repo, but
clear boundaries inside for catalogs,
carts, orders, and payments. It's easier
to run and change. Later, if you truly
need to scale something independently,
just peel it off into its own service
step by step. And we'll do the same
today by beginning with a monolith
that's structured for growth. So what's
the big picture here? Let's talk about
it. Starting with the front end, you'll
develop this application using Next.js,
TypeScript for type safety, and Tailwind
CSS for utility styling. Since you're using
Next.js, you have different rendering
strategies to choose from. For things
like the homepage and product details
pages, server-side rendering is perfect
because it means that Google can index
them for SEO and users can see something
meaningful right away instead of staring
at a blank screen. For other pages,
we'll use other strategies like ISR or
even PPR, because we can render them
once, cache them at the CDN edge, and
then quietly revalidate in the
background every few minutes. This gives
you the speed of static sites with the
freshness of dynamic content. And making
all these considerations is very
important when deciding on a tech stack.
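To make that concrete, these rendering choices in Next.js often come down to a one-line route segment config per page. A minimal sketch, assuming the App Router; the file path and numbers here are illustrative, not a fixed recommendation:

```typescript
// app/products/page.tsx (illustrative path) — ISR: serve the cached page from
// the edge and quietly re-generate it in the background at most every 5 minutes.
export const revalidate = 300;

// A page that must always reflect per-request data could instead opt into SSR:
// export const dynamic = "force-dynamic";
```

The nice part is that each page picks its own strategy, so the homepage, listings, and product details can all trade freshness against speed independently.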
Now, behind the front end, you've got
the API layer. In Next.js, that often
means using server actions or building
out your own API routes. This is
where you centralize your logic,
fetching products, adding items to the
cart, creating orders, and even
processing payments. Keeping this
consistent and well structured makes the
app easier to maintain as it grows. At
this stage, you shouldn't overengineer
it, but you should still create some
boundaries. For example, cart logic
shouldn't be tangled up with product
catalog logic, and payments should have
nothing to do with marketing content.
This is that separation of concerns that
future-proofs your app. And with server
actions, instead of wiring up fetch
requests everywhere, you can directly
call server-side logic securely. So once
the API layer is done, we want to focus
on the data layer for this app.
Something like Postgres will be your
best friend. It's relational, battle
tested, and scales well. You can use it
for structured data like products,
users, orders, and payments. And a clean
schema keeps everything consistent. On
top of Postgres, you'll probably want
an ORM like Prisma or Drizzle. It'll give
you type safety, migrations, and a
more elegant way to query data without
drowning in raw SQL strings. And it's
very easy to switch to some other
database in the future as the query code
stays the same. And then there's the
final layer, a layer that nobody sees,
but everybody feels, and that is
delivery. This is where CDNs or content
delivery networks come in. A CDN isn't
just about caching your images anymore.
With Next.js, your static pages and even
those ISR pages we talked about can be
cached right at the edge closer to your
users. This means that your app feels
fast everywhere, not just near your main
server. You as a developer and as an
architect have to take these things into
account. The key takeaway here is not to
just think about where to deploy. Think
about how your users will experience this
app globally. So when you zoom out, the
architecture looks like this. A modular
monolith at its core. Clean boundaries
for each domain. Next.js on the front end
with smart rendering strategies, a tidy
API layer using server actions, a strong
relational database underneath, and a
CDN that makes the whole thing feel
snappy worldwide.
That's the mindset of an architect: not
"what shiny tool can I add," but "what's
the simplest structure that'll scale
when I need it to?"
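To make "one repo, clear boundaries" concrete, here's a minimal self-contained sketch of two domain modules. All names are illustrative; in the real project each module would live in its own file under something like src/modules/:

```typescript
// Catalog module: owns product data and exposes a narrow public API.
// Nothing outside this module touches its internals.
interface Product { id: string; name: string; price: number }

const products: Product[] = [
  { id: "p1", name: "Air Max 90", price: 129 },
  { id: "p2", name: "Air Force 1", price: 109 },
];

function listProducts(): Product[] { return products; }

// Cart module: depends only on the catalog's public API, so peeling either
// module off into its own service later stays cheap.
function addToCart(cart: string[], productId: string): string[] {
  if (!listProducts().some((p) => p.id === productId)) {
    throw new Error(`Unknown product: ${productId}`);
  }
  return [...cart, productId];
}

console.log(addToCart([], "p1")); // a one-item cart
```

The point isn't the toy logic; it's that the cart never reaches into the catalog's internals, which is exactly the boundary that makes a monolith "modular."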
So, now that we have architected our
application, we have to figure out how
to convey it to Devin. You most likely
won't realize just how bad at prompting
you are until you're already frustrated.
You open ChatGPT, type, "Hey, generate
me a login page." And it confidently
generates something, but it's not what
you had in mind. It's mostly off, bad
UI, broken logic, barely usable output.
So, you try again. Slightly better,
still not usable. And after four
attempts, you give up and build it
yourself. Does this sound familiar?
The problem isn't the AI. The problem is
that you're prompting like a user but
you're trying to develop like an
engineer. So if there is one thing you
need to know, it's that developing
with AI is not the same as just googling.
When you code with AI, you're not just
asking questions. You're delegating
work. And AI is not Google. It's more
like a junior engineer on your team. And
like any junior dev, if you just tell
them to build a dashboard and walk away,
they'll do their best, but you'll
probably end up rewriting half of it.
But if you give them clear specs, what
the goal is and what tools to use and
what the structure should look like,
they'll deliver exactly what you need,
often faster than you could have done it
solo. This is what good prompting is.
It's not magic words. It's technical
clarity. It's the skill of writing just
enough direction so AI becomes your most
productive teammate. But that's not what
most of us do. We kind of treat AI like
the Stack Overflow search box. Vague
prompts like "build a chat app in React"
won't cut it. You'll get generic code,
half-baked logic, and questionable
architecture at best. It's like asking
someone to build a house without saying
how many rooms, which materials, and
where to put the doors.
That's the key. AI doesn't do well with
"build me an Amazon," but it's excellent
when you feed it one precise feature
at a time. So instead of just asking
from now on, handle your prompts like
this. Start with what you're building,
which is the objective. How it should be
structured. These are the tech
decisions. What the steps are, these are
the tasks. And finally, what you expect
back, which are the output requirements.
Only then will you see a difference from
the generic prompts that you're addicted
to. Here's the exact structure that I
use when I'm prompting AI to help me
build realworld apps. It's simple,
clean, and well tested. Link for this
entire prompt will be in the video kit
down below, completely for free with
even more details and explanations. So,
definitely check it out. But yeah, it'll
look something like this. Objective,
structure, tasks, output requirements,
and then finally some notes. But this is
too generic. So let me show you exactly
how I approach it with examples. So AI
can consider themselves a developer. For
example, saying you are a full stack
engineer assigned to build a modern web
application from scratch. That way,
it'll position itself more as a full
stack engineer than a generalist
responding to your chat. Next, specify
the objective. Explain the goal of the
project in one to two lines. For
example, build a minimal blog editor
that supports markdown formatting and
autosave. Not too long. This is only the
objective. Think of it like a main
headline in the newspaper. Then, specify
the structure: define the tech stack and
structural decisions. For example,
Next.js with App Router, Tailwind CSS for
styling, Better Auth for auth, Postgres,
Drizzle ORM, and so on. And then finally,
very specifically define the tasks,
break them into different logical chunks,
like: set up a project with Tailwind CSS,
Postgres, and Drizzle; create a posts
table with title, content, slug, and
createdAt; build the markdown editor
using React Markdown; add the autosave
feature; and then render blogs at
/blog/[slug].
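Assembled, the full structure might read like this. It's shown here as a TypeScript template string so it's easy to reuse; the details are just the blog-editor example from this section, so swap them for your own project:

```typescript
// A reusable prompt skeleton following the structure above:
// role, objective, structure, tasks, output requirements, notes.
const prompt = `
You are a full stack engineer assigned to build a modern web application from scratch.

Objective:
Build a minimal blog editor that supports markdown formatting and autosave.

Structure:
Next.js (App Router), Tailwind CSS, Better Auth, Postgres, Drizzle ORM.

Tasks:
1. Set up the project with Tailwind CSS, Postgres, and Drizzle.
2. Create a posts table with title, content, slug, and createdAt.
3. Build the markdown editor using React Markdown.
4. Add the autosave feature.
5. Render blogs at /blog/[slug].

Output requirements:
Output the full React component and tell me where to put it.

Notes:
Keep it minimal, follow industry code guidelines, and don't use heavy packages.
`.trim();

console.log(prompt);
```

Notice how each section answers one question the AI would otherwise have to guess at: who it is, what to build, with what, in what order, and what to hand back.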
Right after that, define the
output requirements. What do
you expect the AI to return? For
example, output the full React component
and tell me where to put it. And for any
other information, you can also provide
some additional notes like any extra
details or constraints. For example,
keep it minimal, follow industry code
guidelines, and don't use heavy
packages. Perfect. Now you know not only
how to architect your application,
but also how to convey that
architecture to the AI, one small
defined task at a time. And once again,
I'll leave the full instructions for
prompting in the video kit down below.
So finally, let's dive right into setting
up our project.
In this lesson, I'll show you how to spin
up a new Next.js application, but not the
usual way. Normally, setting up Next.js,
Postgres, Drizzle, Zustand, TypeScript,
ESLint, and Tailwind CSS, and making it all
work together seamlessly, does take a
couple of hours. If nothing breaks, you
jump between docs, install one package
after another, and maybe waste some time
fixing some errors that come up. Even
when I teach that stuff on YouTube, most
of the time goes into the setup instead
of explaining why I choose these tools.
But what if you could skip all that
boring setup and just automate it? Sure,
you could build your own CLI and wire
everything together, or just use
something smarter like Devin AI. With a
single command, be it a smart command,
Devin sets up the entire application
with all the latest tools. It's so quick
and effortless, you'll wonder why you
ever did it manually. So, let's fire it
up and see if Devon really gets it
right. But let's start the proper way by
creating a new GitHub repo so we can
sync our work with Devin. Devin runs its
logic and actions inside a virtual
development environment. It's a virtual
machine, and the repository acts as the
foundation for writing and editing code,
running it on the servers or installing
dependencies, and even more importantly
understanding your project's structure
and goals.
Just like you sync your code between
your local machine and GitHub, now
there's one more device in the mix:
Devin. To keep everything in sync, your
machine, Devin, and GitHub all need to
stay aligned, following the same source
of truth. So when you make a change and
push it over to GitHub, Devin picks it
up and stays updated with those changes,
too. So, let's create a new public
GitHub repo by heading over to
github.com/new
and give your repo a name. In this case,
we're building a Nike e-commerce
application. So, I'll just say
e-commerce. For you, the owner will be
yourself, your own profile. But in this
case, I'll select JavaScript Mastery Pro
as the organization within which
we're running Devin. We're using it
for other projects as well, so it's
just going to be convenient to use
it directly on the organization. If you
want, you can give it a description, set
the visibility to either private or
public, and then just create a new repo.
Once you create it, head over to Devin
within your account and open
DeepWiki in the sidebar. Here you
can add a new repo. And now you'll have
to configure your Git integration to be
able to see your repo. So, head over to
integrations. And you can see that there
is Slack, which is super useful. Linear
for tracking PRs, and then there's
GitHub. So, I already enabled it for a
couple of accounts, but let me also go
ahead and show you how you can connect
it to another account. Just click add
connection, then choose your profile,
configure, and then just give it to
access only to some repos or all repos
if you want to. And you can review the
request right here and give it
permissions. Once you do that, you
should be able to see your repos. So
search for the one with the proper name,
and don't try to index it first. You
first have to add it to your machine. So
find the repo with the same name you
just created and add it to the machine.
It'll say that this will clone the repo to
Devin's machine snapshot. Every Devin
session starts with a new copy of this
machine snapshot. So we're going to add
this repo and click start. And right now
Devin is working within its own virtual
environment, which is running on an
Ubuntu machine. And you can see that right here
where we have our setup agent. There's a
working terminal. And then finally,
there's also a browser running on that
virtual machine. Now, instead of just
going ahead and verifying all of these
commands immediately, like running a git
pull, installing dependencies, and all
that good stuff, I actually want to
immediately set up an application
environment within which we're going to
be working. So, let's write our first
very important prompt. I'll start by
saying something like: create a Next.js
app with TypeScript, ESLint, Tailwind CSS,
Better Auth, Neon Postgres, Drizzle ORM,
and Zustand. I think this is a very
production-ready stack. And now that
we'll use AI to generate it while it's
actually doing its thing,
that'll give me so much more time to
dive deep into the explanations
of why I chose specific technologies.
But for now, let's also improve this
prompt just a tiny bit by asking it to
do something else other than just
setting it up. So I'll say: also
define a products table,
seed sample Nike items, and render them on a
homepage that queries the DB
with Drizzle and renders
the list. Very soon, this should allow us
to visually see some things on the
screen as well. So let's run our first
Devin command. You can see two things.
Number one is that it's thinking. It's
giving us a response and you can also
cancel the prompt. Pretty usual stuff
with all of these AI generation chat
bots, right? But you'll see the level of
depth and detail that Devin goes through
before giving you something. It starts
with saying that it'll help us create a
comprehensive Next.js e-commerce app.
Okay, it checked out the directory, and
then it's going to start setting up the
packages. Here's where Devin
immediately stands out.
It's asking us some questions. It says,
I want to update the setup config. And
you can choose whether you want to
reject or update that. So, I'll just say
update. Go ahead. And now it's going to
start by configuring the secrets. Of
course, it's going to be just dummy
variables for now. But I think you can
immediately notice that very soon we'll
actually need to add these ENVs
ourselves, like the database URL and some
authentication keys. Basically what's
happening is Devon is now going through
the steps on the left one by one. It
already pulled from our repo and now
it's configuring our secrets. Now some
of you might be a bit skeptical, like, hey,
we're just starting out and it's
immediately asking us for our env. Well,
not at all. It just created an env file
filled with dummy variables. So your
actual environment file will never be
pushed over to GitHub, and in the same
way will never be pushed over to Devin's
machine. It'll always stay on your local
machine. That's a very important thing
to note. So let's go ahead and create
the env. It's going to ask us to update
the setup config.
And now it's going to install the
dependencies. So it immediately
recognized that we want to have all of
these different libraries installed and
it found the packages for all of them.
So I'll just click run. For step three,
we want to install Drizzle Kit to enable
database migrations. And while that is
happening, you can see all of the inputs
and outputs within the terminal on the
right. And finally, it's actually diving
into code. So first, it's going to start
by creating a database schema and
configuration files. And this is
actually following proper Drizzle
configuration. Now it's going to create
a database schema and the products
table. And here it is. I mean this is
just some code that we would have to
type out manually. But in reality after
you finish the database architecture
process and thinking and the schemas in
your head. This is just the process of
typing it all out and you do it for
every single application you create. So
I'm very happy that we can very easily
put it down right here. And keep in
mind, you don't have to immediately
accept it. If you want to make some
changes to it, you can very easily
interrupt it and then let it know what
changes you want to make. But for now,
this is good enough. I'll click create.
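For reference, the shape of a products table like this one usually boils down to something along the following lines. This is a plain TypeScript sketch of the row shape plus seed data; the field names are illustrative, not Devin's exact schema (in the real project, this lives in the Drizzle schema file):

```typescript
// Illustrative row shape for a products table, plus seed data in the spirit
// of the generated Nike items.
interface ProductRow {
  id: number;
  name: string;
  description: string;
  price: number; // in production, prefer storing money as integer cents
  imageUrl: string;
  createdAt: Date;
}

const seedProducts: ProductRow[] = [
  {
    id: 1,
    name: "Nike Air Max 270",
    description: "Everyday cushioning",
    price: 150,
    imageUrl: "/shoes/air-max-270.png",
    createdAt: new Date(),
  },
  {
    id: 2,
    name: "Nike Dunk Low",
    description: "Classic court style",
    price: 115,
    imageUrl: "/shoes/dunk-low.png",
    createdAt: new Date(),
  },
];

console.log(seedProducts.map((p) => p.name));
```

Once you've thought the shape through like this, reviewing the generated schema is quick: you're just checking Devin's columns against the picture in your head.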
And then if we need to modify it, we can
always do that as easily as changing a
line in the code. It also created the
index file for the database connection.
And at this point, you might be
wondering like why is it asking me to
click this create or update button so
much? Like, is this how AI-assisted
development is going to look? Not
at all. This is the case just for our
initial command, because it is so deep
and covers a lot of ground. So Devin
just wants to verify that it's getting
what we asked for properly. It immediately
generated an array of Nike products as
well, which is always good to see. Now
it's going to create a Zustand store for
state management and also the
products.ts file which uses TypeScript
for type safety. Oh, and if at any point
you want to see what this file looks
like, you can just click it and it'll
open it up right here within your
browser. Let's also create the
authentication configuration and a few
more files. I'll let it do its thing and
I'll just keep clicking run and create a
couple more times until we have a fully
functioning repo. Then we'll dive deeper
into every single decision that Devin
makes. And I'll have so much time to
explain the why and the how of everything
that is working. But to be honest, I just
want to see how quickly we can get a
full, complex e-commerce Next.js
application running just using Devin,
because typically it would take me a
couple of days to set it all up
properly. Okay, it started diving into
TSX which I'm very happy to see. And one
concern that you might have is the code
quality, right? The cleanliness of the
code. I was initially very skeptical of
these AI tools writing code because,
sure, sometimes the output can look good.
But hey, how does it look behind the
scenes? Is it actually clean? We'll
check out the entirety of the code
that will be generated throughout this
project, and I'll show you just how clean
the code is. I was actually surprised
that it created cleaner code than some
of my teammates, or even I, would
write when developing some of these
projects. So it is incredibly clean. And
there we go. Step number three,
installing dependencies, is now
completed. And now we can maintain those
dependencies and keep them up to date.
That was quick. So immediately we are
setting up a linter. Yep, talking about
code cleanliness, it doesn't just
generate it once and make sure that it
looks good. We'll also want to make sure
that it is properly linted at all times.
And another huge thing is that there's
an optional step of immediately setting
up tests. Since this is a new project
without test setup, for now we'll skip
this step. But hey, at a later stage we
can very easily generate tests for our
entire application. Now later on we'll
run this app locally, so then we'll have
to have the Neon Postgres database set
up. But for now, I'll just click update.
And now it's going to even create some
additional notes with important info,
which is basically just the README.md.
So our project is well documented as
well. Wait, don't click the finish
button yet. First, let me explain the
tech stack. We have to make a couple more
changes first, and then we'll be able to
finish it. But don't click it just yet.
Setup is complete. And our Next.js
e-commerce app is now ready with all of
these different technologies. But hey,
why did I choose this specific stack?
That is the question that matters
because in the future, you won't just
have to type things out manually. You'll
have to think about how to approach building
applications like an engineer with
purpose and intent. So let me guide you
through my thought process of coming up
with this tech stack. First of all, the
elephant in the room, Next.js. It gives
us fine-grained control over rendering
strategies, which is super useful for
e-commerce apps. For product details
pages, you can use static generation,
which means pre-building the page at
build time, so it's served instantly
with no server compute. And that's just
great for SEO and performance. But for
product listing pages where content
might change frequently, but not per
user, we use partial pre-rendering or
PPR or maybe even serverside rendering,
SSR, which lets us keep the page dynamic
while still maintaining performance. Now
the second most important part of the
tech stack is of course the database and
in this case, we're using Postgres.
Postgres is a relational database, which
means that we can use joins to fetch
related data entities like products or
categories or even orders with users in
a single query using different types of
joins. It supports something known as
ACID compliance, which is important for
transactional workflows like payments or
inventory management because it ensures
that our database transactions are
atomic and consistent. In simple words,
it stores structured data, handles
transactions safely, and allows us to
write queries that are ideal for logic
like filtering, sorting, and checkout.
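Atomicity is the easiest of those guarantees to see with a checkout: either the order is created and the stock is decremented, or nothing happens at all. Here's a toy in-memory simulation of what a Postgres transaction (BEGIN … COMMIT/ROLLBACK) gives you; all names are made up for illustration, and the real thing is of course a database transaction, not application code:

```typescript
// Toy simulation of a transactional checkout: mutate a draft copy, and only
// publish ("commit") it if every step succeeds — mimicking atomicity.
interface Store { stock: Record<string, number>; orders: string[] }

let store: Store = { stock: { "air-max": 1 }, orders: [] };

function checkout(productId: string): boolean {
  const draft: Store = { stock: { ...store.stock }, orders: [...store.orders] };
  try {
    if ((draft.stock[productId] ?? 0) < 1) throw new Error("out of stock");
    draft.stock[productId] -= 1;             // step 1: decrement inventory
    draft.orders.push(`order-${productId}`); // step 2: create the order
    store = draft;                           // COMMIT: both steps, or neither
    return true;
  } catch {
    return false;                            // ROLLBACK: store is untouched
  }
}

console.log(checkout("air-max")); // succeeds: order created, stock now 0
console.log(checkout("air-max")); // fails: no half-finished order left behind
```

That "no half-finished order" property is exactly why a relational, ACID-compliant database is the safe default for payments and inventory.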
And then taking this even further, we're
pairing it with Drizzle ORM. It takes a
schema-first approach written in
TypeScript and is type-safe at compile
time. So as you're writing a query, your
editor knows exactly which fields are
available, preventing you from making
some invalid queries. There's also
Better Auth, which we use for
authentication. It supports email and
password, OAuth, magic links, all of it
without locking you into a hosted
solution: your code, your users. It also
has a built-in rate-limiting feature
which protects against brute-force
attacks. And finally, we're using
Zustand for global state management, as
it's a simple hook-based solution. Unlike
Redux, it doesn't require actions,
reducers, or huge boilerplate. It also
avoids unnecessary re-renders, helping
with performance, and it's great for
shared state like the cart, user sessions,
filters, or modals, all of which we'll
have within this application. So there
you have it. Now that AI is actually
doing the typing, we can focus on what
matters, which is understanding the
reasoning behind the choices that we
make. So now that you understand the
decisions behind choosing specific parts
of the tech stack, let's start using it
with Devin. So now that Devin has
generated all this code, let's
immediately push it over to GitHub. Just
click the git icon on the right side and
you can add a message something like
initial project setup. Commit it. It's
asking us whether we want to stage all
of our changes. So definitely say yes or
always. And then sync and push them.
Once you push your changes, publish your
branch right here. And then finally,
let's finish with the project setup.
It'll run some commands to test it all
out. And there we go. Setup passed with
all checks. So, click complete. This
brings us to the end of the repository
setup. It's going to save it into your
profile and your repo. So, let's give it
a few seconds. And if you head back over
to GitHub, check this out. Your entire
initialized application is right here
pushed within a couple of minutes. Feel
free to take a few seconds to add a
short description, topic, and very soon
a website URL because we'll also deploy
this project to the internet. But for
now, you can very easily browse the
files that were created. But instead of
just browsing them, I think you're more
interested to see it all in action. The
only thing we have to do before actually
seeing it in action and running it
locally is provide our own key. That's
the only thing that Devon cannot do for
us, because it would be a security
concern. So there are still some things
that we have to do ourselves. From
experience, the best way to set up a
simple Postgres database online is
through one of the serverless
providers. In this case, I decided to
use Neon because it allows us to, well,
as they say, ship faster. So just go
ahead and create a free account. When
you're in, simply create a new project.
You can name it something like
e-commerce. We can host it on AWS. And
for the region, you can select really
any one that is closest to you and click
create. Once you reach the project
dashboard, click the connect button at
the top right. And then simply copy the
snippet. So now we have to set up our
project within a local environment so we
can add our connection string. To do
that, we'll have to move over from Devon
and GitHub over to your IDE or code
editor. In this case, I'll be using
WebStorm, a super powerful IDE that
recently became completely free for
non-commercial use. It used to cost a
lot, offering great benefits to
professionals, but now you can just
download it and follow along with me.
Once you download it, we
can create a new project within WebStorm
or we can very easily just clone an
existing repo. So, I'll select that.
I'll head back over to my GitHub and
just copy the URL. Once you do that,
paste it right here and let's clone it.
And with that, all of the code that was
generated for us is right here. You can
check it out within the source folder
app. All of the files and folders are
right here. So, let me actually make
this go full screen. And let's not
forget to install the dependencies. You
can see that webtorm allows me to do
that with one single button press. Uh,
but if you're in some kind of an other
text editor, you can just open up your
terminal and then run mpm install. But
now we have to actually add our
environment keys. So let's first take a
look at .env.local, where we have to replace
this database URL with the one that we
have to copy from here. So just paste
your new one right here. Once that is
done, run your application locally with
npm run dev,
which will spin it up on
localhost:3000. And check out what I have right
here. An e-commerce app built with
Next.js, TypeScript, Tailwind, Better
Auth, and all of the other technologies
that we outlined. So, not only did Devon
do the entire project setup, it actually
created a nice boilerplate landing page,
letting us know what the steps of
our setup are. So, in the future, we'll
have to set up our database and Better
Auth. And then it even tells us what the next
steps are. Now, here's the thing, and I
want to emphasize this. This landing
page will most likely look a bit
different for you. As a matter of fact,
it can look totally different. We can
never expect the same output from the
AI. So, this is something that I want
you to know for the rest of the video.
Whenever I run some prompts, you might
get different results from what you can
see on my screen. And that is completely
normal. That might mean that you'll have
to do some workarounds to replicate what
I'm seeing on the screen, but more or
less, we should be good because I'll be
writing prompts that are super precise.
And that's it. In just a couple of
minutes, our entire application has been
set up. You simply describe what you
want in plain English and Devon does the
heavy lifting. But one thing doesn't
change, and it's that you are still the
architect. You choose the tech stack,
you define the plan, and AI just helps
you execute it faster.
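To ground one of those architectural choices: the Zustand store we picked earlier boils down to a create function that receives a set helper and returns state plus actions. Here's a dependency-free sketch of that pattern, with a placeholder cart shape (this is illustrative, not Zustand's actual implementation):

```typescript
// Minimal sketch of the create/set pattern that Zustand popularized.
type Updater<T> = Partial<T> | ((s: T) => Partial<T>);

function createStore<T>(init: (set: (u: Updater<T>) => void) => T) {
  let state: T;
  const set = (u: Updater<T>) => {
    // Accept either a partial object or an updater function, like Zustand's set.
    const partial = typeof u === "function" ? (u as (s: T) => Partial<T>)(state) : u;
    state = { ...state, ...partial };
  };
  state = init(set);
  return { getState: () => state };
}

// Hypothetical cart store shape — the real app's store will differ.
interface CartState {
  items: string[];
  addItem: (id: string) => void;
  clear: () => void;
}

const useCart = createStore<CartState>((set) => ({
  items: [],
  addItem: (id) => set((s) => ({ items: [...s.items, id] })),
  clear: () => set({ items: [] }),
}));

useCart.getState().addItem("air-max-90");
console.log(useCart.getState().items); // the added item is now in the cart
```

Notice there are no actions or reducers anywhere — the store is just state plus plain functions that call set, which is exactly why it stays so lightweight.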
In this lesson, you'll implement the
entire design system of the application
from the color palette and font families
to core typography styles all directly
within the code. There's a complete
theme guide ready for you. This was
created by our designer Fisen, and it's
exactly how things work in real world
projects. When you're working on a
client or industry project, you'll often
receive a design system like this
alongside the main UI designs. We'll
implement this manually first so you can
later feed it into Devon and have it
consistently follow the theme in all UI
related tasks. Yep, Devon can help
automate this too, but it's important to
establish a solid foundation with your
own conventions first. That way, your
system stays clean and maintainable. So
head over into src/app/layout.tsx
and let's first set up a new font family
that we want to use. Instead of Geist, I
want to use a font family called Jost.
So right at the top I will import Jost,
and this is a family coming from
next/font/google. Then we can set it up by
saying const jost is equal to Jost. And
then we can pass a variable with which
we can call it. That's going to be
--font-jost.
And then we can choose the subsets of
the font that we want to use. In this
case, it'll be just the Latin subset.
While we're here, we can also change the
website's metadata by calling it Nike.
And then we can also change the
description by saying something like an
ecommerce platform for Nike shoes.
Perfect. And we also have to modify this
part right here by adding our new font
into the body's class name by saying
jost.className.
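Put together, the font and metadata changes described above might look roughly like this in src/app/layout.tsx (a sketch — your generated file will differ in details like the exact body class names):

```tsx
import type { Metadata } from "next";
import { Jost } from "next/font/google";
import "./globals.css";

// Load Jost from Google Fonts and expose it as a CSS variable.
const jost = Jost({
  variable: "--font-jost",
  subsets: ["latin"],
});

export const metadata: Metadata = {
  title: "Nike",
  description: "An e-commerce platform for Nike shoes",
};

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      {/* Apply the Jost class so the whole app picks up the new font */}
      <body className={jost.className}>{children}</body>
    </html>
  );
}
```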
Perfect. And now let's head over to the
Figma link that you just opened. Check
this out. Here you can see all the
different colors which we'll be using
and their names, such as dark-900, which
is a variant of the color dark, and
then the exact hexadecimal value. And then
we also have the light and supporting
colors. Now if you check out Tailwind's
guidelines on customizing your colors,
you can see exactly how you can add
these custom colors to your theme. And
if you want to dive a bit deeper into
Tailwind CSS, I would highly recommend
checking out our Tailwind CSS crash
course on YouTube and full Tailwind
courses coming soon on jsmastery.com.
So let's implement it by heading over
into app globals.css. And for now, I
will clean up everything from here. We
just want to have the import at the top.
That'll say tailwind CSS. And then we
want to define the theme. Now we can
start with the dark color by doing
--color-dark-900,
and this is going to be that hexadecimal
color of #111111. I
just picked this right here from the
color guide. We can also add different
variants; for example, we can do light-100.
So if I head over here, we can check it out.
That's going to be --color-light-100,
and that's going to be
#ffffff. Perfect. And now you can go
ahead and add all of the other colors.
Dark 700, 500, different light colors,
as well as the supporting colors too.
When you do it, it should look something
like this where we have three sets of
shades and all the different variants.
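Assembled in Tailwind v4's @theme block, the color tokens might look like this — the hex values beyond dark-900 and light-100 are placeholders, so pull the real ones from the Figma guide or the video kit:

```css
@import "tailwindcss";

@theme {
  /* Dark shades */
  --color-dark-900: #111111;
  --color-dark-700: #757575; /* placeholder — check the Figma guide */
  --color-dark-500: #8d8d8d; /* placeholder */

  /* Light shades */
  --color-light-100: #ffffff;
  --color-light-200: #f5f5f5; /* placeholder */

  /* Supporting colors — placeholders, copy the real hex values */
  --color-green: #007d48;
  --color-red: #d33918;
  --color-orange: #d37918;
}
```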
Just so you don't have to type it
manually, I will leave the final
globals.css down below. So you can
just copy and paste it. You don't have
to do it yet. You can do it in a couple
of minutes when I show you how we can
set up the typography of our
application. So right below the colors,
if you scroll a bit down, you'll be able
to see our typography guide. Here we can
use Helvetica as well as some other
fonts like Gilroy later on. And as
before within Tailwind CSS
documentation, you can see how we can
define different theme variables such as
fonts. So let me show you how we can do
it for our application. In this case,
just below the colors, I will set up
--font-jost, and that's going to be equal
to Jost, with sans-serif as a fallback if
we cannot find it. Now, while we're here,
you can also define different heading
sizes. For example, here you can see
that the font size of an H1 heading is
72 pixels with a line height of 78 and
text transform uppercase.
So you can add it right here by saying
--text-heading-1 and setting it to 72
pixels. Similarly, you can set
--text-heading-1--line-height to 78, as per the
Figma design. And then we can also add
--text-heading-1--font-weight, which
will be something like 700,
so it's going to be quite bold. Perfect.
And now we can repeat it for all the
other headings and texts directly from
the Figma design. Just so you don't have
to type it out. As I said, you can get this
complete block containing the headlines
as well as the colors from the video kit
down below. Now to test these within our
application, let's head over into our
app/page.tsx
and remove absolutely everything from
here and just create a new functional
component by running rafce.
You can call it home. And then within
here, return an H1 that'll have a class
name equal to text-heading-1 and
font-jost, and it'll say Nike. Now, if you
head back over to localhost:3000, sorry
for flashing you with a light screen,
but you should be able to see a Nike
text with this nice looking font in a
bold large variant. This means that
everything is working well. Now, while
we're still setting up our project,
let's also get all of the assets into
it. See, within our Figma design, sure,
we have already added all the colors
and typography, but there's going to
be a lot of different images such as
these shoe images and little things like
the Nike logo at the top. So you could
either go ahead and manually download
all of these little icons and images one
by one, clicking many times and then
exporting them, or you can get the
complete public folder that I will
provide within the video kit for you,
just to make it a bit easier. It'll be
just a zip folder on Google Drive. So
just go ahead and download it and when
you do check out your current public
folder, delete everything that is within
it or you can delete the entire public
folder. Find your newly downloaded zip,
unzip it, and just drag and drop it into
your current application, at the root of
the project.
There we go. So now we have the public
with all of the different shoe images as
well as the logos and more. And now
here's a very important thing that we
have to do and that is to push the
changes to GitHub because that's the
only way for Devon to be aware of the
changes we made so that it can properly
modify the application in the future. So
just open up your terminal. I'll
actually open up my current one and
split it into two. I'll call the first
one app and I'll call the second one
terminal
so we can run some additional commands
while our app is running. I'll say git
status to see what the state is. You can
see we made changes to a couple of files,
and then we can do git add ., git
commit -m.
We can say something like "implement
initial styles"
and git push. Perfect. So now if you
head back over to GitHub and reload,
you'll see that some changes were made
just now. So this is amazing. So far,
it's the same workflow you're used to.
Nothing new, but there's a little shift
from here on. You've made some changes
and pushed them to the repo, and that's
what matters because Devon needs to stay
up to date with the latest codebase,
just like any teammate would. Think of
Devon as your colleague or pair
programmer actively working alongside
you. So now that the project has been
set up, let's dive into this new flow.
In this lesson, you'll use Devon for the
very first time. Not to build the entire
app, but to handle those small
repetitive UI pieces. Can you guess what
they are? I mean, they're the usual
suspects. A navbar you've coded more
times than you can count. A product card
that looks the same on almost every
e-commerce site. And of course, a
footer: important, but honestly not
worth your precious time. This is where
AI comes in handy. It shines when you
give it small, focused tasks. Hand it the
whole landing page in one go, and it
might panic and go off script. You're
welcome to try it, but don't say I
didn't warn you. In this video, you'll
learn how to write better prompts and
actually get your AI peer programmer to
do what you need without going off the
rails. So head over to Devon AI and
click on the Devon session. This will
open up a prompt window similar to that
of ChatGPT. But here's the kicker. You
can give it access to a specific
repository. So make sure to choose the
one that we worked on. Oh, and here's
the cool thing. GPT-5 got released
recently and it's already included
within the agent preview. So we can turn
it on. Now, if you head back over to
your Figma design, you can see that we
have a very simple navigation bar at the
top. Then we have these product cards.
We can skip these trending sections. And
finally, there is a very simple Nike
footer. So now we have to write a prompt
that will allow us to build this navbar,
the product cards, and the footer. Let's
start with writing a prompt. We can say
something like you are a senior full
stack
engineer
assigned to build a modern web
application from scratch
and you already know the deal. After
that we can define the objective. So
here you would go about saying something
like build three responsive UI
components. That's going to be a navbar,
a reusable card,
and a footer. We can then say based on
the provided design. So, what do I mean
based on the provided design? Well, for
now, we'll provide these designs as just
images. But soon enough, you'll just
have to provide Figma links and Devon
with the help of the official Figma MCP
will automatically refer to the
designs and try to replicate them
as closely as possible. See, even right
now, if you head over to settings and
then the MCP marketplace here, you can
see all of the different MCPs that Devon
right now supports. So, what is an MCP?
Well, MCP stands for Model Context
Protocol, and it's an open standard
created by Anthropic. Its job is to give
AI models a clean and consistent way to
talk to tools, services, and data
sources. I mean, you can talk to ChatGPT
or Devon out of the box but they only
have the data that you give them. In
this case Devon has access to our GitHub
repo. But if you want them to query your
company's Postgres database, call
internal APIs, look into your file system
or Git repo, or maybe send a Slack
message once a new PR is ready, or
anything like that, well, then you'd want
to integrate them with these MCPs. In
simple words, MCPs allow your AI to
interact with data systems and tools in
a reusable way. And Devon already
integrates with a lot of them. Actually,
there is a Figma MCP too, but it's not
yet official. So, for the time being,
we'll just pass the screenshots of the
Figma designs, and that's going to be
enough info for Devon to generate them.
So, if we head back over to our session,
the second thing we'll have to give is
some more structure.
And here we can say something like use
the tech stack present in the project
structure. We can also say something
like use the typography and color palette
that's also provided in globals.css.
And then we would go ahead and define
the tasks for Devon to replicate the
output we wanted to achieve and finally
give it some notes. Now, instead of me
writing all of this by hand, I'll
actually provide you with the full
prompt in the video kit down below. So,
if you just open it up, you'll see this
first prompt we're on, and then you can
just paste it right here. It'll look
something like this. Now, do you have to
make the prompt so long and so detailed?
Definitely not. It can be much shorter,
but for the purposes of this course,
just so we can follow along and get
similar outputs, I really took my time
to properly define what needs to happen
so we can get better outputs.
And as we're referring to the provided
designs, we of course have to pass those
images into it. So if you head back over
to the design, you can actually export
these components as just images. So you
can export it as for example JPEG by
selecting this navbar and then click
export navbar. We can do the same thing
for this card right here. And finally
for the footer I will actually copy this
as PNG so it's all together. And now I
can start pasting those images. I'll
first add the footer and make sure to
also drop the navbar and the image card.
Before we continue, make sure to give it
access to the repo and select the agent
preview. And let's give it a go.
As soon as you press enter, this text
will actually turn into markdown. So
you'll better be able to see what you
requested from AI. And then it'll start
setting it all up. It'll analyze the
entire codebase, synthesize the search
results, evaluate its confidence, and
it's going to think about the questions
you asked it to do. You can also monitor
the progress on the right side if you
want to, but more or less it's going to
give you everything important here. Oh,
and here's an important note. You see
that for every action there are these
ACUs. Now, what does this ACU stand for?
Well, Devon runs on ACUs and these are
called agent compute units. These
measure how much work Devon is doing
based on task complexity, code
execution, and the virtual machine
usage. The good news is that it doesn't
burn any ACUs while it's idle, running
tests, or setting up repos. Only when it
works. You can always buy more credits
for more ACUs, and Devon will show you
exactly how much each action costs. This
one is very, very inexpensive. Now, you
can see that Devon understood its tasks.
It'll create three responsive UI
components for the Nike e-commerce
platform based on the codebase
structure. Typography and color
variables defined in globals.css.
The tech stack is recognized. The font is here.
Components directory is there. And it'll
get the images from the public folder.
The confidence is very high. And now
there's a plan overview. It's going to
add three components, navbar, card, and
footer. All styled with tokens from
globals.css. It'll optionally
export this index.ts and wire these
components together to create an app
shell and then the sample card will be
temporarily placed in the page for
visual check. This is exactly what we
wanted, and it can do that with
high confidence. And it actually did
it. You can see right here that it
created the navbar, the card, and the
footer. And now it's going to lint it
and put it all together so we can
visually see it within our application.
And there we have it. Devon opened up a
new pull request. It gave us the
components we wanted and everything is
prepared for us right here within this
PR. Now, I love it that the PR was
created instead of just pushing the code
directly to our codebase. That would be
a bit too much. But here we can actually
see what it did. Review this
conversation and it even provides a
complete diagram of what it did. So now
you can check out the file changes right
here.
There you have it. Or even better, we
can open them up within WebStorm. So
first run git pull to fetch the latest
changes, and then, depending on which
editor you're using, you can find your
git system. On WebStorm, for me, it's
Ctrl+Shift+G, which will open up this
git view right here, and then you can
click on it. So head over to the PR that
it opened, right-click it, and then check
out to this revision. Now if you head
back over to your code you should be
able to see the changes that were
implemented. Here's the footer. It even
provided some columns and then the code
is very very clean where it maps over
those columns. There's the navbar with
different navbar links. Once again, the
code is super clean. And then there is a
reusable card. Oh, and after creating a
PR, it even decided to export these
components. So, we can take a look at
this commit right here. And you'll
notice that this one also has this
index.ts from which it nicely exports
all of these components. Now, to be able
to see the changes that Devon
implemented, we can head over to the
page where currently it just says Nike,
and we can try to render a couple of
cards. Before doing that, we can head
over to the layout first and render the
navbar at the top of the children, right
within the body. So that's going to be
navbar.
And then right below it, we can also
render the footer. Make sure to import
these and they're going to be coming
very conveniently from at/components.
But now let's actually render the cards.
And for that we'll have to create some
dummy data to map over them. And I think
once again this is the perfect use for
AI to quickly generate some code that we
don't really feel like typing and it
isn't mentally challenging. So what I'll
do is I'll open up Juny. Juny is a
coding agent built right into WebStorm.
So you can just install it as a package.
Press command shiftp and then type juni
and then it'll pop up right here. So now
I'll ask it to in the homepage create a
section within which there should be an
H2 that'll say
latest shoes and below it there should
be a div inside of which we map over
some placeholder
data. Each product will be a reusable
card which already exists within the
codebase. You can use pictures from the
public folder
and let's see what it comes up with.
I'll give it some time to create it and
then I'll be right back. There we have
it. You can already see it's doing its
thing. And if you don't want to use
Junie, you can just type it out manually.
There we go. It is done. It has created
a list of dummy products that we can
show and then it is conveniently mapping
over them and using the card component
that Devon created passing all the right
props. So if you head back to the
browser it'll look something like this.
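The placeholder data Junie generates will vary, but it typically looks something like the following — the names, prices, and image paths here are made up for illustration, and the Card prop names in your generated component may differ:

```typescript
// Hypothetical placeholder products; adjust fields to match your Card's props.
interface Product {
  id: number;
  title: string;
  price: number;
  image: string; // path inside the public folder
  colors: number;
}

const products: Product[] = [
  { id: 1, title: "Air Max 90", price: 140, image: "/shoes/shoe-1.jpg", colors: 6 },
  { id: 2, title: "Air Force 1 Mid '07", price: 98.3, image: "/shoes/shoe-2.webp", colors: 4 },
  { id: 3, title: "Court Vision Low", price: 110, image: "/shoes/shoe-3.webp", colors: 2 },
];

// In page.tsx these would then be rendered inside the section, e.g.:
// {products.map((p) => (
//   <Card key={p.id} title={p.title} price={p.price} image={p.image} colors={p.colors} />
// ))}

// The data being plain TypeScript means you can also derive things from it:
const cheapest = products.reduce((a, b) => (a.price <= b.price ? a : b));
console.log(cheapest.title); // → "Air Force 1 Mid '07"
```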
We can of course remove this huge Nike
text
and capitalize this L in latest shoes.
So now we already have something that
resembles a Nike e-commerce store.
Pretty cool, right? Still, we want to
make sure that it is perfect. I noticed a
couple of things that can be improved.
For example, the navbar is white and
there's a white Nike logo. The footer
should have been black, so that's not
good. And also, there should be some max
width for the homepage product section
when listing these products. So, what do
you say that we go
ahead and wake Devon up? I'll once again
pass these images so it knows what we're
referring to. Also for the product
image, I will now pass this bottom part
as well. So it knows that it should show
the number of colors, the names, and
everything else. And also show this
complete section so it knows how to show
these cards one next to another. Oh, and
before I start writing the prompt, let's
not forget to push the code that we
added to our codebase. That's the only
way for Devon to know what's happening.
Okay, open up the git interface one more
time. And on the left side, you'll be
able to see different branches. So if
you want to push things, we want to
check out to this branch that Devon has
created with the changes that we wanted
to implement with this PR. All the code
will still be there. If you check out to
that branch, you'll notice that you have
all of the components that were
generated by Devon. But if you head over
to the homepage, they're kind of
missing. So I'll provide this starter
homepage within the video kit down
below. So you can just copy and paste it
here. It's the same thing that we got
generated before mapping over our
product cards. So now we want to push
these changes to this branch. We can do
that by running git add ., git
commit -m,
and I'll say something like "implement
components on homepage", and run git push.
This will now push the changes to this
new branch that Devon created for us.
And then if you head back over to Devon,
it'll now understand those new changes
when it wants to make some additional
changes. So let's write the prompt. Make
sure the navbar looks like in the
design. Logo should be black. Make sure
footer is black too. On the homepage,
make sure there is max width.
Same as the navbar and footer have it. And
make sure the product card
looks like on the design provided
and that it looks good in the list also
provided above and press enter. This
will wake Devon up and it should make
changes to the changes that we
implemented within our codebase. So it
basically feels like we're working
together and collaborating with another
developer. So, let's give it some time
and once it's back, we'll check out the
changes directly within our code editor
or IDE. So, as you can see, it went
ahead and edited a couple of pages and
then it verified it's all good and
pushed it. But check this out. It's even
navigating to the page within a virtual
browser and checking how it looks.
And then based off of that, it can make
additional style changes. That's the
beauty of Devon. It's not just a chat
box to which you say something. It knows
your entire codebase. It knows the
design and you can connect it with all
the different MCPs and you can even
check out its progress right here as
it's implementing it. Check out the
shell that it's working within. Check
out the full codebase here and then even
the browser. It's all integrated within
this web app. And now that Cognition
acquired Windsurf, all of it will be
even more neatly integrated within the
new editor experience. There we go.
Updates are live on the PR and verified
locally. So, heading back over to our
application, I believe we're already on
the branch. So, if I just run git pull,
it should pull all the changes. There we
go. And if we head over to the page, you
can see that now there should be some
other changes right here as well. But
we'll be able to verify it better when
we open it up on local host. So, back on
localhost, you can see that now the
navbar is perfect. The max width for the
homepage is good, and the footer is
already looking much better in the dark
mode. It actually added the fourth item
right here, but the actual product cards
are still not exactly right. And this is
something that I want to point out.
Throughout the entirety of this course,
and in general, when working with AI
agents such as Devon, we'll never get
exactly the same output. So all of
our apps are going to be different. So
at the end of the day, maybe you end up
with an application that looks a bit
different or functions in a bit of a
different way. That's totally up to you
and it's not necessarily a bad thing.
It's just that we can take the app in an
entirely different direction with
different prompts we give it. Still, I'm
not going to defend it here. Obviously,
this isn't looking perfect, so we can
tweak it manually. So let's go ahead and
modify it a bit. I'll remove this
section at the top. We don't need it to
say Nike. We already know it is based
off of the navbar. And now I will head
over into the card and maybe we can
remove this badge part. So here it's
checking whether this label exists, and if
it does, it displays the label. I
don't think we need it at this point in
time. We can just show the actual image.
Now you can also notice that it styled
the colors in a bit of a weird way. It
says bg- and then a custom color within
square brackets, as an arbitrary value.
But instead of that, we can just say
bg-light-100. And now we can see that my
WebStorm will automatically recognize the
color, whereas before it didn't. So we
have to go ahead and make these changes
for a few other instances. It's highly
likely that for you it did it the right
way. A couple more important ones
are the background color of the card. So
I'll change it right here: bg-light-200.
And then we have a couple more
occurrences. Also, let's make sure that
the image itself is object-cover instead
of object-contain. And we don't need any
padding right there. And then there are
a couple more situations where we have
to change this color. I'll actually try
to automate it. So I'll do it a few more
times. You can see as I select it, it
auto selects it everywhere else. So it's
easy for me to just select it everywhere
alongside the closing square bracket and
then just remove it. If you make a
couple of these changes and go back, you
can see that this now looks so much
better. I'll go ahead and do those
changes across the entire repo so that
Devon can actually learn from this and
in the future implement it the right
way. This is looking amazing. So, let me
also open up the inspect element to
check out how it looks on mobile. And
besides this very weird and AIish
looking hamburger menu, which actually
works, by the way, it is all looking
great and it functions well on mobile
devices. So, this is amazing. I'm
actually super satisfied with this
initial landing page. So, now we can
head back over here and make a push: git
add ., git commit -m "landing
page improvements",
and git push. Now if you head back over
to the same PR, you'll notice that now
all of the files are changed: the ones
that we created as well as the ones that
Devon created. And finally, we are ready
to merge this PR. Go ahead and click the
big green merge pull request button
because there are no conflicts with the
base branch and confirm the merge. And
that's it. Now, if you want to, you can
head back over to your editor and check
out back to the main branch.
And make sure to run git pull just to
make sure we have all of the latest
changes. There they are. And now on
localhost, this time running on the main
branch, you can see all of the features
that were implemented by Devon. Pretty
cool, right? The key here is learning
how to let AI handle the simple stuff
instead of freaking out and avoiding it
altogether. Think of AI like a
calculator. You wouldn't do all your
math on paper when a calculator makes it
quicker and easier. And remember, AI
isn't here to replace you if you use it
wisely. It just helps you work smarter
and faster.
And in this lesson, we're going to focus
on creating the authentication for our
e-commerce app. Now, it is advisable to
create a new Devon session. So, head
over to sessions and just create a new
one and give it access to the repo, as
well as the GPT-5 agent preview. We do
this so that Devon doesn't run out of
context, because a lot was happening in
the previous session. So, we want to
start a new one for every new major
feature. Now, just to save us some time,
I'll provide you with a complete prompt
within the video kit down below. So, you
can just copy it and paste it here. But
of course, we'll go through it together.
And another thing you can find in there
is the design inspiration that Devon can
take for building our auth. So, let's
actually go through this prompt
together. Once again, we have the you
are a full stack senior engineer prompt.
Then we're diving into the objective and
it is to build fully responsive styled
authentication pages both the signin and
the signup using a shared group layout.
This is what matters because we are the
architects here. So we have to specify
how we want it done. These pages must
support email and password plus social
sign-in, all of that good stuff, and the design
should be taken from the one we provided
following the theme and typography
outlined before. I'll also add here that
it can use Tailwind V4 for conventions
when applying styles. For example, text
white 900. And then we dive into the
structure. Once again, you don't have to
be this detailed, but just so we have
similar outputs, I decided to be very
very descriptive of how we want things
done. I specified it within different
tasks. And finally, the output
requirements. So basically, we want to
just have two pages back. And with that
said, let's submit it. Now, talking
about auth: every single app needs it. It's
two pages, sign in and sign up, wrapped
in a clean, responsive layout that feels
on brand. Like, if you pause for a few
seconds to think about it, almost
all auth UIs are 99% the same everywhere.
It's two inputs, a button, maybe some
sign-in with Google or a magic link, and then
a link to switch between the pages. You've coded
it before, and you'll likely code it
again, but you shouldn't have to waste
hours retyping the code every time. So,
this is the exact use case where you can
use AI to generate repetitive structured
UI that still maintains your theme and
folder organization. Now, let's give it
some time to work out its magic. Very
quickly, it came up with a complete plan
of how it's going to approach creating
this, such as creating a new auth layout
with a split screen and then adding different
pages within it. Afterwards, it worked
and implemented all of these files and
it gave us a complete pull request with
all the changes. Once again, you can
just click into it to open it up within
GitHub. And all of the changes are right
here alongside the actual photo of the
implemented design. This is pretty
crazy, I got to say. But of course,
let's test it within the actual
codebase. I'll do a git pull to pull the
latest changes, or Ctrl+Shift+G to
open up my Git environment. And then
I'll switch over to the new branch, auth
pages, by checking out to it. You can
also just copy the name and then do git
checkout and move to this branch.
Perfect. So let's see the changes that
have been implemented. You can see that
now auth and root are separated into
different layouts. Within auth, we have a
layout that is shared between both the
sign-in and sign-up pages. The only
difference is the auth form. And you can
see that it did it exactly how I
would approach it, by creating a new
reusable auth form component. And then
there's only one prop called mode, which
decides whether it's going to show the
sign-in or sign-up functionality. And
then there's also the root layout which
has the navbar and the footer within it.
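The separation described here can be sketched as a Next.js route-group layout. The exact file and folder names are assumptions based on common App Router conventions, so yours may differ slightly:

```
app/
  (auth)/
    layout.tsx        <- shared split-screen auth layout
    sign-in/page.tsx
    sign-up/page.tsx
  (root)/
    layout.tsx        <- navbar + footer
    page.tsx          <- homepage
components/
  AuthForm.tsx        <- reusable form, takes a `mode` prop
```

Route groups in parentheses don't affect the URL, so /sign-in and /sign-up still resolve directly while each group gets its own layout.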
And here's our homepage. I think there's
one thing we've got to do, and that is to
move this page within the root layout,
because only then will the navbar and the
footer still be showing. Maybe for
you it was okay already. If you do it
that way, you can still see that our
homepage was left intact, but now you
can navigate over to localhost:3000/sign-up
and you'll be able to see
this beautiful just do it Nike
authentication page which looks exactly
like it does on the design. You can
enter your full information and sign up.
Well, at least soon you will be able to.
And you can also use Google and Apple.
Also, if you want to switch over to sign
in, you can just click right here. And
it is automatically interlinked.
Wonderful. There are absolutely no
changes that I would make here. And now,
if you get back to the PR, you can also
review all the changes.
It's looking good to me. There's even a
diagram with how all of these files are
interacting. So, with that in mind,
let's go ahead and merge our second PR.
No conflicts once again, so we're smooth
sailing. As soon as that is done, I'll
head over to my editor and switch back
to the main branch. Do a pool and then
all the changes will be now on the main
as well. So now on localhost 3000 we
have both the homepage as well as the
auth pages, which for now you can just
visit manually by heading over to
/sign-in. I got to say, this was super
smooth. So let's continue.
So far we've seen AI generate this nice
looking homepage and also these auth
pages. But that's still just UI. AI can
handle it, for sure. But what about the
back end? Can AI really build a secure,
production-ready auth system that goes
behind just this pretty layout? It can,
but only if you guide it carefully. See,
Auth backends are full of repetitive
tables, session cookies, server actions,
zod validation, the same patterns all over
again. But if you clearly instruct the
AI on the judgment calls, like how long
should the session last? How do we
safely migrate the guest carts? How do
we handle all of the error edge cases
and give it precise instructions? Well,
then AI or in this case your intern can
actually do the heavy lifting. That's
exactly what we'll do in this lesson.
You'll learn how to generate modular
Drizzle schemas for users, sessions,
accounts, verification, and guests.
You'll also set up server actions
for sign up, sign in, sign out, and all
the guest handling. And finally, you'll
configure cookie-based
sessions.
You'll define exactly which server
actions to create, where to place them,
how they integrate with the current
platform, and how cookies and sessions
should be handled. Then sit back and
watch Devon handle the rest. So, let's
see it in action. And we can do it the
easy way or the proper way. The easy way
would be to just create a user table,
store the session, and call it a day. But
if you want a real, production-ready
e-commerce application, well, then
instead of doing half the work, you
should first ask the right questions.
And that is how do you handle guests who
haven't signed up yet? How do you manage
sessions securely with cookies? How do
you allow multiple login methods without
complicating the system? And how do you
keep things modular so future features
like OAuth or two-factor authentication
are easy to add? Well, in this case,
Better Auth will come to help. Better Auth
is a comprehensive authentication
framework built for TypeScript. It's
going to give you the framework, but it
expects a specific database structure.
If you don't follow it, things will
break, or worse, you'll lose
session security. And you can see that
right here under core schema where it
specifically defines how a user session
account and verification should look
like. We'll follow pretty much the same
structure with one little addition and
that is a guest table. See, it'll let
you treat anonymous visitors almost like
users, giving them a temporary identity
and keeping their data safe and then
later on we'll merge it into a real
account. This means that every guest
user will get a specialized token named
session tokens stored in a cookie to
recognize the guest across requests. The
way it would work is that if a user
visits your site with no account, you
will create a guest row with a unique
token with a cookie. Then if they add
items to the cart, well, we'll just add
references for their guest ID. And then
later on, when they sign up, you can
simply migrate the cart items and
preferences from guest to user, and
finally delete that guest row, now that
everything lives under that user. It
might sound complicated, but just bear
with me; we'll do it all with the help of AI.
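The guest-to-user migration can be illustrated with a small in-memory sketch. This is hypothetical logic of my own, not Devin's output; in the real app the same steps would run as SQL updates through Drizzle, with the guest row deleted last so a failed merge never loses the cart:

```typescript
type CartItem = { productId: string; quantity: number };

// Merge a guest's cart into the user's cart: quantities for the
// same product are combined; afterwards the guest's rows (and the
// guest record itself) would be deleted in the database version.
function mergeGuestCart(
  userItems: CartItem[],
  guestItems: CartItem[]
): CartItem[] {
  const merged = new Map<string, number>();
  for (const item of [...userItems, ...guestItems]) {
    merged.set(item.productId, (merged.get(item.productId) ?? 0) + item.quantity);
  }
  return [...merged.entries()].map(([productId, quantity]) => ({ productId, quantity }));
}
```
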
Now, we're using Neon for the database,
and thankfully Devin already supports an
official Neon MCP,
which allows you to list, create, and
delete Neon Postgres projects and
branches, execute SQL queries, and work
with tables and schemas. This is
exactly what we need. So go ahead and
enable it and make sure to connect your
account right here. Then head back over
to Devin and create a new session. Once
again, select the repo and a new agent.
And within the video kit down below,
find a complete prompt that we can pass
in. Let's dissect it together. Once
again, we're working with our senior
full stack engineer that is acting as
our intern. And this time their
objective is to develop a scalable
authentication system for a Nike style
e-commerce application. This system
should support both authenticated users
as well as guests using Better Auth,
enabling email and password login (with
no email verification in the MVP), session
management, and smooth guest-to-user
transitions during login and signup. The
system must be modular, extensible and
production ready. Talking about the
structure, we're using the same stack as
before: Postgres plus Drizzle plus the
Next.js App Router and Better Auth. Talking
about the auth, we're going to use a
cookie-based auth system using Better Auth
and guest sessions. We'll manage it all using
Drizzle. And here is the schema that we
want to have: users, accounts, sessions,
verifications, guests, and an index.
That's a lot of different boilerplate
schemas that we would have to write by
hand. But now we can tell Devin what to
do. I would try to be super precise by
taking some fields from Better Auth, so
we have everything we need. For example,
users need an ID, a name, an email, email
verification, an image, and createdAt and
updatedAt fields. Then we have another
table for the sessions, for the accounts,
for the verifications, guests, and more.
Once again, do you have to be so
descriptive? Definitely not. But will it
help? Well, that's a yes.
Then we tell it to use this cookie-based
authentication, enable the guest-to-user
migration, use Next.js server actions,
and then protect the routes and do the
checkout flow. And that's it. So let's
go ahead and submit it. This is the
first time that we're testing out Devin
on something that is not an easy task.
It's as hard as it can be, because it
requires understanding the entire
codebase and then implementing a
solution across databases and
authentication: Postgres, Drizzle,
Better Auth, and other tools, all working
together seamlessly within the existing
codebase. I don't think a single human
intern would be able to do it,
especially not in the couple of minutes
that it'll take Devin to do this task. So,
let's wait a bit and then I'll be right
back. Okay, so let's look at what we
have right here. Devin says that it'll
help us build a robust authentication
system, and it already recognized some
important code locations: there's an
existing Better Auth configuration, a current
schema structure that needs refactoring,
and the database connection setup. The
implementation plan is to restructure
the schema into modular files per table,
then update the existing tables to match
Better Auth requirements and our
specifications, add the new guest
table for guest session management,
create server actions to tie it all together,
and then implement secure cookie-based
sessions. Exactly what we want. It'll
also refactor the existing monolithic
schema to ensure Better Auth
compatibility. It doesn't have to do
this, but it's doing it to keep the code
cleaner. The confidence is medium
because it needs to explore the codebase
further to understand the complete
authentication flow. But it got started.
Then it worked on creating all of these
different schemas and files. Then it did
even more reading and then it even
applied some edits based on what it
learned. Then it tried to run it and
once it did it actually pushed a new
branch and opened up a PR and that PR is
now ready and open right here. It
includes modular Drizzle schemas, Better
Auth wiring, guest session support, and
server actions. And the next step is to
just add a new database URL from Neon
and run DB push to see it. So let's go
ahead and test it out. Back within our
application, make sure to run git pull to
pull the latest changes. Then go ahead
and check out the better-auth guest
sessions branch, or however it named it
for you. If you move over there, you'll
notice all the changes that it
implemented right within your codebase.
For one, there's going to be a lib folder
with auth actions and many different database
schemas. It also went ahead and
installed some packages. So, we have to
run npm install to install those
dependencies right within our
application. Now, let's check it file by
file to see how it did. First, I'm
wondering about how it approached the
Drizzle config. Here we are importing
the defined config from Drizzle kit. But
one thing that I also want to add is for
it to pull the environment variables
because sometimes it has trouble just
getting them from the process env.
So, make sure to install dotenv by running
npm install dotenv, or I'm just going to ask
my WebStorm to do it for me. Then what
you can do is call the dotenv config
function and provide a path to your
environment variables. For me, that's
going to be .env.local.
So whatever you called it right
here, make sure to also specify that path
here.
And this contains the access to the
database URL. Make sure that that is the
case for you too. So that's going to be
.env.local's DATABASE_URL containing your
Neon database string. If you don't have
it yet, just pull it up from Neon. Now
we also want to copy those two lines and
also add them to lib/db/
index.ts.
Here we're also using that schema, and
loading the env config here too is very
important to actually be able to pull it up.
And if you're wondering why we even need
this, you can copy it, pull up Junie, ask it a
question, and say something like: why
do I even need this part?
Then press enter. It very quickly
came up with a response that you need
this in any runtime where environment
variables are not automatically loaded
for you. For example, node scripts or
tooling that executes your code outside
Next.js, for example, drizzle-kit
migration scripts, which is exactly what
we're using here. So without it, the
process.env.DATABASE_URL may be
undefined. And that's why it's
recommended to set up this env config.
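Putting those pieces together, the Drizzle config described here would look roughly like this. Treat it as a sketch: the schema and migration paths are assumptions, so adjust them to your project layout:

```typescript
// drizzle.config.ts -- a sketch; adjust paths to your project
import { defineConfig } from "drizzle-kit";
import { config } from "dotenv";

// Load .env.local explicitly: drizzle-kit runs outside Next.js,
// so environment variables are not loaded for you automatically.
config({ path: ".env.local" });

export default defineConfig({
  schema: "./src/lib/db/schema/index.ts",
  out: "./drizzle/migrations",
  dialect: "postgresql",
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
});
```
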
Okay, with that in mind, we are ready to
generate the schemas based off of the
configuration that Devin created for us.
Now let's take a bit of a closer look at
the user schema that it created. We have
the necessary ID, the name, email.
There's also this email verified, the
image of the user and then created and
updated-at fields. Now, in this case,
Devin made this emailVerified field
required, with a date type. But in
our case, we can make it a boolean field
set to false. Or we could keep this data
type but make it optional. That's
because email verification will come
later, as part of MVP version 2. So for
now, what I'll do is just set it to a
boolean email_verified field.
I'll make it notNull, and by default
it'll be set to false. And make sure to
import boolean right here at the top.
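For reference, the adjusted user table could look roughly like this. The column names follow Better Auth's core schema plus the boolean change we just made, but treat it as a sketch rather than Devin's exact output:

```typescript
import { pgTable, uuid, text, boolean, timestamp } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: uuid("id").primaryKey().defaultRandom(),
  name: text("name").notNull(),
  email: text("email").notNull().unique(),
  // Boolean instead of a date: email verification comes in MVP v2.
  emailVerified: boolean("email_verified").notNull().default(false),
  image: text("image"),
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});
```
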
Perfect. Now, let's go ahead and go
through some of the other schemas.
There's also this schema for
verification that lets us know whether
the user has been verified. It has the
ID, identifier, value, and all of the
timestamps. That's looking good. There's
also the account which connects the
account with a specific ID with a
specific user ID and lets us know which
provider they use to sign up. All of
this is looking good. There's also the
guest schema which only has to keep
track of the session token. And finally,
there is the sessions table. And this
one is looking good as well. What
matters the most is that we have the
user ID and the token right here.
Perfect for the sessions table.
Everything is looking good, but we just
might want to change this ID to a UUID
right here. That's a bit more precise
and works better with Drizzle. Great.
So, with that in mind, I definitely want
to point out that there are likely to be
situations where the code will be
different for you. Maybe in your case,
for the account, it chose to use text
instead of UUID. For me, it was in the
session. So our code output will never
be the same. So you'll have to go
through it and we'll have to make sure
that it looks good. You can always
compare it with my code at a specific
point in time if you head over to my
GitHub and just make sure that all the
schemas and all the necessary things are
properly set up for our application to
work. I'll try to be as detailed as
possible while going through the
codebase because this is a collaborative
process. Devon gives us the output, but
then it's up to us to actually make it
work. And now that we have checked out
all the schemas, let's also open up the
index.ts.
You'll notice that it's exporting some
schemas that we deleted a while ago,
such as products, orders, and order
items. So, simply delete this line and
keep everything as is. This only
contains the newest exports. And if you
take a look down below, there's a
schema.ts file, which was a single
monolithic file that contained all of the
exports, which we no longer need because
now each one of the schemas lives within
its own file. So delete the schema.ts
file. Perfect. Now let's check out the auth
files. First, we can head over into auth
and then index.ts.
Within here, we have a Better Auth options
wrapper. And a couple of things you can
notice is that we're using a drizzle
adapter and we're passing our database
to it. The imports are right at the top.
Then we're passing all of these schemas
right here such as the user schema,
session schema, account, and
verification. And then there's some
additional settings down below such as
this email and password thing which is
turned on. But in this case, at least
for the MVP, we don't have to require
the email verification. So for the time
being I'll just set that to false. Also
for the social providers, we can leave
them empty for now, just an empty object.
And then we can provide some additional
settings on how we store our
sessions by opening up a session
object. And here we can define the
cookie cache. Within here, we can enable
it by setting enabled to true, and then we
can set the maxAge. That's going to be 60
seconds times 60 minutes times 24 hours
in a day times 7 days. And this is
equal to 7 days in total. Now,
below the sessions, we can also
configure our cookies
by setting the session token cookie and
giving it a custom name, and providing
some additional options, such as httpOnly
set to true, and also the
secure flag, which is going to depend
on our environment. So, for example, if
process.env.NODE_ENV is
triple-equal to 'production', then it's
going to be true; else, it's going to be
set to false. We can also give it a
sameSite policy, which we're going to
set to strict, as well as a path, which is
going to be set to the root path. And
finally, the maxAge as well, which is
going to be the same as the maxAge for
the cookie cache: 7 days. Finally, we can
also provide some advanced options at
the bottom. The advanced options here
are going to be all about generating
database IDs. So we can say database and
then define how the IDs of our documents
in our database get generated. For that,
we can create a new callback function.
And we don't want to just return a
string ID like this; we want to use an
actual UUID, because that's what works
better with Better Auth and Drizzle. So, we
want to install a package called uuid.
Just open up your terminal and run
npm install uuid. Import it at the top
by saying import, in curly braces, v4 as
uuidv4, from 'uuid', and then for each
new document we can just generate a new
uuidv4. That's going to look like this.
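As a side note, if you're on a recent Node version you can skip the uuid package entirely and use the built-in crypto module, which also produces version-4 UUIDs:

```typescript
import { randomUUID } from "node:crypto";

// Generates an RFC 4122 version-4 UUID,
// e.g. "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"
const id: string = randomUUID();
```
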
Oh, and very important: outside of the
advanced options, we also have to add some
plugins. In this case, we'll just add one
plugin, and that is the nextCookies
plugin. See, in a typical Next.js server,
cookies can be accessed from request and
response objects. But in the Next.js App
Router and server actions, there isn't a
direct global request/response object.
And that's where nextCookies helps, by
allowing Better Auth to read and write
cookies properly in server-side functions
like server actions or API routes.
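Pulled together, the options we just walked through would look roughly like this. Treat it as a sketch: the key names follow Better Auth's documented shape as best I can reproduce it (the generateId location in particular has moved between versions), and the schema imports come from our own files:

```typescript
import { betterAuth } from "better-auth";
import { drizzleAdapter } from "better-auth/adapters/drizzle";
import { nextCookies } from "better-auth/next-js";
import { v4 as uuidv4 } from "uuid";
import { db } from "@/lib/db";
import { users, sessions, accounts, verifications } from "@/lib/db/schema";

// 60 s * 60 min * 24 h * 7 days = 604,800 seconds
const SEVEN_DAYS = 60 * 60 * 24 * 7;

export const auth = betterAuth({
  database: drizzleAdapter(db, {
    provider: "pg",
    schema: {
      user: users,
      session: sessions,
      account: accounts,
      verification: verifications,
    },
  }),
  // Email/password on, verification deferred to MVP v2.
  emailAndPassword: { enabled: true, requireEmailVerification: false },
  session: {
    cookieCache: { enabled: true, maxAge: SEVEN_DAYS },
  },
  advanced: {
    // Generate UUIDs for database ids instead of the default.
    database: { generateId: () => uuidv4() },
  },
  // nextCookies lets Better Auth read/write cookies in server actions.
  plugins: [nextCookies()],
});
```
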
Great. So now our Better Auth setup is
looking good as well. Let's exit it and
check the auth actions,
right here under auth actions. You can see
that Devin generated quite a few actions
for us. I'm going to go ahead and
collapse them so we can see it all in a
single page. And that's going to look
something like this. It created actions
for creating guest sessions, the guest
session itself, the signup schema, the
sign-in schema, and then the functions
for sign up, signin, and sign out. They
seem to be good. But we can change the
inputs of the signIn and signUp
functions to FormData instead of an
object. That way, we can pass these
actions from the auth pages to the auth
form without making the auth pages
client-side. This is nothing big, and you
can keep it as it is, but it's just going
to make our application that much better,
especially since we're using Next.js. So
in this case, let's start with signUp.
And instead of simply getting the data
by parsing the inputs we have right
here, what I'll do instead is accept
the form data right here, through
props, or through the first parameter. So
I'll say formData, of type FormData.
We're accepting it. And then we can
extract the data like this, by saying
const rawData is equal to an object
where we have a name, which is equal to
formData.get('name') as string. Then we
have the email as our second field: that's
formData.get('email') as string. And
finally, we have the password, which is
formData.get('password') as string. Now we
can take that rawData and form a new
data object out of it by saying
signUpSchema.parse with
the rawData that we're getting
right here. And then we can just form a
new API request sign up using email by
passing all of these proper fields from
our front end that get converted to our
back end. This is perfect. So now we can
copy this part and we can replicate it
over for sign in. So let's expand the
sign-in action, and right at the top I
will paste the rawData and make sure
that we are getting the formData of
type FormData. But in this case, there's
not going to be a name field. It's just going
to be email and password, because we're
just signing in using our email and
password. And once again, we're taking
that rawData and passing it into
signInSchema.parse. So, make sure
to copy the name of the variable and put
it here. And finally, we make a sign-in
email API call. Perfect. Now, there's
just one more function that we have to
create to get the current user. And this
is coming straight from the Better Auth
documentation. See, the server provides
a session object that you can use to
access the session data. It requires a
request headers object to be passed to
the getSession method. And here's an
example using Next.js. So I'll just
copy this part that they have right
here. Head back over here, and just below
sign-in, I will create and export a new
asynchronous function called
getCurrentUser.
I'll open up a new try and catch block.
In the catch if something goes wrong I
will just console log that error. And I
will also just return null because
something went wrong. But in the try I
will try to get access to this session
by saying const session is equal to await
auth.api.getSession. Then we pass in the
headers, and the headers we can get
access to by saying await headers(), which
we have to import from next/headers right
here. Perfect. And once we get our
session, we can just return the user out
of that session. So that is
session?.user, or null in case it
doesn't exist. Perfect.
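The FormData extraction pattern used in both actions can be shown standalone. This is just the shape of the idea, with validation reduced to a simple presence check instead of the zod schemas from the video:

```typescript
type SignUpFields = { name: string; email: string; password: string };

// Pull typed fields out of a submitted FormData object, the same way
// the signUp server action reads its input before validating it.
function extractSignUpData(formData: FormData): SignUpFields {
  const name = formData.get("name") as string;
  const email = formData.get("email") as string;
  const password = formData.get("password") as string;
  if (!name || !email || !password) throw new Error("Missing required fields");
  return { name, email, password };
}
```
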
So now our actions are properly set up
to be able to communicate with the front
end. Oh, and since we've already
implemented a function to get the user,
we can test it out by heading over into
the app's root page. And right at the top,
we can try calling it. You can do that by
saying const user is equal to await
getCurrentUser(). And then we can simply
console.log that user,
just like so. And don't forget: since
we're using await to get it, we have to
also make this function asynchronous.
We'll explore this later on as soon as
we authenticate our first user. But for
now, let's go ahead and check out our auth
form.
So head over to the auth pages, sign-in
or sign-up. Instead of just passing a
mode to this auth form, what we have to do
is also make it do something besides
just rendering UI and that is to
implement the auth actions that we just
created. So we can give it an additional
prop called onSubmit, and here you can
pass the action. The action in this case
will be signUp, coming from lib/auth/actions,
just like this. And you can do
the same thing for sign-in by passing an
additional action: onSubmit will be equal
to, in this case, signIn. Perfect.
Now, we have to go into the auth form and
implement all the connections for auth to
work. What does that mean? Well, first
we have to define this new prop we're
accepting. The prop is called onSubmit,
and it is a function that accepts form
data of type FormData and then returns
a promise
that contains ok, which is a
boolean, as well as the userId, which is
optional, of type string, or void,
nothing, if something went wrong. Now you
can see that there's a yellow warning
right here saying that maybe if this is
a server action, it would be good to
actually name it as an action, like
onSubmitAction. This is actually very
nice advice from ESLint. If you want, you
can go ahead and change it in multiple
places, but for now I'll just leave it as
it is. It's just a warning. Okay. So now
we have to actually accept that prop,
onSubmit, within the props. Now, after we
successfully submit, we'll also need to
reroute the user. So what I'll do is
set up the router functionality
from Next.js by saying const router is
equal to useRouter, and make sure to
import useRouter from next/navigation,
not next/router, but next/
navigation. Then we can create the
handleSubmit function: const handleSubmit
is equal to an asynchronous
function that accepts an event of type
React.FormEvent, and as soon as we call
it, we need to prevent the default
behavior of the browser, which is to
reload. And then we can extract the form
data. We can do that by saying const formData
is equal to new FormData(e.currentTarget).
Right now, TypeScript is complaining that
the argument of type EventTarget is not
assignable to HTMLFormElement. So
I think what we have to do is just give
it some more info on what this form has.
So I'll say HTMLFormElement right
here, in the event type. And that means
that it'll contain
the current target with all of the form
values. And now that we're getting the
form data, let's just open up a try and
catch block. In the catch, we can just
console log the error if there is one.
And in the try, we can try to call the
server action responsible for our
specific mode of the auth form. That can
either be sign-in or sign-up,
depending on which page we're on. So we
can say const result is equal to await
onSubmit, and once again, this onSubmit is
different depending on the prop that we
pass; it can be either signIn or signUp.
And to it, we have to pass the
formData. Perfect. And then we can
check if the result is ok. In that
case, we can just use the router.push
functionality and push over to the
homepage. Just make sure that the router
is coming from next/navigation and not
next/router. Perfect. It looks like the
ok property does not exist on the result. I
think that's only happening if we are in
the catch, because then we're returning
null. But if something goes right, which
it should, in that case there should be
a result.ok property. Perfect. And
then we're pushing back to the homepage.
So now we have to use this handle submit
when we submit the form. We can do that
right here by scrolling down to the form
and then instead of just preventing the
default, what we can do here is simply
call the handle submit function once we
submit the form and that should happen
once we click the button that is at the
bottom of the form either sign in or
sign up. Oh, and one more important
thing. In our backend server
actions, we have three fields: email,
password, and name. But here on the
front end, we might have a fullName.
So, make sure to rename all of the
instances of fullName here: within the
htmlFor, within the id, and the name as
well. Make sure that it is
just name and not fullName.
Perfect. And that's it. Now is the time
to test it all out. In this case, we
have generated a lot of different
schemas. We went through them at the
start. So, if you head over to lib db,
you can check out all the schemas, but
these are just the files that will
generate the schemas when they're run.
So now it's our turn to trigger
different commands that'll actually run
real SQL queries that'll form the future
of our database. So head over into
package.json. And you can see here that
Devin has created a couple of these
scripts for us: db:generate, db:migrate,
db:push, and db:studio.
So we have to run the db:generate
command. But before we do, let's just do
one final check of our lib database
schema. Because what's going to happen
is that this DB generate will look at
our schema file, compare it with the
current database state, and then if it
finds differences, for example, if you
added new columns or schemas or change a
table, it'll generate a new migration
SQL file inside of migrations folder. In
this case, I think there's just one
small mistake I made. The emailVerified
column should be the same right here:
email_verified.
It should not say verification. So I
just want to make this quick fix before
we go ahead and run the db:generate
command. So now, open up your terminal
and go ahead and run npm run
db:generate. This will generate all five
different tables. And after that we are
ready to push those changes to Neon. And
that's exactly what the db:push command
does. So simply run npm run db:push and
you'll see that it'll apply all the
changes. Now how do we actually check
out our tables? There are two ways. The
first one is to just head over to your
Neon console and check out the
tables, and you'll see that we have five
different tables right here. And the
second way is to run npm run db:studio,
which will spin up a local studio where
you can see your entire database. So, in
case you want to check it out later on
once we add some more products and
stuff, you can definitely do it within
here as well. For now, I'll stop it from
running. And with that, we are ready to
run our application after this huge
push. I think it's already running for
me within the second terminal, but I'll
just go ahead and rerun it because we
made so many changes. So, if you head
over to localhost:3000/sign-in,
or you can navigate over to /sign-up as
well. And we can give it a shot by
creating a new account. I'll enter my
name.
Also, I'll do my email and a password
and then click sign up. As you can see,
we automatically got redirected back to
the homepage, which is always a good
sign. But just to verify whether
everything works right, we can go ahead
and inspect element, head over to the
console, and check out the console log
that we have put. This console log is
actually coming from the server, but you
can also view it in the browser. And you
can see all the info about the newly
generated user. How do I know that it
was generated for sure? Well, that's
because it has this new ID. What you can
do as well is head over to the network
tab, reload the page, and check out the
cookies that were generated for the
user. You can do that by clicking on
localhost, then head over to cookies.
And if I expand this just a bit more,
you can see all of the cookies that were
generated for you. One is coming from
Better Auth, and you can see the
expiration dates and more. Now, this
already gives us a lot of positive info
so we know that our user was created.
But if you want to, you can also head
over to your database and check out this
newly generated user right here and you
can see one account that is connected
with that user. Oh, and also a session
got generated for that user so that we
can track what they're doing and we know
when they're logged in or when should we
just expire their session. This is
absolutely crazy. This means that we
have a fully functional authentication
backend hooked up with a Postgres
database, Drizzle, and TypeScript, and
it all works together, and it was
generated by Devin. Sure, we needed to
go through it and make sure that it's
all properly hooked up, but about 99% of
the code was indeed generated by Devin.
And with that in mind, not only the auth
UI but also the auth backend logic has now
been fully set up. So the only thing
that remains is to
push our changes, to combine them in the
PR that Devin created, and to call
another feature finished. So, I'll open
up my terminal and run git add ., then
git commit -m
"finalize
backend authentication logic", and then
run a git push. Once again, make sure
that you're on the same branch that
Devin created for you.
Once you do that, you can head back over
to your open PR on the GitHub repo and
just check out how nice Devin makes it.
It says: auth: modular Drizzle schemas plus
Better Auth setup plus guest sessions and
server actions. It actually gives it a
proper name, unlike what we developers
do. And then you have a complete summary
of exactly what happened. Perfect stuff.
And we are getting ready to merge it. And finally, I'm always so happy to see no conflicts with the base branch, which means that we are ready to merge this PR into main. With just one click, I'll automatically do it. And I'll head back over to my git interface in WebStorm and check out the main branch, within which I'll run a git pull to pull all the latest changes that we have implemented. And
just like that, you've got a full auth backend in place. Devin handled it exceptionally well: tables, schemas, server actions, cookie setup, and everything. Of course, we didn't just sit back and hope for the best. We still checked all the important stuff like sessions, cookies, and authentication logic, and fixed a couple of issues here and there. And that's exactly where the human touch matters. The result is a working, modular, production-ready auth system without spending hours on repetitive setup. Devin did the heavy lifting and we made sure it's safe, correct, and fits perfectly within our codebase. This
is exactly how AI should be used. Fast,
helpful, and letting you focus on the
parts that you really need to use your
brain for.
Now that we have implemented authentication, the next natural step is to define your database: what tables you'll need, which fields go into each one, and how they all relate to each other. You might be thinking, okay, I'll
just make a product table and an order
table, maybe a cart, and that should do
it. Well, that's exactly how every
junior developer starts. But if you want
your system to scale, like handle
thousands of products, users, carts, and
orders, you've got to think like a real
database architect. And I don't just
mean reading a huge textbook on how to
design your system database. It just
means asking the right questions and
doing the right actions. Once you
understand how to make solid
architectural decisions, getting things
done becomes way easier, especially with
Devon by your side. So, let me show you
how I approached designing a database
for this whole e-commerce application
that resembles a real e-commerce website
structure and then show you how I built
it using Devon all within an hour. First
off, before writing any kind of a
schema, your question should be, what is
this system actually doing? Well, users
browse products, they add things to
their cart. Eventually, they check out.
You get the money, they get the shoes.
Sounds simple, right? But that implies a lot of things: a users table, products, because they're buying and selling stuff, a cart, which is a temporary zone before purchase, an orders table, and a payments table to be able to track transactions.
That's your base. And then you'll expand on each one of these to decide what else to implement. Let's start with
users. You might be tempted to approach
it in a way that only the logged in
users are allowed to add the items to
the cart. But in real world apps, you
want to remove as much friction as
possible and allow people to check out
as guests. So here's how to make it
happen. Have a users table that holds
registered people, but also generate a
session ID for guest users that's stored
in a cookie. Then have a cart table that
accepts either the user ID or the
session ID. So if a user is not logged
in, you assign to them a unique session
ID and create a cart for them in a
database. And when they log in later,
you can merge their cart into their
account by adding their user ID and
setting the session ID back to null.
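The merge-on-login flow just described can be sketched in plain TypeScript. The table shapes and function name below are assumptions for illustration, not the course's actual code:

```typescript
// Sketch of the guest-to-user cart merge described above.
// Row shapes and names are assumed, not taken from the real codebase.
interface CartItemRow {
  cartId: number;
  productVariantId: number;
  quantity: number;
}

// On login: move the guest cart's items into the user's cart, summing
// quantities when the same variant appears in both carts.
function mergeGuestCart(
  userCart: CartItemRow[],
  guestCart: CartItemRow[],
  userCartId: number,
): CartItemRow[] {
  const merged = new Map<number, CartItemRow>();
  for (const item of userCart) {
    merged.set(item.productVariantId, { ...item });
  }
  for (const item of guestCart) {
    const existing = merged.get(item.productVariantId);
    if (existing) {
      existing.quantity += item.quantity; // same variant in both carts
    } else {
      // reassign the guest item to the user's cart container
      merged.set(item.productVariantId, { ...item, cartId: userCartId });
    }
  }
  return [...merged.values()];
}
```

In a real app this logic would run inside a database transaction, with the guest cart deleted (and the cookie cleared) once the merge succeeds.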
Keeping this in mind, your cart schema would look something like this. It has an ID. It has a user ID, which is set to null for guests. And it also has a guest ID, which is the fallback for anonymous carts. Also, there are some timestamps.
The whole point here is assigning a cart
to either a user or the guest. Handling
products within a cart will be handled
on its own. But to finalize this for the
user, let's also have a table for
holding addresses. So as a user, we can
have multiple addresses stored to our
account. And then you would have all the
typical address fields. That's it about
the users. But now let's move on.
Talking about products, what do you
think would be the main element of a
product? Let's say you open up a typical
product on a Nike shoes store. It needs
a product name, a description, category,
maybe a gender, who the product is for,
a brand over here, it's going to be
Nike, but if you want to support
multiple brands, you need this field,
too. Colors, sizes, prices, weight, and
many other things. So, how would you
manage all of these? Well, it's simple.
Create a product table and put all of
these right into it, right? Well, sure,
if you want to break the whole system in
a couple of weeks or even days, it's
tempting to store all product related
fields in one place, but that structure
just doesn't scale. As you can see on
the Nike website, a single product can
have multiple sizes and colors. So if
you want to store that directly inside
of the products table, you'd be left
with messy structures like sizes: 7, 8, 9, 10, 11 and colors: red, black, white, and so on. These are arrays or comma-separated strings, and they are super hard to query, impossible to index properly,
difficult to filter in your catalog, and
prone to errors and inconsistencies. But
that's not it. Each variant, such as a
color or a size, can also have its own
pricing. So, let's say that a size 9 in
red is in stock and costs about $98.
Size 10 in white is out of stock and
size eight in black is on sale at $75.
How do you represent all of this in a
single row within your products table?
You can't. And that's why we need a product variants table to handle things like SKUs (stock keeping units), sizes, colors, prices, sale prices, quantities, and weights. Because each combination of a color and a size is a distinct variant that must be tracked independently. Okay, you're now
ready to create two separate tables. One will be products and the other one will be product variants. But what if I tell you that we're not done yet? It would be all good until users expect to filter by color, size, gender, price range, or availability. Imagine an SQL query where you say something like select everything from product_variants where colors like 'red' and sizes like '9'. That may work
initially until your data grows. Then it
turns into a slow and non-indexed
disaster. Good luck scaling that. That's the beauty of having relationships between tables. A
product can belong to multiple
collections, for example, new arrivals
or basketball, have many images, receive
dozens of reviews, and can be sold
across different regions or warehouses.
Trying to put all of this into a single
table means you'll repeat data, you'll
hit limits fast, and you'll cry when you
need to change something. And that's not
the worst part. Let's say you store each
color and size into a new row in a
product table. You now have 12 rows that
say Air Jordan 1 low with slightly
different attributes. Now you need to
update the product description and
you're updating it in 12 places. This is
called data duplication and it's the
root cause of inconsistency. Okay, I'm
done traumatizing you. So what's the
right approach?
Well, break your data into purposeful
normalized tables. Have a table like
products, which is the core table
representing the main product entity
like Nike Air Max 90. Store general info
like its name, description, associated
brand, category, and gender. And then
have another table called categories
which are used to classify products into
hierarchical groups. Footwear, sneakers,
or something like that. Then you can
have another table called brands to
store different brand names like Nike,
Adidas and so on which helps us filter
and organize products by brand. You can
also have tables for genders, colors,
sizes, product images and product
variants because all of these play their
own role. This way, each table has a single responsibility, data stays clean, and you can scale to hundreds of products, variants, colors, and more. You can also query much faster, filter things out more smoothly, update more easily, and avoid duplication. In
short, if you want to build a robust
catalog system that behaves like Amazon,
Flipkart, or Nike, design it like a
system, not a spreadsheet. So, based on all of this info, this is how each of these tables would look. Feel free to pause the screen if you want to review it. We have the products, brands, genders, colors, sizes, categories, product images, and finally product variants. And of course, if your app
grows, now you have the structure in
place. So you know how you can scale it
further. For example, if you want to
implement reviews on each one of these
products, now it should be very simple.
You would just add a reviews table that
looks like this. And the same thing goes
for managing or creating collections
like bestsellers or summer essentials.
You just have a collections table and
then you can map this with a product so
a single product can be part of multiple
collections. Hopefully everything so far
makes sense. But now we have to dive
into the cart. Many beginner tutorials
and content creators oversimplify and
create a single cart table that looks
like this. It has the ID, the user ID,
the product variant ID, and the
quantity.
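To make the problem concrete, here is roughly what that single-table shape looks like as a type, with illustrative names and data:

```typescript
// The oversimplified cart shape: one row holds the cart AND the item,
// so each row can only ever point at a single product variant.
interface NaiveCart {
  id: number;
  userId: string;
  productVariantId: number;
  quantity: number;
}

// Three items for one user become three unrelated rows with a repeated
// userId, and nothing ties them together into one cart.
const rows: NaiveCart[] = [
  { id: 1, userId: "u1", productVariantId: 10, quantity: 1 },
  { id: 2, userId: "u1", productVariantId: 11, quantity: 2 },
  { id: 3, userId: "u1", productVariantId: 12, quantity: 1 },
];
```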
This seems fine at first, but very
quickly it starts breaking as soon as
real-world complexity kicks in. If you do the same, you can only store one product per cart, because each row now represents a cart with one product. That's not how carts work. If a user adds three different items, you now have three cart items with repeated user IDs, and there's no actual container tying them together. You've lost the concept of a cart. So, a proper
way would be to create two tables for
cart management. A cart table and a cart
items table. Doing so, you can keep the
cart as a container and then one cart
can have multiple items, each with its
own variant and quantity. And as you've
learned before, you can also attach a
cart to a logged in user or an anonymous
visitor because of the session ID stored
in a cookie. Or you can even merge
anonymous carts with user carts on
login. This will also enable features
like save for later or abandoned cart
reminders, cart analytics, and so much
more. So now you've architected users, products, and carts, and the next thing is orders. The orders table will follow the same principles we've discussed so far when creating the cart and product tables.
In this one, you'll have the orders
table which stores each order placed by
a user including the overall status,
total amount, and associated shipping
and billing addresses. Then there's the order items table, which contains the individual items in a given order, capturing the quantity and the price at the time of purchase for each product variant. There's also a payments table
to track the payment method and status
of an order, including transaction
details and a timestamp of payment
completion. You can also add additional
schemas or tables for coupons, maybe to
handle all sorts of different discounts
or maybe a wish list table so that users
can add the products they'd like to
purchase in the future. Okay, that was
quite a database setup, wasn't it? But
this really covers everything you'd need
to run any e-commerce store. And most
stores are more or less built the same
way. You could easily extend this later
to handle sales management or any other
features you'd want. But the important part isn't the features. It's understanding how to design a solid database. Once you get that, you can architect any system you want, no matter how complex it is. All right, enough
talking about architecture. Let's put
all of this together and start building.
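Before we build, it helps to see why the normalized layout pays off at query time. A common pattern is to fetch products joined with their variants in one query and then regroup the flat rows in application code, instead of firing one query per product. A minimal sketch with assumed row shapes:

```typescript
// Flat rows as they come back from a products-joined-variants query.
// The shapes are assumptions for illustration.
interface JoinRow {
  productId: number;
  productName: string;
  variantId: number;
  color: string;
  size: string;
  price: number;
}

interface ProductWithVariants {
  id: number;
  name: string;
  variants: { id: number; color: string; size: string; price: number }[];
}

// Regroup the flat join result into nested product objects,
// so one query serves the whole listing page.
function groupByProduct(rows: JoinRow[]): ProductWithVariants[] {
  const byId = new Map<number, ProductWithVariants>();
  for (const row of rows) {
    let product = byId.get(row.productId);
    if (!product) {
      product = { id: row.productId, name: row.productName, variants: [] };
      byId.set(row.productId, product);
    }
    product.variants.push({
      id: row.variantId,
      color: row.color,
      size: row.size,
      price: row.price,
    });
  }
  return [...byId.values()];
}
```

ORMs like Drizzle can do this regrouping for you with relational queries, but the principle is the same: one round trip, nested output.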
And now we are ready to get that database architecture implemented. So as before, head over to your new Devin session, give it access to your repo, and choose the agent preview if you want to. Then in the video kit down below, you can find a complete prompt that we can go through together. More or less, it's going to be the same thing that we discussed in the previous lesson. We'll tell Devin who it is. Then we'll tell it to design and implement robust, normalized database schemas using Drizzle ORM for a scalable e-commerce
application. It includes user accounts,
product catalog, filters, reviews,
orders, and supporting features all
aligned with the industry best practices
for long-term scalability and clean code
architecture. We tell it about the
structure of the application and the
additional schemas it can add. And then
under tasks, we get a bit more specific. So we tell it to define new schemas for the new entities: addresses, products, categories, product variants, and so on. Once again, you
don't have to be this descriptive. It
can do it on its own if you just list
these different things. But I wanted us
while going through the video to have
more or less the same output so that you
can better follow along with this
course. And then we tell it to use the best data modeling practices, such as accurate data types for UUIDs, text, and so on, define proper relationships, normalize the structure, and basically do everything else that I explained our database needs to follow in the previous lesson. The output, of course, is the schemas for each table, Zod-based validations, clean files, foreign keys, all ready for production. So,
let's submit it and see what it comes up
with. And here we are a couple minutes
later. There's a new PR. What I'll do in
this case is open up the progress so we
can go through it together. First things
first, you can see that progress can
either be a shell or it can be
information on what it did and what is
it thinking. So, you can use the arrows
to move between different steps of the
progress of creating this PR. not just
writing the code but also the decision-
making and thinking and research and
reading everything that we as developers
need to do. Devon is going through that
as well. So first things first it's
exploring the e-commerce repo to
understand the structure and it read a
couple of files. Then it continued doing
that and reading more files. Then it
started inspecting the current schema to
see what it needs to do. After that, it
created a modular schema for the e-commerce application, including filters, brands, categories, collections, and addresses. Basically,
everything we asked it to do. But then
it ran into an issue. I'm addressing the
relation typing errors caused by dynamic
imports in the schema files by replacing
them with direct imports of related
tables, ensuring proper references. Now, up to this point, ChatGPT would also be able to provide you with the finished code. But actually testing it out in real life, figuring out what the error is, and trying to fix that error without you ever seeing it, well, that's another level. So, it went ahead and fixed the
import statements and the error that had
happened. And then it continued to
refine the scripts that we have. It
decided to make it modular and it even
created a seeding function that we can
use to seed the entire database. After
that, it opened several key schema files
to verify their contents and then it's
going to proceed to install the
dependencies that we need to run this
new application. It went through even
more debugging and error fixing and we
didn't see any of that which is amazing.
So you can see, for example, here it corrected the foreign key definition in categories.ts by chaining the onDelete set null method instead of passing it as an object property. See, before it was passing it, but then it chained it. This error likely happened because of a change in a library. So maybe it first got info about the previous version of the library and then corrected itself based on the error that it received. Then it's
using its shell to commit the files,
doing a couple more changes, and finally
near the end, it verified everything is
good, reviewed it, and opened up a PR.
So to sum up, it implemented the full modular Drizzle schemas, Zod validation, and a comprehensive seed script, and also opened up a PR. So let's pull that PR open so we can check it out soon. But
what I first want to do is actually open
it up within my IDE. So just run git pull to pull all the latest changes, and then check out this new drizzle schemas branch. Once you're there, you'll be able to see all the new changes. And don't forget to run npm install, because some new packages got installed as well. WebStorm is doing that
automatically for me. Okay, so what do
we have here? Well, if you head over to
DB schema, you'll notice that now there are so many more schemas: schemas for collections of products, categories, carts, brands, and so much more. This
would take hours to generate on our own
and most likely we would have failed
somewhere. If you want, feel free to
take some time to browse through all of
these newly created files and pay
attention to how it approached creating
specific fields. Alongside the schemas,
there's also one more thing that was generated by Devin, and that is a seeding function. Seeding in databases typically means that we want to fill up the database with some randomized placeholder data, but we still want it to look good. It can be copies of what we believe will be the actual products. In this case, we will be seeding our database with shoes, of course. So, this is a fairly complicated seeding function
that a couple of years ago you would just have to write on your own. There's really no other way to do it; you cannot find a copy somewhere on the internet, because every product and every database is different depending on the shoes, the colors, and everything, and you couldn't use AI. But at the same time, this is so boring to write. If you want an explanation of this seed function, you're totally free to take some time to browse through it, or ask Junie or ChatGPT to explain it line by line. But really, in simple terms, what's happening here is that it's basically creating some generic sets, arrays of data, which it'll then fill the database with once it randomizes them. So it's
going to pick a random gender, for example men, then a random color, for example red, then it's going to pick a random size, then it's going to randomize the collection, then it's going to randomize the photo, and then it's going to put it all together into a new randomized product. This would take a lot of time to build, and it is super boring. So, if there's a top use case for tools like Devin, well, writing seed functions is definitely at the top of the list. Okay, there's one thing that I
want to modify here though, and that is
that we're working with shoes. And shoes are typically not sized in extra small or extra large. They have some
numbers. In Europe, Asia, or the US, the
numbers are different, but for now, you
can stick with the numbers that suit
you. You can go with something like 40,
41, 35. Or you can use the US numbers.
Let's say starting from 7 onwards. So I'll do 7, 8, 9, 10, 11, and 12. And you also need to modify the slug. So I'll do 7, 8, 9, 10, 11, 12. Perfect. So now we have some sizes.
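In simplified form, the randomization the seed script performs looks something like this. The data and names here are placeholders (the real seed file defines much more), and the RNG is injectable so the output can be reproduced:

```typescript
// Placeholder data pools; the real seed script defines far more.
const genders = ["men", "women", "unisex"];
const colors = ["red", "black", "white", "gray"];
const sizes = ["7", "8", "9", "10", "11", "12"];

// Pick one element using an injectable RNG (defaults to Math.random),
// which makes the randomization reproducible in tests.
function pick<T>(items: T[], rng: () => number): T {
  return items[Math.floor(rng() * items.length)];
}

// Assemble one randomized variant, roughly how the seed script does it.
function randomVariant(rng: () => number = Math.random) {
  return {
    gender: pick(genders, rng),
    color: pick(colors, rng),
    size: pick(sizes, rng),
    // whole-dollar price between $60 and $160
    price: 60 + Math.floor(rng() * 101),
  };
}
```

The real script then inserts each generated variant (plus a random image) into the database; this sketch only covers the randomization step.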
You'll also notice that it's going to
randomize the images. That's going to be
happening right
here. It's going to find a random image from 1 to 15, because we have about 14 or 15 images in our public folder. It's going
to pick a random extension and then
it'll hook it up together and do a
static upload. In case you want to upload this image to another image hosting platform, you're totally free to change just this part, and it's just going to work. But for now, what do you say we give it a shot? Whenever we change the seed functions, we have to generate new schema migrations. So, open up the terminal and run the npm run db generate
command. There we go. All of these new
tables have been created. 24 tables. I
mean, just imagine writing all those
tables by hand. And then we want to push
those schemas to the database, specifically to Neon DB in this case.
Okay, now that we've done that, we are
ready to run the seeding script. So if
you head over to the package.json, you'll notice that a new script, db seed, has been created, which basically just runs the seeding file.
So let's do just that by running npm run db seed. This one might take some time, and it'll quickly skip through all of the initial parts like seeding the filters, seeding the brands, categories, collections, and products with variants, but then you'll quickly start seeing a lot of seeding errors, and these have to do with finding the randomized image. So it says failed to copy image shoe 10.jpeg.
That's maybe because shoe 10.jpeg doesn't exist, or the same for shoe 2 with this extension, and so on. Don't worry, these are not errors. It just couldn't find that specific version of a randomized image. But it found an alternative. So, we're still good to go, because at the end it says seeded product Nike Air Max 15 with 20 variants and seeding complete. Perfect. Now,
either open up Drizzle Studio or head
back over to Neon DB and reload your
tables. If you head over into products,
you should be able to see different
variations of the Nike Air Maxes with a
description. With all of them, you can
experience the maximum comfort and
performance. They have their own category IDs, gender IDs, and brand IDs. All of this has been fully randomized by your seed script. And then they have
relations to specific product images and
variants. So, if you click on some of these relations, for example, to product variants, you'll be able to quickly open the product variants table and see the different variations of this specific shoe, such as the gray 7, gray 9, red 7, or red 12, with their own randomized prices. This already
feels like a real store. And you can
also see that they have some images. You
cannot open them online because they've
been added to a repository statically.
So we'll be able to load them within our
application.
Of course, a more scalable way would be
to host them on a platform like
Cloudinary, but for now, this is more
than enough. So with that in mind, we're
good to commit all the new changes.
Unfortunately, we cannot see any changes
live on our application just yet. But
we'll be able to see it in the next
lesson once we implement the product
listing page where we'll fetch all of
these generated shoes. So for now, let's
just go ahead and commit it. Once again, make sure that you are on the correct branch, the one that Devin created. And then go ahead and run git add . and git commit -m. This commit was all about, well, a small fix on the seeding function, so I'll say fix seeding function sizes, and then I'll do a git push. Back on
GitHub, we can see a new feature that we
have added to our database. And these are the modular Drizzle schemas, Zod validation, and seed scripts for our e-commerce app. 88 files changed. That's a lot. In simple words, we have added new schemas for products, variants, categories, brands, images, reviews, and more commerce-specific stuff like cart, cart items, order, order items, and more. Everything that we discussed a
couple of lessons ago when we talked
about overall database architecture of
an e-commerce application. Definitely a
huge PR that would take us a lot of time
to do on our own. So let's merge this
huge PR to main.
Great. With that in mind, the majority of the logic for the auth and database and everything e-commerce related has been implemented. And in the next
lesson, we'll finally be able to see it
on our page because we'll develop a
product listing page. So let's do that
next.
Okay, the exciting stuff is about to
come. Now that we have done a lot of
logic and technical implementation, we
can actually visualize it on the screen.
At least we'll be able to after this
next feature implementation and that is
the product listing page. So feel free
to take a screenshot of this design page
or I also provide a full design
screenshot in the video kit down below
and then head over into a new Devon
session and feed it this image that you
have alongside the prompt that you can
find in the video kit down below. The
objective this time is to build a product listing page for the Nike e-commerce website that supports filterable and sortable product listings. Product data must render server-side (this is very important) using URL query parameters, not state management, and the filter and sort UI must be built using client components only, which sync state to the URL without performing any data fetching
architect here and you're directing how
this should be built. The design as
before is attached. The structure of the
application is here as well. And then the tasks are to create the server-rendered product page and some client-side filters, which will be available on the sidebar. That's going to be the filters UI, the sort UI, and the query utilities, and then we can also make it responsive, make it navigate to other pages, and manage the state of the filters through the URL bar in a real Next.js way. Perfect.
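To see what "state lives in the URL" means in practice, here is a minimal sketch using the standard URLSearchParams API. The function name is illustrative, not what Devin generated:

```typescript
// Build the next URL when a filter value changes, keeping all other
// query params intact. Passing null (or "") removes the param.
function withParam(
  pathname: string,
  currentQuery: string,
  key: string,
  value: string | null,
): string {
  const params = new URLSearchParams(currentQuery);
  if (value === null || value === "") {
    params.delete(key);
  } else {
    params.set(key, value);
  }
  const qs = params.toString();
  return qs ? `${pathname}?${qs}` : pathname;
}
```

A client component would compute this URL on change and call router.push with it; the server page then re-renders from the new searchParams, so no client-side state store is needed.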
So let's go ahead and submit it and see
what it comes up with. And there we go.
I think this was the quickest one yet. In only 6 minutes, it created a complete server-side rendered products page with client-side rendered filters and sort, with URL syncing. This is exactly what I
wanted to do. Then it took some time to
check it out and verify it locally. So
at the end of the day, it looks
something like this. But of course, we
won't just look at the screenshots.
We'll actually check it out locally. So
let me open up this PR first. Then, as usual, open up your IDE and do a git pull. Then open up your git interface and check out this new branch. You can also run npm install to install the new packages that have been added to our repo. And then we can check out the
changes. We can take a look at the files changed tab right here in GitHub to see exactly which files got changed. It's of course the products page, and then a couple of extra components such as the filters, the navbar got modified a bit, and then the sort, as well as query.ts, which contains all of the querying utility functions. Okay, that's pretty
straightforward. So let's start from the
main deal and that is the products page.
It is server-side rendered. We can see that because it doesn't contain any use client directive at the top, and it is conveniently importing all three important components: card, filters, and sort. It has all the proper types, such as the search params and information about the product, and it even rendered a couple of mock products so far. These mock products still contain the M, L, and XL sizes. That's totally okay, because later on we'll replace this with the real data. So for
totally okay. If you head over to the
part where the actual page starts, you
can see that we are filtering and
sorting different products. And then
here's the JSX where we actually render
both the filters and the sorts as well
as the cards. Now, this doesn't look super straightforward. We have a lot of sorting and filtering logic and a bit of string modification right here when it comes to creating labels. So, I think Devin did the best job possible here. Maybe we could have extracted these somewhere else, but this is totally okay. Now, let's go ahead and check out these two client components: the filters, which use the router and the pathname to change the search params. That is
good. And we have all of the different
sizes, genders, colors, and so on. And
then if you check out the actual filter,
you can see that you can open it up. And then we're mapping over all the options, which are checkboxes that you can click to select a specific option, which then calls the onToggle, which simply pushes the URL to update the selected filter. This is looking good.
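The toggle behavior, adding or removing one value from a comma-separated param and pushing the new URL, boils down to something like this. A hedged sketch, not Devin's actual code (note that URLSearchParams percent-encodes the comma as %2C when serializing):

```typescript
// Toggle one value inside a comma-separated multi-value param
// (e.g. ?color=red,black). Returns the next query string.
function toggleParamValue(currentQuery: string, key: string, value: string): string {
  const params = new URLSearchParams(currentQuery);
  const values = (params.get(key) ?? "").split(",").filter(Boolean);
  const next = values.includes(value)
    ? values.filter((v) => v !== value) // already selected: remove it
    : [...values, value];               // not selected: add it
  if (next.length === 0) {
    params.delete(key); // no values left, drop the param entirely
  } else {
    params.set(key, next.join(","));
  }
  return params.toString();
}
```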
And the same thing goes for sorts.
Instead of filtering, we simply can
select different options and then apply
a sorting mechanism. Of course, all of these happen within utils. So if you head over to utils/query.ts, here you can see how we're stringifying the query to be able to use it, or how we're updating different params by first getting the current query, then getting the object entries and mapping over all the properties, and either deleting them or adding some new properties, as you can see right here. So, let's actually
check it on localhost 3000. And I think
it actually fixed the navbar. So, if you click men, women, or kids, it'll actually take you to this new product listings page, /products, with the filter already applied. That's cool. So, if I uncheck it, you can see
that all the shoes are here. Looks like
one has a broken image, which I don't
necessarily mind, because we're going to replace this with the real products that we seeded anyway. The most important thing right now is that the filters and sorts work. So if I filter by women, this works. If I do a small size, that works as well. Perfect. We can choose
black. Okay. Now, this is of course not
an issue on the filtering side, but just
the images related to these products.
Maybe they're marked black. You can see
six colors right here, but the one in
the image is not actually black. There
we go. So these ones are gray. And then
if we want to go by prices, looks like Nike doesn't sell anything super cheap. And then if you want these, you want to go over a hundred bucks. So here we have some more expensive ones. And then right here, over 150, these are
the most expensive. So filters are
working perfectly. And if we try to
filter it by price, high to low, that
works. Low to high works as well. And by
newest and featured, that works as well.
The only thing I can see is we're
missing a bit of padding right here at
the bottom and maybe this image. So,
let's go ahead and fix that by heading
over to the products page. We can take a
look at this mock data and just take a
look at the last image. I think that's the last one right here. So, we can set it to shoe 5, or we can pick one that is an actual image from our shoes. Maybe let's do shoe 3.webp. And let's add a bit of padding to all of these shoes. So, I think that is right here. Maybe I'll add a padding bottom of about six. There we
go. So, now all of the shoes are here
and it's looking great. This tab also displays the correct number. The filters are nicely positioned, sorting is also nicely positioned, and everything is actually hyperlinked. So, this is already starting to feel like a real application. Let me actually inspect it
and see how it looks on mobile. Okay. If
we navigate over to the products page
right now. Oh my god. I mean, this
actually works. And it's already
filtered. So, I can see the filters here
in a little pill. And if I clear it,
that actually works. And then the filters actually open up from a toggleable left sidebar where you can select specific things. It automatically collapses and you can just see the changes. This is absolutely crazy. So Devin really killed it with this PR. So let's go ahead and make our little commit by running git add ., then git commit -m "apply small fixes", and then git push.
This now allows us to come back to this
PR within the GitHub interface and we
can get ready to get it merged. Once
again I'm super happy with this PR. So
let's go ahead and happily merge it. And back within WebStorm, let's go ahead and pull it and navigate back to the main branch to have all the latest changes. And you can do a pull again now that you're on the main branch.
Wonderful. This was great. Now, in the
next lesson, we'll focus on implementing
product actions. These will allow us to
fetch the actual products so we can
display them on the products page.
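Those product actions will need to turn the raw searchParams into a typed filter object before touching the database. A hedged sketch of that parsing step, with assumed names and shapes:

```typescript
// Assumed shape of the normalized filters the server action consumes.
interface ProductFilters {
  genders: string[];
  colors: string[];
  sizes: string[];
  sort: "featured" | "newest" | "price_asc" | "price_desc";
  page: number;
}

// Next.js-style searchParams: values can be missing, single, or repeated.
type SearchParams = Record<string, string | string[] | undefined>;

function parseFilters(sp: SearchParams): ProductFilters {
  // Accept either repeated params or comma-separated values.
  const list = (v: string | string[] | undefined): string[] =>
    v === undefined ? [] : (Array.isArray(v) ? v : v.split(",")).filter(Boolean);

  const sorts = ["featured", "newest", "price_asc", "price_desc"] as const;
  const sort = sorts.includes(sp.sort as any)
    ? (sp.sort as ProductFilters["sort"])
    : "featured"; // unknown sort values fall back to the default

  const page = Math.max(1, Number(sp.page) || 1); // never below page 1
  return { genders: list(sp.gender), colors: list(sp.color), sizes: list(sp.size), sort, page };
}
```

The server action can then translate this object into Drizzle where clauses and an order-by, which keeps the URL the single source of truth end to end.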
Okay, the UI is there, but now we want
to focus on actually fetching real
products from the database and still
make sure that sorting and filtering
works. Keep in mind, I'm not talking
about just toggling on or off or sorting
a mock for product array. I'm talking
about querying a real performant
database. And to do that, we got to
create some product actions. So within
the video kit down below you can find
the complete prompt and let's go through
it together. The objective is to implement a high-performance back-end server action to fetch products with full filtering, search, sorting, and pagination support. We already implemented that work on the front end, but now it's time to power it up on the back end. For this, we will use Next.js server actions with Drizzle ORM and Postgres. We'll
place the main logic in products.ts
and the query parsing helpers are
already here. Query should support
multiple product variants, color
specific images and generic images as
well. And we will render all product
server side using the card component.
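To make the "optimized, no N+1" requirement concrete, here is a minimal sketch in plain TypeScript of the pagination math such a server action would typically normalize before building one Drizzle query. The names (`ProductFilters`, `normalizeFilters`) and the default/cap values are illustrative assumptions, not the course's actual code:

```typescript
// Hypothetical shape of the filters a getAllProducts-style server
// action might accept from the URL query string.
type ProductFilters = {
  search?: string;
  genderSlugs?: string[];
  sort?: "price_asc" | "price_desc" | "latest";
  page?: number;
  limit?: number;
};

function normalizeFilters(raw: ProductFilters) {
  // clamp page and limit to sane values (defaults are assumptions)
  const page = Math.max(1, raw.page ?? 1);
  const limit = Math.min(60, Math.max(1, raw.limit ?? 24));
  return {
    ...raw,
    page,
    limit,
    // one LIMIT/OFFSET query per request instead of N+1 lookups
    offset: (page - 1) * limit,
  };
}

// e.g. page 3 with the default 24 per page skips the first 48 rows
const q = normalizeFilters({ page: 3 });
```

The point is that everything needed for the page (count, offset, sort) is resolved up front, so the database is hit with a single well-shaped query.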
Now here's the key part: all queries must be optimized to minimize joins and avoid N+1 queries. You are now speaking as a database architect, and you need to tell it to do it in an optimized way. Now, getting to the tasks themselves,
the goal is to implement the get all
products server action, the get
individual product server action. And
then we want to update the products page
and the queries page to accommodate
those changes. Basically, we need to get
two server actions to fetch all products
and to fetch product details. So let's
go ahead and submit it. And there we go.
Our latest PR is right here. feature
products server actions plus query utils
plus SSR products page. It says that it
implemented the product server actions
query parsing and updated the products
page for serverside rendering. So let's
go ahead and check it out. The changes
are in product query and products page.
Heading back over here, I'll run a git pull and then I'll open up my git
interface and navigate over to product
actions by checking out that branch.
Then we can check out the code and the
code is right here within product
actions. Here it implemented the actions
to fetch all products as well as to
fetch an individual product. These are
the only two actions and then it made
the actual changes within the products
page. So that is right here where
instead of having a fake dummy data, it
actually is trying to fetch all products
right here and then get the products and
the total count. So let's see whether it
successfully does that. If I head over
to localhost:3000/products,
it looks like we get an error saying
failed query for selecting these
products. Now there are three choices
that we can make right here. The first
one is to copy this full error and then
paste it over to Devon and ask it to fix
it. But if you think about it, why
didn't Devon fix this error on its own
if it can do that, right? It runs across different errors and then fixes them so that we don't have to. Well, the reason
that it didn't fix this error is that it
happened on the backend side and it
happened specifically for our
environment variables, but Devon doesn't
have access to our env. So, it didn't
encounter this error before we did. It
couldn't have actually fixed it. So, instead of providing this specific error and asking it to fix it, we can say
something like there are issues
rendering the product listing page. We
could maybe give it just the start of
that error so it knows a bit of what
we're talking about. Failed query,
select products, and so on. You know,
the first choice would have been to just
pass the current error in its entirety
and ask it to fix it. But we're going to
go a step further. We're going to anticipate that there are going to be future errors, and we want Devon to be able to
encounter them and automatically fix
them. And it can only do that if it has
access to our environment variable. In
an ideal case, this would be a staging
environment variable. So if Devon has
it, nothing can go wrong and it cannot
mess up your production application. If
that is the case, you can go ahead and
give the variable to it. So you can say
here is my database URL,
use it. And if you encounter some more
bugs, fix them along the way. And the
third choice would be not to give it the
URL, but rather to try to fix all of
these issues on our own. In this case, I
want to see how Devon handles it. So
let's go ahead and speak with it. This
is actually one of the first times where Devon didn't one-shot the whole feature that we asked it to do. So, of
course, there's some natural back and
forth communication for it to implement
this feature properly. So, let's wait a
bit and see what it comes up with. In a
couple of seconds, Devon said that it'll
set the database URL to our Neon URL.
It'll reproduce the failing product
query locally and then push the fixes
shortly. I mean, a couple of years ago
or even a couple of months ago, it would
be almost impossible to imagine that
some kind of a tool would immediately do
this for us. It's doing back and forth
debugging, refactoring, bug fixing, and
it's finally pushing it into a PR for us
to review. Just unimaginable. And here's
the response. It says, "Quick update.
The products page is now fixed and
rendering correctly against our Neon
DB." Well, that's exactly what I wanted
to hear. Um, in this case, it says what
it changed. Okay, that's great. What I
care more about is the verification, how
it knows that it's actually working. It
says it started the dev server with our
database URL and navigated to the
products page. And the page indeed does
render without errors. Cards now show gender-based subtitles and primary images.
It all works. It pushed it. Checks are
green. That is great to hear. And it
even gave us a screenshot of how it looks. This is great. But just to
verify it, let's go ahead and pull the
changes locally and see how it feels.
Back within our application, I will run
git pull. We are already on the updated
branch. So it looks like it updated the
product.ts. And then if we move over to
localhost, the product page now loads.
Uh looks like we have a lot of great
products. Unfortunately for us, it looks like the seeding function hasn't seeded the images properly. You can see the
names actually are different, which is
great. The prices are also different.
They range within different amounts for
different variations of the products.
It's just the images that are the same.
We can definitely look into that in the
future, but for now, the sorting seems
to be working well. This is the highest-priced product. In total, we have six, 12,
and 15 products. So, if we try to filter
it by men, oh, it looks like we still
have 15. So, in this case, it looks like
the filtering isn't working, neither by
gender, size, color, or the price. So,
that's definitely something we have to
look into. So, getting back to Devon,
let's just tell it to fix it as it did
so far. So, I'll say the products page
now loads,
but the filters aren't working.
Fix the filters. And I'll press enter.
And just like that, filters are fixed. I
mean, once again, I'm completely blown
away by the fact that we tell it what's
wrong. It actually tests it, fixes it,
and calls it a day. It says the issue
was parsing bracketed array parameters
like gender[] and normalizing slugs to lowercase. So it even explained it and
then it checked it out and it actually
proved to us that it's working by
passing a screenshot. I mean, I cannot make this up. I've seen a lot of people use Devon on smaller use cases, especially some YouTubers using it on small applications, but this is the
first time in person that I'm seeing it
properly debug and work within a medium
to large complex e-commerce application.
And I can only praise it. But let's not
get ahead of ourselves. Let's actually
test it out locally. So, I'll pull all
the changes and then head back to our
application.
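The fix Devon described can be sketched in plain TypeScript. This is an illustrative reconstruction, not the actual PR code: accept both the plain and the bracketed form of an array query key, split comma-joined values, and normalize slugs to lowercase so `gender[]=Men` matches the lowercase slugs in the database:

```typescript
// Hedged sketch: parse an array-style query parameter from the URL,
// handling both "gender=men" and the bracketed "gender[]=Men" form.
function parseArrayParam(
  params: URLSearchParams,
  key: string
): string[] {
  const values = [
    ...params.getAll(key),
    ...params.getAll(`${key}[]`), // bracketed form, e.g. gender[]=Men
  ];
  return values
    .flatMap((v) => v.split(",")) // allow comma-joined values
    .map((v) => v.trim().toLowerCase()) // normalize slugs to lowercase
    .filter(Boolean);
}

const search = new URLSearchParams("gender[]=Men&gender[]=WOMEN");
const genders = parseArrayParam(search, "gender"); // ["men", "women"]
```

Without the lowercase normalization, a filter value like `Men` would never match a slug stored as `men`, which is exactly the kind of silent mismatch that makes "filters aren't working" bugs.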
The products are here. Let's try to
filter them. I'll first go for men.
Looks like there are five shoes for men.
There are five shoes for women and
there's five shoes that are unisex. This
could point to the fact that maybe it's
not working. Or it could be a
coincidence that actually 15 shoes were
separated by five across all three
genders. We can test it further by extra
small,
small, medium. This seems to be working
very well.
But of course, we'll know it better once
we actually can open up this product and
view the product details because then
we'll know whether this product, whether
the shoe is actually for men or not. If
we filter it by color, looks like that
works as well. And again, you cannot
really see it right now because the
photos are the same. The prices we are
seeing though. So, if we filter it maybe
here, you can see that now it's in the
hundreds range. But if we put it over
150, then it's only the shoes that are
over 150. So this is actually working
perfectly. The last thing that I would do here is to get it to fix our seeding function to make sure that each product has a unique image. So let's fix that seeding function. First, you want to
head over to your database console. Head
over under the SQL editor and then type this command: DROP SCHEMA public CASCADE; CREATE SCHEMA public;. This will simply drop everything in the database so we can clean it up and start from scratch. So I'll
run it and you can see drop cascades to
29 other objects. And if you look at the
table, it'll basically be completely
clean. Then you have to head back over
to the codebase and remove the drizzle
folder that contains the current
migrations. So just delete it. And then let's run a command from our package.json: that's going to be npm run db:generate. And then we also have to run npm run db:push.
If you do that all the changes are
applied. And now if you reload you'll
see that we have all of these tables
right here but they're completely empty.
So the last thing in this session that we can ask Devon is to fix up the seeding function so that each product has its own image. It says right here
that exceeding five ACUs can reduce
Devon performance. And right now for
this session we are about six ACUs.
These are the units that measure how
much processing power we spent on
generating this code. So right now we're
at the upper limit but it's okay as this
is the last thing that we wanted to do
within this specific feature. I'll say
something like modify the seeding
function and ensure that it seeds the
database with 15 unique products and
each of them has to have a unique image coming from the /public/shoes folder. Right now all the images are the same. Just fix that part.
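For intuition, the change being requested can be sketched as a deterministic index-to-image mapping inside the seed function. The file-naming scheme (`shoe-1.webp` through `shoe-15.webp`) is an assumption for illustration:

```typescript
// Illustrative seeding tweak: assign each of the 15 products a
// distinct image from /public/shoes instead of a random shared one.
const TOTAL_PRODUCTS = 15;

function imageForProduct(index: number): string {
  // cycle through shoe-1.webp … shoe-15.webp deterministically
  const n = (index % TOTAL_PRODUCTS) + 1;
  return `/shoes/shoe-${n}.webp`;
}

const images = Array.from({ length: TOTAL_PRODUCTS }, (_, i) =>
  imageForProduct(i)
);
// with 15 products and 15 files, every product gets a unique path
const unique = new Set(images).size === TOTAL_PRODUCTS;
```

A deterministic mapping also makes reseeding reproducible, unlike the previous random per-variant image assignment.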
Okay, let's see how that goes. Looks
like there's a little follow-up here. It
said seed updated for unique product
images just to verify 15 unique
products. Which option do you prefer to
truncate or to modify seed to skip the
existing products? In this case, either
option is fine, but I'll just select
one. Okay, the response just came in and Devon said all set. The seeding is fixed and verified. It updated seed.ts to ensure that each of the 15 products has a unique image from /public/shoes and removed the per-variant random image creation. It verified it, updated the PR,
and that's it. It even gave us a
screenshot and this is looking amazing.
So back within our IDE, just go ahead and pull the latest changes by running git pull. And before we can check it out
on local, we have to once again delete
everything from our database. That's
because when Devon ran the seed
function, it was referring to the
previously seeded images. So, we've got to delete them, and then the seed function will create newly seeded images within our local environment. So, to do
that, first head over to Neon console
under the SQL editor and then run the DROP SCHEMA public CASCADE; CREATE SCHEMA public; command. Once you do that, your
database will be deleted. Don't forget
to also delete the drizzle folder and
also delete the static folder that was created for us in the latest PR. Once
you do that, we are ready to regenerate
the database. You already know the
drill. First, go ahead and run npm run db:generate. Then after that, run npm run db:push. And finally, now that we have our new products in, go ahead and run npm run db:seed. This will now seed it locally with fixed images. As you can see: seeded product Nike Air 1, 2, 3, 4, with many different variants and with proper images. So, if you head back over
to your localhost:3000/products,
just check this out. We get all of the
different shoe images. And this now
really feels like a production-ready e-commerce website. Also, this is a
little detail, but notice how the left
side filters tab actually floats as the
right side content moves and then it
comes to the bottom. Like even these
little interactions are done properly.
This is amazing. And the filtering works
as well. If you try to filter by
specific colors, you'll see that we get
all of the same image. That is not a bug. It's just that our seeding function seeded all the different product variants with the same image. But
these are indeed supposed to be
different colors, and you can see the prices work as well. So this is so
amazing to see. With that in mind, let's
go ahead and commit and push it. I'll run git add ., then git commit -m "finalize seed function", and then push it.
Then heading back over to our GitHub
interface and you can see that if you
want to test it, you might need to reset
and reseed the database using npm run seed. Navigate through products with
various filter combinations. Test direct
URLs with different parameters to see
whether it applies the filtering through
params and then verify performance with
the browser network tab. We've already
tested most of these things and
everything is working properly. And
finally, we are ready to merge it.
Once you do that, back within our code,
make sure to check out over to the main
branch by running git checkout main or
just using the interface and then just
run git pull to pull the latest changes.
So now back on localhost, we have all of
the latest changes, but this time on
main. Great work coming this far into
the course. I think this is the point
where it all really started coming
together.
In this lesson, let's dive right into
creating our product details page. So,
now that we can open up all the products
individually from the product listings,
let's actually display their details,
such as this neat gallery to show all of
the product images, the title,
description, collection, price, and even
different variations, colors, and sizes.
We can also display the product details,
and finally, the you might also like
section. And again, this is about a day
of work, especially if you want to hook
up the logic. And hopefully we'll get it
done in a couple of minutes. So, as
usual, just to speed up the process a
bit, I provided the complete prompt in
the video kit down below alongside the
design image that you have to provide
alongside the prompt so it knows what to
generate. The goal here is to build a
product details page for the Nike
e-commerce app that renders a rich
product gallery, color and size
selectors, and product metadata. The
page must be pixel perfect to the
attached desktop design and strictly
responsive across mobile and tablet
devices. It should open when a user
clicks on a product card and feel uniform
with the rest of the site. Now, here are
a couple of strict requirements.
The entire page must be server rendered.
Only the interactions and dynamic UI
parts that require client side state
like the gallery, swatches, and size
picker must be placed in isolated client
components under the components folder.
It also needs to follow the provided
design screenshots exactly. No deviations: do the gallery, collapsible sections, and product information UI as is, strictly follow the same layout, and the code must be super clean. We give it more
info about the structure and then
specify the exact tasks. So the goal is, in simple words, to implement the product details page, which is going to be based on the dynamic ID param that we'll use to load first the static mock data, but later on real product data. And then this page needs to be composed out of these smaller components, such as the product gallery, the color and variant pickers, size pickers, product metadata, the "You might also like" section, responsiveness, accessibility, and more. So, make sure
that you provide access to this repo, and you can select an agent preview. Make
sure that you also provided this image
right here. And then let's submit it and
see what it comes up with. Now, very
quickly, we got the server render
product details page PR and then it said
it's implemented and ready for review.
And review it we will, because from the screenshot, it doesn't look so good. At
least the gallery doesn't. The right
side is spot-on, but the gallery is
broken. And if this right here is the
color picker, then it's not where it's
supposed to be. So, let's prompt it to
fix it. Make sure the gallery looks
exactly like it does on the provided
design.
What you can do is once again provide a
screenshot
of this entire section and forward it to
it, and add that the color picker should be above the size selector. I try to be as brief
as possible in my request. So, let's see
how it goes. And the update is complete.
I like how Devon immediately gives us a
screenshot of whatever it develops so we
can see whether it's looking good or
whether we have to do a little follow-up
to get it exactly looking how we want.
And now the color picker is right above
the size selector matching the design
and the gallery layout is aligned on the
left. Sure, we can still modify the
height of this main image, but other
than that, this is looking good to me.
So, let's go ahead and test it out on
localhost. Back within my WebStorm, I'll run git pull to pull the latest changes
and I will check out to this new branch
that is the product details page.
Once you check out to it, you can run
npm install to install all of the recent
packages. It most likely needed
something for the gallery. And now if
you head over to app root products ID,
you'll notice that now there's a new
product ID page rendering one mock
product. For now, we'll collapse it. And
you also see some recommendations for
other products below that specific shoe.
Then we are fetching all the data using
the params and then fetching the product
data and displaying it right here.
Finally, let's see where that product
image is. We have the header and below
the header on the left, there should be
a product image. There we go. Product
gallery is a new component of its own.
So if you head over to it, you'll notice
that it is a use client component
exactly as we requested. So now we just
need to fix the height of the main
image. I believe that is this one right
here. To fix this broken layout, I think we can remove this aspect-square class on this div and instead, after the full width, give it a defined height of maybe 500 pixels. If you do that and come back,
you can see that now the image fits much
nicer. And I can even zoom it in a bit.
And you can see that you can switch
between different images. You also can
switch between the images here. And keep
in mind, this is so crazy that right now
you're just switching between the images
of a specific color. So in this case,
most likely this will be a black shoe.
And right now, this is of course a
different shoe, but in the real world
application, you would have this shoe at
all the different angles from the top,
from the bottom, the sole, and
everything. Same thing for this shoe
right here. So you can see different
angles of a different color of a shoe
and you can explore all the different
colors for a specific shoe product. I
mean just keep that in mind. It's
absolutely crazy. Okay, now that this is
looking good, I notice that we have two
sets of chevrons. These are the icons
that allow you to move left and right.
We want to remove the ones from the
bottom because they're nonfunctional. So
just find this chevron left and right.
We want to find the one that is at the
bottom. We want to comment out these
ones just to make sure that they are the
ones. So that is these two divs right
here. Chevron left and chevron right. So
let me remove it. There we go. Now we
only have the left and right. And we can
switch between all of the colors and see
more images within a specific color
gallery. You can also select a size. You
can see that all the sizes are here and
very soon we'll be able to add it to the
bag or favorite it. Now, if you scroll
down, you can see the full product
details of that product as well as the
shipping and returns and even the
reviews. And it all works nicely. And at
the bottom, you even have the "You might also like" shoe recommendations. This is
absolutely out of this world. Keep in
mind that it also implemented this
little breadcrumb navigation so that you
can immediately navigate back to the
homepage or to the products page. This
is crazy. And if you head over to a
specific section like women, you can now
actually navigate over to a specific
shoe product details page. I am amazed.
So with that in mind, there's nothing
left for us to do here but just go ahead
and get it merged. I'll once again do a
very quick git add . and git commit -m "small fixes to the layout", because that's really the only thing we've done, and then run a git push. I'll
then go ahead and open up this PR and we
can take a look at what we did under
files changed. You can see that we
implemented the product details page. We
have the ID pointing to that product
from the products page. We also have the
collapsible section for the reviews, the
details, and more. Color swatches, the
index to export these components, the
product gallery, the size picker, the variants of different products, and updates to the package.json where we just updated the version of Lucide React.
And when I said we did these things, I
really meant Devon did them. And once
again, that's just out of this world.
And with that in mind, we are ready to
get this PR merged. So let me do just
that. And with that, we are ready to
implement the logical part of the data
fetching of a specific product in our
next lesson where we will remove the
mock data and fetch real product data
for each one of these products from the
product listing page. So let's do that
next.
Okay, let's hook up our product page
with real data. So, the goal now, as per
the prompt that you can find in the
video kit down below, is to build a
fully integrated product details page in
a Nike e-commerce app using real backend
data from existing schemas. This product
details page must fetch correct products
when a user clicks on a product card.
Show reviews and recommended products
via server actions wrapped in Suspense so that it is non-blocking. This is a very cool feature in Next.js. So I'm very
excited to see how Devon handles it.
Then it needs to handle product variants
and image relations correctly with no
shortcuts or buggy logic. and it needs
to gracefully render a not found block
if a specific product doesn't exist. In
this case, the UI is already done. So,
we don't need to mention this. When it
comes to the structure, the Next.js app router with server components, we already
have it. For backend, you already know
the drill. And now the goal is to focus
on the tasks. Basically, it needs to
refine the get product server action so
we can fetch all of the different
information and return it so we can then
display it on the front end. Implement
the get product reviews server action,
the get recommended product server
action. And then finally, it needs to
hook it all up within the page. DSX,
that's our product details page.
Everything else is exactly as it was in
the previous one. We just need to make
sure that it still keeps the rest of the
UI but now integrates it with real data.
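The non-blocking idea behind wrapping reviews in Suspense can be sketched in plain TypeScript, without React. The fetch functions below are stand-ins, not the course's actual actions: the trick is starting the slow fetch immediately but only awaiting the data the first paint needs:

```typescript
// Stand-in fetches; in the real app these would be server actions
// hitting the database.
async function getProduct(id: string) {
  return { id, name: "Mock Shoe" };
}
async function getProductReviews(id: string) {
  return [{ rating: 5 }];
}

async function renderProductPage(id: string) {
  // kick off the secondary fetch immediately, but don't await it yet
  const reviewsPromise = getProductReviews(id);
  // await only what's needed for the critical product content
  const product = await getProduct(id);
  // with Suspense, React streams this part in later; here we await it
  const reviews = await reviewsPromise;
  return { product, reviewCount: reviews.length };
}
```

In the actual page, the awaited promise would live inside a child component wrapped in `<Suspense fallback={…}>`, so the product renders while reviews and recommendations stream in.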
Don't forget to add e-commerce as your
repo of choice. And you can choose the
agent preview and let's continue. Okay,
Devon started building a comprehensive
product details page as requested, but I
think this is the first time that I'm
seeing its confidence rated as low.
Okay, that's good. But what I actually
like about it is that it didn't proceed
with that low confidence. Instead, it
created a plan and it's asking us four
questions. So, some of the questions
include things like reviews approval.
The schema lacks an approval flag.
Should we treat all reviews as approved
for now, the recommendation priority,
should we prefer category, brand, gender
with case-based ordering? And then PDP
pricing source, should we use the
product default variant or minimum price
variant? So, these are some questions
that we have to give replies to or just
click confirm. I like how in the
questions themselves, it doesn't
necessarily require us to respond with
words. It just asked yes or no questions
and then if we want to say no, then we
can provide additional information. But
these questions are so detailed and intricate that either way is totally fine with me. So, the default option
that is suggested for all three options
is great. So, in this case, we can just
go ahead and click confirm. Hopefully,
this will get its confidence higher so
it can actually develop the feature. And
just like that, we got our next PR back.
It's the product details page with the
backend integrated. The PR is ready for
us to review. This time, we're not
getting back any screenshots because
this requires local testing. So, we can
either provide a database URL as we did
before, or we can just go ahead and test
it locally, which is exactly what I'll
do. I'll just run git pull to pull the
latest changes. Then I'll check out to
the new PDP backend integration branch.
Install some new packages by running npm install if there were any, but I think WebStorm didn't notify me, so I don't think there are. And we are ready to
test it out. Back on our homepage, we
can see some shoes. We can navigate over
to the product listing page to see the
new real shoes that we have seeded our
database with. Then if you now click on
one of these shoes, you should be able
to see it. And yep, you can see that the
correct shoe with the same image and
title actually opens up. Let's go with
this crazy Spider-Man-like shoe.
Yep, it opens up. It's a Nike Air Max
10. It's a men's shoe, and you can see
that it only has one image. Now, in this
case, because we implemented a crazy
seeding function, it looks like it
created so many different color
variants, but unfortunately, we didn't
change the actual images for each one of
these color variants. So, we're just
left with a couple of the same shoe.
But I think you can already imagine how
this would work in a real-world
application where each one of these
would be a different color variation of
the shoe. And then if you had multiple
images from different angles for each
one of these color variants, you'd be
able to check out exactly how it looks. You can also select different
sizes. The product details and
everything is now coming directly from
our API. And then we also have the "You might also like" section where we have other shoes coming from our database.
And now you can actually traverse
through the application by selecting
different shoes. So let's check it out
and see how it actually works within our
codebase. If you head over to source app
root and then the product ID details
page, you'll see that it's using all the
proper Next.js logic such as getting the
params through the props and then
getting the ID from it and then in just
a single line it is fetching all the
product details by passing over that ID.
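The page-level wiring described here can be sketched in plain TypeScript. This is a simplified illustration (it returns data instead of JSX, and `getProduct` is stubbed); in recent Next.js versions the `params` prop arrives as a Promise that must be awaited:

```typescript
// Hedged sketch of a dynamic [id] route's page component signature.
type PageProps = { params: Promise<{ id: string }> };

// Stand-in for the real server action that does the joined query.
async function getProduct(id: string) {
  return { id, name: "Nike Air Max" };
}

async function ProductDetailsPage({ params }: PageProps) {
  const { id } = await params; // unwrap the dynamic route param
  const product = await getProduct(id); // one call fetches it all
  return product; // the real page would render JSX from this
}

const page = ProductDetailsPage({
  params: Promise.resolve({ id: "42" }),
});
```

Because the page is an async server component, the fetch happens on the server before any HTML is sent, which is what keeps the details page fully server-rendered.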
So let's take a look at exactly how this
get product details server action works.
You can see that it's approached it in a
way that it's calling the database and
it's selecting specific fields that it
wants to retrieve from the database such
as basically everything that it needs to
display. It's getting it from the
products table and then it's using the
left join to join the brands,
categories, genders, and all the other tables together. So we can finally get
all the info for that specific product
ID and then it just returns it over
right here in a format that is very
JavaScript friendly that we can easily
use as a JavaScript object on the front
end to display all these pieces of
information. To write all of this by hand would take days. The code is not simple, yet it is made simple to understand, and it is properly typed, so we can always know what to expect. So
the last thing remaining for us to do is
to just merge this PR. A PR where we
replace the mock data in the product
details page with real backend
integration and it is all fully
functional. So with that in mind, let's
go ahead and get it merged.
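The reshaping step described above, turning flat left-join rows into a JavaScript-friendly object, can be sketched like this. The row shape and field names are made up for illustration; the real action selects many more fields:

```typescript
// A left join returns flat rows where joined columns may be null.
type Row = {
  productId: string;
  productName: string;
  brandName: string | null; // left join: brand row may be missing
  categoryName: string | null;
};

function toProductDetails(rows: Row[]) {
  if (rows.length === 0) return null; // drives the not-found block
  const first = rows[0];
  return {
    id: first.productId,
    name: first.productName,
    brand: first.brandName ?? "Unknown",
    category: first.categoryName ?? "Uncategorized",
  };
}

const details = toProductDetails([
  {
    productId: "1",
    productName: "Air Max",
    brandName: "Nike",
    categoryName: null,
  },
]);
```

Returning `null` when no rows match is what lets the page gracefully render a not-found block instead of crashing on a missing product.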
And then if you head back over to your
application, you can run git pull.
And then finally check out to the main
branch. Oh, run a git pull there just to
be safe. And now if you head back over
to the localhost 3000, you can see the
full application. Let's just quickly
manually navigate over to the sign-up page just to remember how it looks.
There we go. It is perfect. There's also
the sign in. So now we can once again
sign up. Since we wiped the database a
couple of times, you have to create a
new account. So I'll do just that and
sign up. Oh, and now that I'm looking at
the homepage, it seems that it's still
rendering the fake mock products, unlike
the product listing page that fetches
the real ones from the database. So,
just to make sure that we don't lose our
coding muscle, it's always good from
time to time to also try doing something
on your own. So, back within our
application, we can do this on main.
That's totally okay. You can head over
to the homepage and right here at the
top, you can delete the mock products
that are there.
And instead of that, we can copy what we
have in the products page where we're
fetching all of the products. So right here, you can copy the line where const products and totalCount come from awaiting getAllProducts. And you can just paste
it right here. Make sure to import the
get all products from the products
action.
So right here, I'll say products coming from lib/actions/product. And it's accepting something known as parsed right into it. And this parsed is actually the filters. In this case, I
don't think we have to pass any filters,
but we can pass just the limit of like
six products that we can display on the
homepage. So, let me see where the limit
is. There we go. Limit is just
filters.limit.
So, what you can do is just head back
over to the homepage and pass an object
of filters where the limit will be set
to something like six or 12. You can do
whatever you want. But alongside the
limit, it looks like the current get all
products is implemented only for the
product listing page where there must be
some other things like the gender slugs, size slugs, color slugs, and four more.
So, what we can do instead is head over
into the get all products. Head over
into the NormalizedProductFilters type
and then just make all of these
optional. So, I'll just select the
ending of all of these things and I'll
just make every single one optional.
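The manual tweak being described can be sketched like this. The type and function names are illustrative stand-ins for the generated code: every filter field becomes optional so `getAllProducts` can also serve the homepage, and numeric comparisons get a fallback:

```typescript
// All fields optional, so callers can pass only what they need.
type NormalizedProductFilters = {
  genderSlugs?: string[];
  sizeSlugs?: string[];
  colorSlugs?: string[];
  limit?: number;
};

function resolveLimit(filters?: NormalizedProductFilters): number {
  // "either this or zero if it doesn't exist":
  // optional chaining plus nullish coalescing
  return filters?.limit ?? 0;
}

const homepageLimit = resolveLimit({ limit: 6 }); // homepage: 6 items
const missing = resolveLimit(undefined); // no filters at all: 0
```

The `?.` guards against an undefined filters object, and `?? 0` keeps comparisons against numbers type-safe, which is exactly why TypeScript was complaining before the fix.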
And then if you head back over to get
all products, you can also just add
question mark on these filters to make
sure that it doesn't break in case it
doesn't encounter a specific filter. So,
I'll just add them right here
and here. See, even if you're developing
with AI, you still need to be able to
know how to traverse your codebase and
understand what it is doing. In this
case, it looks like it's doing a
comparison of something that could possibly be undefined with zero. So that's
why it's complaining about it. So I'll
say either this or zero if it doesn't
exist. Perfect. Now, if you head back
over here, it doesn't complain anymore.
We're just getting all products. And the
only thing we care about is just getting
six products. Doesn't matter which they
are. We can remove the user request and
the total count. And now we have to
change the way that we're mapping
through the product. Let's see how we're
doing it in the products page.
We call products.map, then we figure
out how to render the price, and then
render the product card. What I'll do in
this case is I'll just copy this
specific products map and then I'll use
it for the homepage as well because it's
updated and it contains the new way of
how we're rendering the products. So
I'll just replace the one on the
homepage right here. Oh, and it looks
like I missed adding question marks in a
couple of places within the product.ts
actions. So if I head back, I think that
part is right here. There we go. The
price ranges.
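The products.map pattern copied over from the products page, question marks on the price ranges included, boils down to deriving a display price from a possibly-missing range. Field and function names here are hypothetical:

```typescript
// Hypothetical row shape; priceRange may be absent, hence the optional chaining.
type PriceRange = { min: number; max: number };
type ProductRow = {
  id: number;
  name: string;
  priceRange?: PriceRange;
};

// Mirrors the card-rendering map: pick a display price,
// falling back to 0 when no range exists.
function toCards(products: ProductRow[]): { id: number; label: string }[] {
  return products.map((p) => {
    const min = p.priceRange?.min ?? 0;
    return { id: p.id, label: `${p.name} from $${min}` };
  });
}
```

In the real homepage you would render a ProductCard per entry instead of building label strings, but the optional chaining and fallback work the same way.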
And now if you get back, take a look at
that. Even our homepage has six latest
shoes displayed right here. And you can
click on them to be able to visit their
product details pages. Not only that, we
can now traverse to the product listing
page by clicking one of the categories
at the top and immediately be able to
filter and sort through all of these
different shoes. Everything is also
hyperlinked. So, if you click on the logo
at the top or some other links, you'll
be redirected to different parts of the
screen. The footer is looking great as
well. And don't forget that there is now
a product details page which is fully
functional. Even though it looks like
the same images repeated, these are
actually different variations of the
products. This is actually looking
amazing. It is looking very close to a
complete fully functional Nike
e-commerce website. And developing all
of this might have looked smooth on your
screen while you were watching me do it.
But let's be honest, it wasn't totally
effortless. You still had to write
quality prompts, tweak them if needed,
follow up with Devon to fix stuff, and
finally merge everything in. And just
reading and understanding the entire
codebase like this is already a big
skill on its own. Because if you don't
really know what's being generated,
that's not development. That's what many
people call vibe coding. What you did
today, that was actual AI-enhanced
development. So what's next? Well, dig
into this codebase. Don't just trust it,
understand it fully. And if you're
stuck, ask Devon. And if you're still
confused, ping me on Discord or email or
just join jsmastery.com and I'll be glad
to help you out. This is the new era of
development and the better you get at
writing it, the further you'll go. Now,
the next big piece will be the cart
management, where you'll manage the
guest sessions we've already implemented
the logic for, plus the full Stripe
integration. Since we already covered a
lot of stuff here on YouTube, I'll be
covering these additional sessions as
part of the jsmastery.com pro circle.
And look, you might be thinking, hold
on, without cart management or checkout,
is this really e-commerce? Totally fair.
And that's exactly why the next step is
just one or two more Devon prompts away.
Devon has an integration with Stripe
MCP, which makes it ridiculously easy to
plug in a real checkout flow. So yes,
what you built is the backbone of a real
store. You're just a couple of steps
away from being production ready. So try
this on your own. See if you actually
learn how to prompt properly and give it
a shot. And if you want to go way deeper
into this AI powered workflow from
Stripe MCPs to advanced integrations,
we're now working on a full AI
development course. The waitlist is
open, so if you're interested, you can
find the link somewhere in the video kit
down below. But just think about what
you pulled off today. In a few hours,
you built a full stack e-commerce
application from scratch. Devon handled
the repetitive work and you stayed in
control making the real calls. So, will
AI replace developers? Well, you let me
know. You just watched me build this
application with the help of AI. So, I'm
curious. What do you think? Are you
excited about this AI powered workflow?
And have you tried some AI in your own
workflows? Drop your thoughts in the
comments down below, and I'll be glad to
read them all out and respond. In my
opinion, the future of coding is
collaborative, not just with teammates,
but also with AI. And you've just seen
how powerful and honestly fun it can be.
If you like this one, check out
jsmastery.com
because there we're diving even deeper.
A lot of great stuff is coming up. Some
of it is AI-powered and some of it fully
foundational, so you'll learn everything
in depth. One of those courses is the
backend course soon coming up on
jsmastery.com.
So definitely stay up to date regarding
that. Once again, thank you for sticking
around. I really appreciate your time
and yeah, keep coding smarter. I'll see
you in the next one. Have a wonderful day.
Can Devin AI help you build a Nike-style eCommerce MVP? In this video, we'll master prompting while building sleek product pages powered by Next.js 15, TypeScript, TailwindCSS, and Better Auth. The backend runs on Neon PostgreSQL with Drizzle ORM, with Zustand handling state management, all wrapped in a clean, modular architecture for faster development.

Join JS Mastery Pro: https://jsm.dev/nikecom-jsmpro
Become a Top 1% Next.js Developer: https://jsm.dev/nikecom-nextjs
Launch Your SaaS Pro Course: https://jsm.dev/nikecom-saas
JavaScript Pro Course: https://jsm.dev/nikecom-cpjsm
GSAP Pro Course (includes GTAVI Website): https://jsm.dev/nikecom-gsap
Three.js 3D Pro Course: https://jsm.dev/nikecom-threejs
FREE Video Kit (Repository, Design, Guide): https://jsm.dev/nikecom-kit
AI Development Pro Course Waitlist: https://jsm.dev/nikecom-aipro
Backend Pro Course Waitlist: https://jsm.dev/nikecom-backpro
Tailwind Pro Course Waitlist: https://jsm.dev/nikecom-twpro
React.js Pro Course Waitlist: https://jsm.dev/nikecom-reactpro
React Native Pro Course Waitlist: https://jsm.dev/nikecom-rnpro

Devin AI: https://jsm.dev/nikecom-devin
Junie AI: https://jsm.dev/nikecom-junie
WebStorm: https://jsm.dev/nikecom-webstorm

Rate us on TrustPilot: https://jsm.dev/trustpilot

https://discord.com/invite/n6EdbFJ
https://twitter.com/jsmasterypro
https://instagram.com/javascriptmastery
https://linkedin.com/company/javascriptmastery

Business Inquiries: contact@jsmastery.pro

Time Stamps:
00:00:00 – Introduction
00:03:58 – Planning
00:14:56 – Master Prompting
00:20:07 – Project Setup
00:39:31 – Design & Theme
00:47:50 – Landing Page
01:07:34 – Auth UI
01:13:04 – Auth Backend
01:47:19 – Database Architecture
01:59:15 – Database Schemas
02:12:02 – Product Listing Page
02:20:40 – Product Actions
02:36:26 – Product Detail Page
02:45:20 – Product Detail Integration