I still remember the first preview of
server components. I was super
skeptical. You couldn't even call await in them. They had their own magic
syntax for everything. You would have to
name the files differently to indicate
which components were server and which
ones were client. I did not think I'd
ever be a fan, especially when they made
the changes I asked for and I saw the
new magic use server and use client
directives that made this behavior all
possible. I was scared as hell. I really
didn't think it would be for me. And as
most of you probably know at this point,
I was wrong and ended up really really
liking server components to the point
where I was their main defender. Yet
now, for most of the apps that I build
and deploy to production, I don't find
myself using server components very
often. I try to use them for things like
making a better shell for my apps. But
stuff like T3 Chat is basically not
using them at all. I'm just shimming
React Router in there. Which is why I
was kind of scared, but also excited
when I saw this blog post pop up. Are
server components actually improving
performance at all? Hypothetically, they
should be. Server components should be
the magic perfect solution where code
that doesn't need to be dynamic doesn't
get sent to the client at all. We just
send them the HTML and deal with
everything else on the server side.
There's so much that should make server
components an incredible experience both
for developers and for users. But if the
performance numbers aren't there, none
of this matters in the end. And I try my
hardest to be objective when there is
data to point to. And historically,
there hasn't been enough for me to
really lean into. So, I'm really excited
to dive into what Nadia wrote here and
discuss what the performance of server
components actually looks like and if
this was all worth it or if we just went
the wrong direction entirely. Despite my
love of server components, I'm not
actually paid by either the React team
or by Vercel. So, we need a quick break
for today's sponsor. Do you know what's
harder than coding with one hand?
Finding good coding resources as a
developer. It's actually getting
incredibly difficult now with all the AI
slop all over the web. It would be so
nice if you could have good resources
from actual experienced real life
developers, not just people cranking out
crappy tutorials on YouTube. Thankfully,
today's sponsor, Frontend Masters, is
here for just that. Tutorials that
aren't just going to teach you how to
code, but are going to help you level up
your skills to become a more senior and
more talented overall engineer. And what
makes their courses special is the
people making them. These aren't made by
full-time content creators and DevRels.
They're made by experienced engineers
working at real companies. People like, uh, okay, I guess Prime doesn't really count over at Terminal, but Brian Holt from Databricks or Richard from
zed.dev. Few people understand compilers
quite like Zed does to say the least. Or
like David from Microsoft and Xstate,
who's one of the most talented engineers
when it comes to state management work.
Scott from Netflix and so many more. I
am regularly surprised when I look
through the list of the different
courses, just seeing who made them. I'm
actually really excited for the JavaScript Hard Parts stream on September 17th. Check it out if you're
interested. And I'm sure that's going to
be really fun. But the value you're
getting here is insane. Over 250 courses
for a cheap monthly subscription. And
your company might even be down to pay.
It's already cheap, but if you use my
link, you'll get 25% off. That's an
insane deal for access to all of these
resources. You want to learn from the
best? Check them out today at
soyv.link/masters.
I am so excited to have real data on
this. I should have built something like
this myself, but I'm busy to say the
least. Have you heard of server
components? You probably have. It's all anyone has talked about in the React community for the last few years.
It's also the most misunderstood
concept. To be totally honest, I didn't
get the point for a while either. It's
way too conceptual for my practical
mind. It is definitely pretty conceptual and high-level considering the, like, weeds inside of components that we normally live in. Even just the fact
that server components have to start at
the root makes it relatively unintuitive
to grasp initially. We could fetch data on the server with Next.js and APIs like getServerSideProps way
before server components were
introduced. So what's the difference?
Well, getServerSideProps is like the single worst API ever defined. Like, it's so funny. People are mad about the current APIs; it's like they never touched getServerSideProps. It was the [ __ ] worst in literally every single way, a downgrade in every measurable way compared to what we have now with server components. But yeah, we've got to pretend that getServerSideProps doesn't exist to have this conversation, because otherwise I'll just be angry, because I [ __ ] hated that API so much. Oh god, I forgot about getServerSideProps. I have to use it daily on my Next 12 project. Yeah. Uh, fun fact: my blog
originally started because I wanted to
make fun of how much I hated the AirPod
Maxes. But the reason I like brought it
back and started posting a little bit
again before just going out on video was
me wanting to [ __ ] about how awful the type safety story was in Next.js four years ago now, which is kind of wild how fast the time has gone. But all of this was me bitching about the misery that was getServerSideProps. So here I had
getServerSideProps as an example to get data to this component, but we don't know if user exists, because we don't know what we're getting back in the component; it's a magic thing that gets passed down. You can manually type it, so I can define user as User from Prisma Client. But what if I only select id from user, or select a different subset? There's no guarantee you're passing the right data down, because these are just two magic things defined in your page: you have your default page function, which is a component, and then you have the getServerSideProps function, and there's no relationship between these. I
will never for the life of me understand why this is the part they copied in Remix, because it was bad here and worse in Remix. Anyways, I do not want to just keep complaining about an API that hasn't made sense for 4 years. Nadia here did
quite the comparison. She compared how
the patterns differ from both the
implementation point of view, but also
how data is fetched across the different
rendering techniques and the performance
impact of each of them in different
variations, which apparently led to it
all finally clicking for her. So
hopefully, if you're not at that point
where server components have clicked,
our read-through of this will get all of us
there. This is exactly what the article
does. It looks into how client side
rendering, serverside rendering, and
server components are all implemented, how JS and data travel through the
network for each of them and the
performance implications of migrating
from client side rendering to serverside
rendering to server components because
these are all different. People seem to think that SSR and RSCs are the same thing; they're both three-letter acronyms with "server" in them that are for React. But SSR is just taking components that render on the client and also rendering them on the server. Server components are components that only run on the server. Big difference. I built a semi-real multi-page app to measure all of this,
so it'll be fun. It's available on
GitHub in case you want to replicate the
experiments yourself. I'm going to
assume you've at least heard of initial
load, client side rendering, serverside
rendering, the Chrome performance tab,
and how to read it. Need a refresher?
Here are some articles. And again,
article and author will always be linked
in the description. Nadia killed it with
this, so definitely give her a follow if
you haven't yet. This is a very good
post, and authors writing stuff this
good deserve some attention. She also
has a book on web performance
fundamentals that seems really, really
good. And judging by what I've seen so
far, probably a worthwhile purchase. CJ, one of the most trusted people in my community, says that the Advanced React book Nadia wrote is also really good. And I take his word over mine. I'm sure it's great. All of this is linked in the
description. Introducing the project to measure: an interactive and beautiful website. One of the pages looks like this; it has an inbox. Some data on the
page is dynamic and fetched via rest
endpoints. Namely, items in the sidebar on the left are fetched via /api/sidebar and the messages on the right are fetched via /api/messages. The sidebar endpoint's quite fast, taking 100 milliseconds to execute. The
messages endpoint takes up to one
second. Someone forgot to optimize the
back end. Totally unrealistic. The whole
project is available on GitHub if you
want to take a look. So, let's define
what we're measuring. When it comes to perf, there's a million and one things that you can measure. It's impossible to say this website has good or bad perf without defining what exactly we mean by performance, good, and bad. Oh god, I
love this already. This is a very good
article. For this particular experiment,
I want to see the difference in loading
performance between different rendering
and data fetching techniques including
server components. For the purpose of
understanding them all and also
answering the question, are server
components worth it from a performance
perspective? Going to use the
performance tab of DevTools for
measurement with CPU at 6x slowdown and
network slow 4G. You can use Chrome
DevTools to arbitrarily slow down your
browser, slow down your computer and
your network in particular, but 6x
slowdown means a different thing on a
different computer. So, not always the
best way. And the way that they slow
down your CPU is not necessarily the
best representation of a slow device.
You want to make sure your [ __ ] works on
slow devices. Buy a slow device and test
it on it. It's the only way you can
know. This all does help a ton though. I
do recommend at least doing these types
of tests. She's going to be measuring the LCP, which is the largest contentful paint: how long does it take for the major content, like the core content, to render, even if it's just skeletons? Then sidebar items visible, which is when the things in the sidebar appear; messages visible, which is when the messages appear; and then when the page becomes interactive, when you can actually click things and expect them to work. These are shown in stages: the LCP, where the page skeleton exists but the sidebar and the inbox haven't loaded yet; then the sidebar loaded in, but the inbox still has to load; then the inbox loaded in with the messages; then the button for toggling light and dark mode becomes interactive, because the JS is loaded and is now active.
This is an important piece because when
that happens might even change
throughout. Oh boy, this is a fun call out: since in dev everything's using HTTP/1 servers, and the limit for how many things can run in parallel over HTTP/1 is six, you'll have requests queuing up in dev that would have otherwise been batched in prod, because you can only load so many things at once. She wanted to make sure she was copying how this would actually work in production with a real CDN, so she set it up locally by imitating CDN behaviors, using Caddy as a reverse proxy. So let's
start with the client side rendering
performance. CSR means that you load the
whole script on client and then the
script goes and fetches the data. So
here we have the script and the link. So
this is where the CSS and JS come from.
So you load the HTML. Your browser goes
and fetches these. Now they exist inside
of the context of the browser. Now the
JS can execute and do what needs to be
done. And until then, you don't see anything. To transform this empty div into a beautiful page, the browser needs to download and execute the JS files.
The files will contain everything that
you would write as a ReactDev. So here
you have your React components. You have
the layout, sidebar, main content, and
then something that will actually mount
it to the root element. And until then,
it's an empty page. React itself
transforms the entry point app component
into DOM nodes. Then it finds the empty
div by its ID and injects accordingly.
So if you look at the performance tab
here, you have the HTML, the JavaScript,
still an empty screen until the JS runs,
then it paints, then it goes and does
the fetch calls, and then it renders the
rest of the data once it comes in. But
notice here, the key is that the HTML
loads, you see nothing. The JS loads,
you still see nothing. The JS executes,
now you see things. The HTML loading and
the JS loading is not enough for you to
see things. As she says, in real life,
this will be much messier because you
have multiple JS files, sometimes
chained CSS files, and a bunch of other
stuff happening in the main section. If you record the actual profile for the project, you'll see this: you have the index.js, the vendor.js, the Radix editor, date functions, and the CSS all loading
thankfully in parallel, but all things
that have to complete loading before you
could start seeing any content.
Data fetching for the sidebar and messages is triggered in the JavaScript itself. You have a useEffect where the messages are fetched and then stored.
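As a rough standalone sketch of that pattern (the real code uses React's useEffect and fetch; the setState callback and hardcoded data here are stand-ins so it runs on its own):

```javascript
// Hypothetical sketch of client-side data fetching: the component
// renders its empty/loading state first, and only then does the
// fetch start.
async function mountMessages(setState) {
  setState({ messages: null }); // initial render: nothing to show yet
  const messages = await Promise.resolve(['Hello', 'World']); // stand-in for fetch('/api/messages')
  setState({ messages }); // re-render once the data arrives
}
```

The ordering is the whole point: no data request can even begin until this JS has been downloaded, parsed, and run.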
It could be any data fetching library, like TanStack Query, for example. In this case, we see TanStack Query here, nice. It doesn't really matter, though. What's important here is that for the data fetching process to trigger, the JS first needs to be downloaded, compiled, and executed. So no data is being fetched at all until all of those things happen first. That's the key thing to understand with SPAs: you need the page to load and trigger the load for the JavaScript, you need the JavaScript to load and execute, then figure out what data it needs to fetch, and after all of that is done, you then finally start fetching data. Which is
why when you don't have that JavaScript
cached, it can take up to 4.1 seconds to
get that first paint because it takes so
much time on this fake slow device and
slow connection to get all the
JavaScript to do that. So, you're
waiting for 4.1 seconds to see anything
on the screen. This is why people don't
like client side rendering. That is
really bad. That's enough time for
someone to think your website's broken.
Aside from everything related to the
nice dev experience and learning curve,
which are huge deals by themselves,
there are two main benefits to SPA
compared to more traditional websites
because obviously this is bad. So like
why would we ever do this? Because of
the dev experience, learning curves, and
a few other things. First thing is
performance. Transitions between pages
when everything's on the client and
there's no back and forth with the
server can be incredibly fast. This is
why we went single page app for T3 chat
because we wanted it to be as fast as
possible when you click around. When I
click other projects, I'm clicking now.
I'm clicking now. As soon as I click, my
other chats appear because I preload all
of the data. And when you click, I'm
just changing out what data is rendered
in JavaScript. Nothing is loading when
you navigate. Yeah. In her example,
navigating between pages takes only 80
milliseconds. It's as close to
instantaneous as it can get. The other
important piece is that it's cheap.
Ridiculously cheap. You can implement a
really complicated, highly interactive,
rich experience. Upload it to something
like the Cloudflare CDN. Have millions
of monthly users and still stay on the
free plan. Perfect for hobby projects,
student projects, and anything with a
large potential audience where money is
a significant factor. Depends on what
the APIs being called are. To be fair, I
still don't think the argument in of
like cost is the real reason that
serverside rendering matters. I have an
article about this on my blog. I never
talk about SSR isn't that expensive.
It's really not. This started with me
bitching on Twitter because I was so
annoyed at people having this
misconception. But SSR is not that expensive unless you game it to be a really slow render. It may not be free, but it's effectively free. It's so cheap.
With a SPA, there's no servers, no
maintenance, no CPU or memory
monitoring, no scalability issues. So,
what's not to love? If you need data
from a server, you will still have these
things. It's just that you're not also
rendering React on that server. And those 4-second load times aren't as bad as they look. They only happen the very
first time a user visits, but it also
happens whenever you push new code up.
Granted, for your landing page, it's
unacceptable, but for your software as a
service project where you expect users
to visit the website often, 4 seconds
will only happen once per deploy. Then
the JS is downloaded and cached by the
browser. Second and following loading
numbers will be significantly reduced.
Once the JS is in the browser cache, it's only 800 milliseconds for the LCP. Now, the final number I'm interested in today is
when does the toggle become interactive?
In this case, since everything shows up
only when the JS is executed, it will
match the LCP time. So, as soon as you
see the content, you can click the
content. Important distinction: at the same time the browser's showing you this UI,
the UI is working because the code that
shows the UI is the same code that does
what the UI does. So, now let's compare
with serverside rendering. No data
fetching. The fact that we have to stare
at a blank page for so long started to
annoy people, even if it was for the
first time only. Plus, for SEO purposes,
it wasn't the best solution. This is
because you do SEO by putting things in
the head tag that describe the page and
also occasionally content on the page
that is being scraped by Google and
whatnot. But if the content isn't there
until you load JavaScript, execute this
JavaScript and fill the page, a scraper
can't find that unless it's running a
full virtual browser to execute the
JavaScript and populate the page, which
is so much work that most search engines
don't bother or they put less effort
into the pages that require that. People
started scratching their heads to come up with a solution to how slow SPAs were, while still staying within React ideally, because that's just too convenient. No
one wants to give it up. We know the
entire React app at the very end looks something like this: DOM elements rendered from the App component. But what if instead you rendered it to a string, so you have this HTML string, the actual string that the server can then send to the browser instead of an empty div? Instead of one empty root div with the ID app, now you just pass the right content down. So you can do very simple server rendering where you render the app to a string, send that down, and the HTML has the JS tag so it can take over on the client right after.
It just needs one additional step: find and replace a string in the HTML variable and inject it into the server response HTML. With SSR, you replace the root with this HTML string and then send it down. This is a basic functioning server rendering setup. Funny enough, I wrote
very similar code when I was doing my
benchmarking of Cloudflare recently.
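A minimal sketch of that find-and-replace step (renderToString is React's real API, but fakeRenderToString stands in for it here so the example runs standalone):

```javascript
// Stand-in for React's renderToString: it returns the app's markup
// as a plain HTML string.
const fakeRenderToString = () => '<div class="app"><h1>Inbox</h1></div>';

// The same HTML shell a CSR app would ship: an empty root div plus
// a script tag for the client bundle.
const template =
  '<html><body><div id="root"></div><script src="/index.js"></script></body></html>';

// The one additional step: replace the empty root with the
// pre-rendered markup before sending the response.
const response = template.replace(
  '<div id="root"></div>',
  `<div id="root">${fakeRenderToString()}</div>`
);
```

Now the content is in the initial HTML response, visible before any JS runs, and the script tag is still there so the client bundle can take over afterwards.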
Now, the entire UI is visible right at
the beginning without waiting for any
JS. And this is server-side rendering as well as static site generation, because renderToString is an actual real API supported by React. It's the core foundation behind most of the SSR and SSG frameworks, and also my benchmark. A lot of people don't know this: there's an API in React, renderToString, where you call it with a component and it will respond with the HTML for what it would have rendered. If you do this for her
existing client rendered project, it
will become a serverside rendered
project. The performance will shift
slightly. The LCP number will move to
the left right after the HTML and CSS
are downloaded since the entire HTML is
sent in the initial server response and
everything's visible right away.
So, this is the key. HTML gets sent
down. It won't look quite right because
the CSS hasn't loaded yet, but you don't
have to wait for all the JS to load. You
just have to wait for the CSS to load.
And if you want, you can inline the CSS
in the HTML. You could even bundle it in
via some fun plugins for Next.js. And
now once the whole HTML is loaded,
everything is visible. Nothing works
yet, but it's all visible. So, first
you'll see that the LCP number where the
page skeleton's visible should be
drastically improved. But you'll still
need to download, compile, and execute
the JS in exactly the same way because
the page is supposed to be interactive.
So, like, again, you have to load all the same JS; you're just getting better HTML. The only difference between SSR and CSR, if you just turn on SSR for a project, is that whatever initial state the JS would render is sent down by default. So the HTML is sent down with the empty LCP state, and then the JS has to spin up and do all of the same work it would have otherwise, and then all the content comes in. So you're just moving the LCP down. You're not going to change anything else. And while you're waiting
for all of that, the page is visible.
The gap between the page being visible
and waiting for the JS to load to be
interactive is the time where the page
will appear broken. Most frameworks will
now cache events. Like if you click a
thing, but the JS hasn't loaded, there's
like a really small inline bit of JS
that will cache the things you did, and once the JS is loaded, will replay those events. So usually you won't even
feel this. That said, there are other
frameworks that are built around their
hatred of this desynchronization state
where there's UI you could see but can't
use yet. This was Qwik's whole thing, if you're familiar with Qwik. If you're
this point. This is why the time to
interactive is so interesting because
during this time the toggle and the
header won't work. If you implement
serverside rendering this way where
you're not building a queue system or
using a framework that has this working
already, that broken window is going to
suck because those are clicks that
effectively disappear because there's
nothing there to keep them. But most modern frameworks have solved this problem. Also, only the LCP mark has moved, because the JS still has to load, execute, and then start fetching. The
data fetching doesn't happen any sooner
with this pattern. So the server-side rendered version has successfully knocked down the time until the page is visible by over 2x, but every other number is effectively exactly the same. So what if we do some server-side
data fetching too? Because remember, the data is not being loaded; that happens after. If we want to do the data fetching on the server, we can. So here we fetch the sidebar promise data, we get the messages promise data, and then we pass those down and render the app with that data already loaded. So we don't have to call React Query anymore. We call renderToString and pass the messages and the sidebar content that we loaded here. Again, she's not using a framework like Next; she's doing this as plain server functions that you could put in Express, which is really cool because it shows the concepts at a deeper level.
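Roughly, the server handler she's describing looks like this (the resolved promises stand in for the /api/sidebar and /api/messages calls, and fakeRenderToString stands in for React's renderToString, so this runs on its own):

```javascript
// Stand-in for renderToString: render the app with the data already
// in hand, no client-side fetching needed.
const fakeRenderToString = ({ sidebar, messages }) =>
  `<div id="root"><aside>${sidebar.join(', ')}</aside><main>${messages.join(', ')}</main></div>`;

async function handleRequest() {
  // Both fetches must resolve before any HTML can be produced, which
  // is why this pattern blocks the whole response on the slowest call.
  const [sidebar, messages] = await Promise.all([
    Promise.resolve(['Inbox', 'Sent']),  // stand-in for fetch('/api/sidebar')
    Promise.resolve(['Hello', 'World']), // stand-in for fetch('/api/messages')
  ]);
  return fakeRenderToString({ sidebar, messages });
}
```

Note the trade-off baked into the shape of the code: nothing is sent to the browser until every data promise resolves.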
We have a layout, sidebar, main content
all being passed from here. But we also
need hydration to work, so we need these components to have the data when the JavaScript loads on the client too. The solution there is either you make them refetch it, or, the easiest thing, you pass it as window data inlined in the HTML, and then on the client side you check: is window defined? If it isn't, you use what was passed to the app, and if it is, you use the value on the window that we assigned. This seems like a hack, and I hate that I'm talking about getServerSideProps again, but that's how it worked: we were just binding weird globals to the window. It's not as hacky as it seems, or maybe it is, but it's how we all did it. And now the page structure changes
again. We have the HTML, the JS loads,
then the JS runs and the page is
interactive. But when the page loads,
all the content's already there. But
that adds new problems. So here we see it takes 2.1 seconds for the page to hit LCP, but you have the sidebar and messages all already there by the time that hits. So you have the full page content ready to go almost immediately, but it actually takes longer for the content to be interactive, because while you're waiting for that data to fetch, you have no HTML. If the server
is doing those data fetches, the ones
that are here, effectively, we've taken
these two fetch calls and put them
before the HTML loads. So the HTML
hasn't finished loading yet. So there's
no way for the browser to start fetching
the JavaScript yet. So you're now
blocking the JavaScript on the API
calls, which means that the time till
you can interact has gotten worse. And
you're staring at a blank page for
longer because you're waiting for all of
that to come in before you see anything.
The benefit of this is that there's no
loading states. You just go from browser
blank page load to content already
there. But this comes at the cost of now
you are delaying how long till users can
click because you pushed these API calls
ahead of the HTML. The LCP degraded
compared to the previous solution
because we're blocking the response on
the data fetches. So how does the Next.js pages implementation work? So this is the old school getServerSideProps solution.
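For reference, a stripped-down sketch of that pages-router shape (names and data are illustrative; in a real Next.js page these are two separate exports with no type-level relationship between them):

```javascript
// Runs on the server for each request and returns props for the page.
async function getServerSideProps() {
  const sidebar = await Promise.resolve(['Inbox', 'Sent']); // stand-in for a real fetch
  const messages = await Promise.resolve(['Hello']);        // stand-in for a real fetch
  return { props: { sidebar, messages } };
}

// Nothing ties this parameter shape to what getServerSideProps
// returns, which is the type-safety gap complained about earlier.
function Page({ sidebar, messages }) {
  return `sidebar: ${sidebar.length}, messages: ${messages.length}`;
}

// Roughly what the framework does per request: run the data function,
// then hand its props to the page component.
async function simulateRequest() {
  const { props } = await getServerSideProps();
  return Page(props);
}
```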
In getServerSideProps, we fetch this data and we return it. It's doing basically the exact same thing she did. So it
shouldn't be that different. Yeah,
client side data fetching is pretty much the same as the server-side rendering DIY solution she did, with less time to interactive, because I'm sure Next is doing some better things for that. And the server-side data fetching still makes the time to LCP worse, but everything else stays pretty flat from there. The interaction time is 3.5 seconds, which is still better than the non-Next solutions. Like, when people
say use a framework, this is what they
mean. You can [ __ ] all you want about Next performance, but in a real-world-ish scenario, it's still performing significantly better than the
vanilla solution. So if you look at my benchmark where I compare the raw throughput for SSR on Next.js versus vanilla React, it makes Next.js look like a terrible choice. But when you're looking at how this impacts actual client-side behaviors, you'll see why Next.js exists. It's doing a lot of other
things to make the performance better
for users. Yeah, the LCP is slightly worse for the custom implementation. Sidebar and messages, on the other hand, show up a second earlier in this case. It's a very visible example of what happens when code splitting is done differently. Next splits the JS into many more chunks than the custom solution, so they can all load in parallel, which helps a lot when fetching multiple things at once. But those parallel files steal some of the bandwidth from the CSS, so the CSS download takes longer than it would have otherwise.
Interesting point, but having them all in parallel makes it faster to download the JS overall, which helps with the interactivity gap. Makes sense. And now
for the reason we're all here, server
components. Very curious to see how this
part does because ideally it should be
the best of both where you have an HTML
file that has enough data in it to do an
LCP and start fetching the JavaScript
while also in parallel fetching the data
from the APIs. The biggest issue with
SSR is the no interactivity gap when the
page is already visible but the JS is
still downloading and initializing.
That's not the biggest issue. The
biggest issue with SSR is that you can't
be doing things in parallel. You're effectively choosing between three orderings. Okay, so there are three things that we have to do: we have to send HTML to the client, we have to load JS, and we have to fetch data. In CSR and SSR,
these things all block each other. We're
effectively changing the order things
happen. With client side rendering, you
first send HTML that is empty to the
client. So it has nothing but the head
tags to fetch JavaScript. You might have
a hard-coded skeleton, but for the sake
of a basic CSR, we're sending an empty
div and a bunch of links to data to go
fetch. Then we load the JS that is
indicated in that HTML in the header or
in the head tag, not the header. Same
difference. Then once the JS loads, the
JS has to be parsed and run. And then
finally we can start fetching data. And
the key is that you don't see the UI
until here. You don't see anything until
this point. So you're just looking at an
empty page until then, or a skeleton if you hardcode one. But you're not looking at anything until then. And
each of these has to happen one then two
then three, then four. They all block each other. With server-side rendering without data fetching on the server side, we first generate or serve a cached version of the HTML, so the HTML with the skeleton is sent. Ideally you're getting this from a cache, but you might not be; we'll assume it's coming from a cache. Now you can see the content, but everything else is still the same from there: you have to load the JS, parse the JS, then start fetching data. What
we've done here is change when you start
to see the page looking how you expect,
but everything is still blocking. Server
side rendering with data is where things
start to get more different though. So
server fetches data to HTML
sent to client. Now you can see, but
considering the fact that data fetch is
the thing that takes multiple seconds,
now you're not seeing anything until
that's done. So it's actually the
longest time till you see anything. But
then there's no loading state. You see
everything immediately, but it's not
interactive yet. The JS still has to
load. And then parse JS and make page
interactive. So now you see better
content. So I also want to make this a
full line because you're seeing the full
content. You just can't interact yet.
The problem here is that you can't do
these things in parallel. In an ideal
world, you'd be able to fetch the data
at the same time you're sending HTML to
the user and then they're loading JS
while you're still fetching the data. I
don't want this to be 1, 2, 3, 4 anymore. Like, if we make these blocks, we have HTML, we have load JS, we have run JS, a bit smaller, and then we have the data fetch, which is the longest part. So all we've been doing
this whole time is changing the order
that things happen in the standard SSR.
We are just improving the quality of the
HTML that comes down. Maybe we increase
how long it takes for the HTML to come
down, but you're just getting better
HTML. That's all CSR versus SSR is by default. SSR with data fetching is reordering things a bit. So: CSR, SSR, then SSR with data. The difference here is that the data fetch gets moved up, so now you get a complete page sooner in terms of what it looks like, but you're not meaningfully decreasing the amount of time it takes for everything to happen. We're just shifting around what is where. If RSCs are what they're promising, what they should be doing is this.
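A back-of-the-envelope version of those blocks, with made-up durations, shows why the ordering games can't beat actual parallelism:

```javascript
// Made-up step durations, just for illustration; the shape of the
// math is the point, not the numbers.
const steps = { sendHtml: 1, loadJs: 2, runJs: 1, fetchData: 3 };

// CSR, and SSR with data fetching: each step blocks the next, so
// reordering them never changes the total.
const sequential =
  steps.sendHtml + steps.loadJs + steps.runJs + steps.fetchData;

// The parallel ideal RSCs promise: the data fetch overlaps the JS
// download and execution, so the total is gated by whichever track
// is slower, not by the sum.
const parallel =
  steps.sendHtml + Math.max(steps.loadJs + steps.runJs, steps.fetchData);
```

With these numbers, the sequential pipeline costs 7 units no matter how you reorder it, while the overlapped one costs 4.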
So let's see how the results come out.
Introducing RSCs. To recap previous sections: fetching and pre-rendering on the server can be really good for the initial load performance numbers, but there are issues. She cites the no-interactivity gap here. That's where I just broke off. Like, that's a problem, but in my opinion, the problem is that we're not parallelizing anything. Oh, look: the second issue is data fetching.
If I want to prefetch messages on the
server, thus reducing the time to wait
for the messages to appear, it will
negatively affect the initial load and
the time where the sidebar items show
up. Because what's happening here is
since we are waiting to load the JS
until the data fetch on the server, it
takes longer until you can click because
we're not loading the JS as early as we
were before. So this ends up making it
take longer till the page is
interactive,
which sucks. We wait for the data first,
then we pass the data to renderToString,
then we send the results to the
client. What if our server could be
smarter? Those fetch requests are
promises. They're async functions. We
don't need to wait for them to start
doing something. What if we could
trigger those fetches, start rendering
React stuff immediately, send something
down to the client; when the sidebar
promise resolves and the data is
available, render the sidebar portion,
inject it into the server page, and then
send that to the client and do the same
for messages, replicating the exact same
structure of data fetching that we had
in client side rendering before, but on
the server. In theory, if this is
possible, it could be crazy fast. We'd
be able to serve the initial rendered
page with placeholders at the speed of
the simplest SSR and still be able to
see the sidebar and message items way
before any of the JS is downloaded and
executed. So to do this, we have to
understand server components. A typical
React component quite often just lays
out HTML tags on a page like the sidebar
component here. There's a bunch of divs
and links. It's all still JS. Exactly.
This code is included in all the JS
files that contribute to the no
interactivity gap. But there's no
interactivity here. It's just the HTML.
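As a sketch of the two kinds (both components and their props are hypothetical), the difference is whether the code has to ship to the browser at all:

```typescript
import { useState } from "react";

// Sidebar.tsx — server component territory: no state, no handlers.
// This can be rendered once on the server and shipped as plain HTML.
export function Sidebar({ chats }: { chats: { id: string; title: string }[] }) {
  return (
    <nav>
      {chats.map((chat) => (
        <a key={chat.id} href={`/chat/${chat.id}`}>
          {chat.title}
        </a>
      ))}
    </nav>
  );
}

// ThemeToggle.tsx (marked "use client" at the top of its own file) —
// this one needs state and an onClick, so its code must be in the
// bundle and hydrate on the client.
export function ThemeToggle() {
  const [dark, setDark] = useState(false);
  return <button onClick={() => setDark(!dark)}>{dark ? "dark" : "light"}</button>;
}
```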
Like we don't need to send this as JS.
We can send less JavaScript, ideally, if
it's something that doesn't change. A
good example of this is like a terms of
service or like terms and conditions
page because those are just a bunch of
text. You don't have to be able to click
the div. You don't need to send that as
JS. The only reason this code is
included in the JS bundle is because
React needs it to construct the virtual
DOM because React needs to know where
everything is on the page to change
anything on it. But ideally, we don't
have to send this as JS. In the current
SSR implementation, whether it's Next's
pages router or her custom hacky solution,
process of extracting the tree for the
React components happens twice. The
first time when you do the pre-render on
the server and the second time from
scratch on the client side because when
you do serverside rendering, the client
and the server both have to run the same
JavaScript code and come to the same
HTML so it can all link together and
provide you the experience you expect.
And that's hydration if you're curious.
What if we preserve the tree we
generated on the server and just sent it
to the client? So React doesn't need to
recreate the whole thing and just
recreates the parts that are client
side. We could send less JS, reducing
the size of the bundle. We wouldn't have
to iteratively call all of those things
in order to create the whole tree. So
how do we send this data? You hack it
into the window. And this is actually
what happens in most RSC
implementations. She migrated to app
router and sure as hell, this is what
she saw: __next_f.push calls pushing
all of the data for this tree.
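The exact wire format is an implementation detail, but a simplified, hypothetical row looks something like this — a JSON-ish description of React elements rather than the component code that produced them:

```typescript
// An illustrative sketch of one serialized row in an RSC-style payload
// (the real __next_f format is more involved). "$" marks a host
// element; the remaining slots roughly mirror (type, key, props).
const row =
  '["$","nav",null,{"children":[["$","a","1",{"href":"/chat/1","children":"First chat"}]]}]';

const [marker, tag, key, props] = JSON.parse(row);
console.log(marker, tag, key, props.children.length);
// The client can rebuild its tree from this description without
// re-running the server code that generated it.
```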
It's a modified but recognizable tree of
objects that represents what should be
rendered on the page. This also does
mean you're sending the data twice.
You're sending it as HTML and you're
sending it in this embedded JavaScript
tag. And when people say that server
components don't need a server, this is
what they mean. You can at build time
generate the HTML that isn't going to be
dynamic and have a bunch of different
HTML pages that have this embedded in
the script tag in the page. So you don't
have to load JavaScript that isn't used.
You can have JavaScript in a React
project that is client only that at
build time generates different HTML for
different routes and then just never has
to run on a server. It's just not
running those components. Part of why I
think the server component name is bad.
Part of why the rollout was bad.
Server components should be really good.
There are a lot of reasons they're not.
People not getting this concept is
part of it. With a server component, all
that you're sending to the client is the
HTML and that weird structure that we
just showed. One of the advertised
benefits you'll see that go together
with server components is a reduction in
bundle size. In theory, if all the code
and libraries stay on the server and only
the final structure is sent to the client,
the amount of JS downloaded should
noticeably decrease. We know the impact
of too much JS. This sounds like a good
idea. Let's see how it works out.
There's also async components, one of
the cool parts of server components. You
can have a component that fetches data
and you can suspend above which enables
streaming. In a normal SSR
implementation, the server will generate
the entirety of the HTML and then send
it all at once when it's ready. With
streaming, you can first create a Node
stream and send the pipeable stream data
down. So, it sends it chunk by chunk as
it's ready. What it does is it sends the
majority of the page and then as you
finish it appends new elements to the
end with script tags that move them to
the right place. It's actually really
cool how it works. You don't need to
know the details. What you need to know
is your components effectively can pop
in when they're ready from the same
server call. The chunk boundaries for
this process are not React components,
by the way. It's
components that are wrapped in Suspense.
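At the raw React API level, that streaming looks roughly like this (a sketch, assuming an Express-style server; Sidebar is a hypothetical component that suspends while its data loads):

```typescript
import { Suspense } from "react";
import { renderToPipeableStream } from "react-dom/server";
import type { Express } from "express";

// Hypothetical pieces for illustration.
declare function Sidebar(): JSX.Element;
declare const app: Express;

function App() {
  return (
    <html>
      <body>
        <header>Inbox</header>
        {/* A chunk boundary: the fallback streams out with the shell,
            and the real sidebar is appended later with a tiny inline
            script that moves it into place. */}
        <Suspense fallback={<p>Loading sidebar…</p>}>
          <Sidebar />
        </Suspense>
      </body>
    </html>
  );
}

app.get("/", (_req, res) => {
  const { pipe } = renderToPipeableStream(<App />, {
    // Fires as soon as everything outside Suspense boundaries is ready.
    onShellReady() {
      res.setHeader("Content-Type", "text/html");
      pipe(res); // keeps streaming chunks as each boundary resolves
    },
  });
});
```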
Remember this. It's crucial. We'll see
significant changes when we start
measuring. The working implementation of
this for a multi-page app like hers is
insane. The docs don't cover it well. I
spent an embarrassing number of hours
trying to make it work and the end
result would have been multiple really
complicated files and still half-broken.
It's not a simple renderToString call
like it was with the normal SSR. Yeah,
doing this DIY and not doing it via a
framework sucks, because that interop
between the server and client is
non-trivial, and given some of the hacks
that have been discovered to make it
possible, there's a reason that
there are only two server component
frameworks that are viable. It's not
easy to implement. But when you use app
router, it gets a lot easier. And as she
says here, Next.js's app router is
basically a synonym for server
components and streaming at the moment.
Yeah. Yeah. I wish this wasn't the case.
I really do.
But it is, and it's really hard to use
them without it. Server
components now are experimentally
supported in React Router which is cool
but next is still how you're using it
almost always. So let's measure app
router after the lift and shift
migration. She points out the fact that
Next.js has a ton of other things like
optimizations, caching assumptions,
transformations that would make this not
a viable test. If she just rewrites
everything, it won't be a fair comparison
because there's no way to distinguish
Next.js benefits from server component
benefits. But if she migrates to pages
first, then it's possible to compare
between the pages version that is doing
serverside rendering and the app router
version that's doing RSC stuff. She
already has the existing version with
pages and client side data fetching.
This is the app with the smallest LCP so
far, the second row in the table.
Migrate that to the new framework with a
completely new mental model, completely
new way to fetch data. Let's see. Lift
and shift as much as possible. Make sure
the app works and nothing is broken. In
the context of this experiment, it means
reimplementing routing a bit and putting
use client in every entry file. This will
force app router into client components
everywhere, isolating the effects of
the new framework. Every benefit or
degradation here will be because of the
framework, not because of server
components or streaming. So this is
without server components just moving
everything over to app router. She saw a
meaningful improvement from the pages
implementation. 1.76 seconds to 1.28
seconds. Although it looks like the
sidebar took longer to load, from 3.7 to
4.4, and 4.2 to 4.9 for the main content.
Interesting that LCP got much better.
Everything else got worse.
Yeah, 500 milliseconds better for LCP,
700 milliseconds worse everywhere else. Very
interesting. Apparently, app router
delays all JS until after CSS is loaded.
The pages version didn't do that and
would load them in parallel. As a
result, the JavaScript loading stole a
bit of bandwidth and the CSS was loading
slower in pages, thus delaying the LCP.
That's actually a really cool detail I
did not know. I get why they would do it
in order to keep like weird edge cases
with layout shift from happening. That's
very interesting. Also, app router is
very busy on the main thread. At least
100 milliseconds worth of tasks more
than pages was cuz app router does a lot
more stuff on the client side. Yeah, in
total the effect I'd say is a bit meh.
Maybe further refactoring would make it
better. Not much you could do on her
side other than maybe inlining the CSS
cuz she's using Tailwind anyways.
There's a flag for that which would keep
the CSS from blocking which could help
here. Yeah, even CJ didn't know this.
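For reference, the flag being talked about is, as far as I know, an experimental Next.js config option (sketch below; check the current Next.js docs before relying on it, since experimental flags move around):

```javascript
// next.config.js — inline the generated CSS into the HTML instead of
// loading it as a render-blocking stylesheet.
/** @type {import('next').NextConfig} */
module.exports = {
  experimental: {
    inlineCss: true,
  },
};
```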
Yeah, that's how you know we're
deep. It's actually really nice to
see people who don't necessarily know
all of the history of why all of these
things happened on the React and Next
teams, but do know how to profile things
really well because she's finding things
I wouldn't have because I'm too in the
weeds of how these things were all
implemented to think about this level of
it. There's an XKCD on this: "Silicate
chemistry is second nature to us
geochemists, so it's easy to forget that
the average person probably only knows
the formulas for olivine and one or two
feldspars. And quartz, of course."
Yeah, I don't know what people don't
know in this world because I'm too deep.
But it's nice seeing that like because
she doesn't know about the inline-CSS
feature in this, she's finding
things I wouldn't have found. But now we
can compare with the server component
implementation when we start removing
the use client directives in
certain places. The JS was reduced. All
right. On some pages it was just a
little bit. The homepage only dropped about
2%, but others dropped by a lot. The login
page went to almost zero, from kilobyte
values to bytes. Most of the shared chunks didn't
change at all and the performance impact
on all the metrics that are important to
her on the inbox page were exactly zero.
So server components by themselves
without rewritten data fetching in my
app didn't have any performance impact
and I suspect in most real world messy
apps where use client is slapped
randomly here and there without much
overthinking and eventually bubbling up
to the very root of most pages it'll be
the same. Yeah, I think that's fair.
What happens once you change the data
fetching a bit? Instead of useEffect-ing
the data on the client, you have an async
sidebar component that waits for data,
parses it, and then renders accordingly.
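A sketch of that shape (the endpoint and fields are invented for illustration):

```typescript
// Sidebar.tsx — an async server component: it awaits its data
// directly, and none of this code is shipped to the browser.
export async function Sidebar() {
  const res = await fetch("https://example.com/api/chats"); // hypothetical API
  const chats: { id: string; title: string }[] = await res.json();
  return (
    <nav>
      {chats.map((chat) => (
        <a key={chat.id} href={`/chat/${chat.id}`}>
          {chat.title}
        </a>
      ))}
    </nav>
  );
}
```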
The complicated part here is that when
you do this, the sidebar component can't
have a parent that's a client component
that mounts it. Like sidebar can't be
called from a client component. The root
has to be server components. And then up
to where this is rendered, it has to be.
If you have client components between,
you can render them, but you have to
pass sidebar as a prop to those or as a
child to those other things. Once she
did it, let's see the results. Oh,
1.78 seconds for LCP. That's a little
scary that it got worse for that. It
shouldn't theoretically, but it stays at
1.78 for showing all of the content
here. Did she not do suspense for this
one?
Ah, she didn't. Yep. Called it. So what
happened here is she's fetching the data
and it blocks the page load so nothing
gets to the client until the data is
fetched. But you can wrap components
with suspense and show a loading state.
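Roughly like this (hypothetical components; note the client shell receives the server-rendered sidebar as a prop rather than importing it, per the constraint above):

```typescript
import { Suspense, type ReactNode } from "react";
import { Sidebar } from "./Sidebar"; // hypothetical async server component

// Hypothetical client component ("use client" at the top of its own
// file) that just lays out whatever server-rendered tree it's handed.
declare function Shell(props: { sidebar: ReactNode }): JSX.Element;

// A server component page: the Suspense boundary makes this a
// streaming boundary, so the rest of the page goes out immediately
// with the fallback, and the real sidebar streams in when its data
// resolves.
export default function InboxPage() {
  return (
    <Shell
      sidebar={
        <Suspense fallback={<p>Loading chats…</p>}>
          <Sidebar />
        </Suspense>
      }
    />
  );
}
```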
This is the one I care about. There we
go. 1.28 seconds for LCP, 1.28 seconds
for the sidebar cuz the sidebar loads
fast enough. It gets streamed in almost
immediately.
Wait, why is messages 1.28 as well? That
should not have happened. I'm not sure
about her LCP measurement here. I might
have to run this myself. Interesting.
It's so fast that all the numbers merge
together again. Something somewhere does
some form of batching, I'd assume. And
those three ended up in the same chunk.
But what she did is she increased the
load time to be 3 seconds for the
sidebar and 5 seconds for messages. So
that it will increase how long the whole
thing takes which makes the benefits
much more visible here. You can see we
get to start loading the CSS and then
the JS way earlier, but the HTML keeps
loading because it's still sending data
as it's ready. With traditional SSR,
you have to wait for the page to load,
then all this stuff happens, but the HTML
is not sending anything more. So, the
TLDR here: client side rendering is the
worst from an initial load point of view, as
expected. But the page is
interactive as soon as it appears, and
transitions between pages are fast.
Serverside rendering can vastly improve
load numbers, but it comes at the cost
of the no interactivity gap and also
skeletons become a harder problem
to solve there. You're stuck waiting on
browser load states much more, so I think
client side with skeletons is more ideal
than traditional SSR in most cases.
Fetching data on the server will slow
the initial load but will also make the full
page experience visible much earlier.
Migrating from traditional SSR to server
components with streaming, namely from
pages router to app router can make
performance worse if you're not careful.
You need to rewrite data fetching to be
from the server. And don't forget
suspense boundaries to see any
improvements. This is significant dev
effort and it could require
rearchitecture of your entire app.
Migrating from pages to app router can
make no interactivity gaps worse because
of the delayed JS download, but that
might be browser dependent and also you
can preload the CSS. Server components
alone don't improve performance if the
app's a mix of client and server
components. They don't reduce bundle
size enough to have any measurable
performance impact. Streaming and
suspense are what matter. The main
performance benefits come from completely
rewriting data fetching to be server
components first. Yep. Like
realistically speaking, if you're happy
with the way your routing is working and
you don't have any interest or plans to
move data fetching to server, you
probably shouldn't make the move yet.
Where server components get really cool
is when you are putting some effort into
thinking through what components are
server, what components are client, and
what ones will be streamed in when
they're ready. Chat's been calling out
that she didn't mention partial
pre-rendering at all.
Eh, the way that she was testing, PPR
doesn't really matter. When you generate
the HTML here during this step on the
server, partial pre-rendering is caching
that result and putting it on a CDN so
that you don't have to wait for a server
to send the response.
But since she's doing a fake local
reverse proxy anyway that's going to
respond almost immediately, the benefit
of taking that chunk and putting it on a
CDN is near zero. So not having partial
pre-rendering here doesn't matter for
the way she's benchmarking. So I don't
know why everybody's saying that. Meh.
Next 16 also preloads the cached bit
which removes the network hop cost. I'm
not sure. Oh, so like on the server side
it skips steps because it has that cached
part of the tree. Not sure I follow. Oh
yeah, this is for navigation. So I
don't think any of these measurements
were navigation though. So the thing
that Nan just said is actually a
really cool benefit. Basically, partial
pre-rendering renders until the first
await or the first time you call
something that's user specific like you
check their headers or whatnot. So it
goes down your React tree until
you hit an await, caches everything up
to then, throws it in a CDN,
and now you can load that without the
server having to run. But more
importantly you can prefetch that for
all of the links that are currently
available on the page. So when you click
one, it can have that partially
pre-rendered piece ready to go.
So you see that immediately and then the
rest of the data starts streaming once
the server spins up, connects, and sends
you that data which is cool but that's
not what was being tested here. It is a
really cool benefit and it gives you
that same feeling that was being
described earlier with client side where
when you click something it changes
immediately. Um what are you saying
Wendro? I say pages router with client
side fetching has the best performance
overall. No it doesn't. I get the same
numbers. Pages router with client data
fetching does not have the best numbers.
Takes 4.2 seconds before you see
messages. Also, do you see how short
these times get once things are cached?
It gets [ __ ] free with app router
with server side stuff. It has the best
time to interactive with data fetching
without caching if you ignore the fact
that a bunch of the content is still
missing. But it has worse time to
interactive as soon as you consider the
JS might be cached and that you're
interacting with content that has to
appear still. I just... there is no world
in which when implemented correctly,
pages router is better than app router.
It just isn't. So I'm like, could you
click the toggle really quickly,
like milliseconds faster, when you're
still waiting for content that would
have been loaded in the app router
version? Maybe. But that's purely
because of the way the CSS is loading
and like that's not a real problem. You
lose bandwidth for fetching the unneeded
pages though, right? Yes. But like
almost none and it doesn't do that until
everything else is loaded, which is
fine. I think it's great that modern
frameworks will prefetch things so that
when you click them, it feels faster. No
one is getting billed because they loaded
another 500 kilobytes of HTML that makes
navigating feel better. I think that's
fine as long as it's not blocking other
data fetches from happening that are
more important. Overall, this is really,
really good though. Thank you again to
Nadia for writing this. This is a
fantastic write-up. One of the best, like,
first-principles breakdowns: making sure you
understand the concepts and you're
testing the right things and doing the
due diligence to test the actual pieces
that matter throughout it. I've never
seen someone do a breakdown of these
different methods and what the
performance actually looks like while
also educating the reader as to how
these things differ. This whole thing
was great and the only missing details
are actually part of what makes it so
good because all of those are just in
the weeds [ __ ] that only matter if you
care too much like I do. And even then,
she gave me a great opportunity to talk
more about it. Once again, I highly
recommend her work. If you want to learn
more about these things, advanced React,
web performance fundamentals, and more,
check out her work. It's all linked in
the description for a reason. She's good
at what she does. Thank you all as
always. Hope you learned something. And
until next time, keep server rendering.
Server components should be making our apps way faster, but are they?

Thank you Frontend Masters for sponsoring! Check them out at: https://soydev.link/masters

You should buy the author's book: https://www.advanced-react.com/

SOURCE
https://x.com/adevnadia/status/1980492315581198412
https://www.developerway.com/posts/react-server-components-performance

Want to sponsor a video? Learn more here: https://soydev.link/sponsor-me

Check out my Twitch, Twitter, Discord more at https://t3.gg

S/O Ph4se0n3 for the awesome edit 🙏