Google's Antigravity just released, and in this video I'm going to show you how to use it, how to set it up on your local machine, and three major features you need to know if you're using Antigravity for vibe coding. First things first, I'm going to show you one of the features, the Agent Manager. You can think of it as an orchestrator that coordinates multiple AI agents running at the same time inside your coding IDE, so we're not wasting any time while vibe coding. The second feature is the browser agent, which navigates to our application, examines it, and helps us figure out what we can improve. And lastly, I want to show you the ability to use multiple models inside our coding editor: we're not limited to just the Gemini models; we also have the ability to use Claude models, GPT models, any model we like. Okay, so
pretty much these are the three main features I'm going to show you in Antigravity. I'm also going to show you the development workflow I follow to get the highest accuracy when using Google Antigravity for vibe coding. To do so, we simply provide a prompt describing what we want to build, then pass it to plan mode to create a task list for how the AI agent is going to perform the task. Then we pass that task list, the instructions, to the large language model to build the application. Then we test the application using the browser agent we just talked about, have it generate a report on what went well and what didn't, and keep cycling through until we have a polished application. So pretty much that's what we're going to cover in this video. With that being said, if you're interested, let's get into it.
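The workflow just described (prompt → plan → build → browser test → refine) can be sketched as a simple loop. This is only a minimal illustration with hypothetical stand-in functions, not an Antigravity API:

```python
# Minimal sketch of the prompt -> plan -> build -> test -> refine loop.
# All helper functions are hypothetical stand-ins, not Antigravity APIs.

def plan(prompt):
    """Plan mode: turn the prompt into a task list (stubbed)."""
    return [f"task: {prompt}"]

def build(tasks):
    """Have the model execute the task list (stubbed)."""
    return {"completed": list(tasks)}

def browser_test(app):
    """Browser agent: open the app and report remaining issues (stubbed)."""
    return []  # an empty list means nothing left to fix

def vibe_code(prompt, max_rounds=3):
    tasks = plan(prompt)
    app = None
    for _ in range(max_rounds):
        app = build(tasks)
        issues = browser_test(app)
        if not issues:
            break  # the report came back clean
        tasks = [f"fix: {issue}" for issue in issues]
    return app

app = vibe_code("add a featured projects section to the landing page")
print(app["completed"][0])
```

In practice the "issues" come from the browser agent's report, and each refinement round feeds those findings back in as a new prompt.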
Alright, so to get started with Google Antigravity, first things first, we're going to download it for our operating system; in this case, I'm downloading it on macOS. Once I download and open the application, it navigates me to the onboarding session. I'm just going to follow along, setting it up by importing my VS Code settings. For the editor theme, I'm going to choose the imported theme we have, Solarized Light, and click Next. For the Antigravity agents, we're going to choose the recommended option, agent-assisted development. And for configurations, I'm just going to use the default and click Next.

Alright, once we've finished the onboarding process, this is our Antigravity code editor. One of the main features of Antigravity is the Agent Manager. To access it, simply press Cmd+E, or click Open Agent Manager right here, and you can see it navigates us to the Agent Manager. Think of the Agent Manager as an orchestrator that helps us run multiple agents at the same time. For example, you can see the different workspaces we've created inside the Agent Manager: we have our main website, we have our bookkeeping application, and we can add a new one by simply clicking Open Workspace
right here. So, what we're going to do here is open a new conversation. I can use the context feature to reference any files or folders inside our project, attach any files or images we have, and change the conversation mode: plan mode, or a fast mode for direct execution. We can also switch between models, for example Gemini 3 Pro, as well as Claude and GPT, so we're not limited to the specific models Gemini provides.

In this case, I'm going to enter a prompt saying: please look through the entire codebase, try to understand everything, and give me a rundown of what is currently going on with this application. It's a simple prompt to have it index our codebase, so I'm just going to submit the request and let it do its thing. While it's doing that, I also want to create a new conversation for the Eric Tech website. Here I'm going to say: based on this index.html page, examine it and try to understand what areas we can improve to get a higher conversion rate on this landing page. Once we have our prompt, we just send the request. At the same time,
I can also create another conversation in the same workspace. Here I can provide a prompt I wrote earlier; it's basically context on the information the agent needs to create a better version of the website. What I want to say is: based on this prompt, do some research online to find UI themes we can borrow from to improve our current website design. I also want you to open the browser, navigate to this website, take some screenshots, and do a comparison. Alright, so that's the prompt. It's not a perfect prompt, but we're just going to submit the request and see how Gemini handles it. So,
while Gemini processes the request, one of the features of Gemini Antigravity is the ability to open the browser and analyze the changes. For example, you can see it's going to create a to-do list; on the right is our UI theme research and comparison. First it's going to understand the prompt, then analyze the current website by opening it in the browser. As we speak, you can see it opens the site in the browser, takes some screenshots, and tries to analyze the entire website. Let's wait a bit until it has fully analyzed it.

Alright, after it has analyzed the entire website, you can see the things it captured, and here is the research it's going to do. It has analyzed the entire website by taking screenshots, and now it's going to research UI themes, like enterprise or high-ticket sales, and select the top candidates to make a decision. So
while that's running, I'm also going to jump back to the other conversation with another agent, in this case the bookkeeping application we've been working on. You can see the current status: it has completed these steps, and here is the project overview it has built up. Now I'm going to start a new conversation with this AI agent, saying: can you open the browser and log in with the following credentials (and I just provide the credentials). I want you to log into this application, take a screenshot of the dashboard, and take a screenshot of all the subpages inside the authenticated account. After that, I also want you to play around with the match page, where the user can match transactions with receipts, and check whether those features are working. If not, please log exactly what the problems are, and based on the user experience, suggest how we can deliver a better UX for this application. I'll have Gemini send this request and let it do its thing. Alright, so while that's running, let's
also check back with the Eric Tech website. Right here you can see it has done the research on the UI themes and completed the research and comparison task. Now, if I navigate to the website, this is what it looks like: we have our Amazon and Microsoft logos, our calls to action, and our education footer. And here is the light theme; we can switch to the light theme here, and this is our light theme right there. Overall, the style and theme look much better compared to the last version. Now, while that's going on,
let's also check back with the bookkeeping application. We can see that it did in fact open the application in the browser and navigate to different parts of the app, like the receipts page you can see here. After it takes some screenshots, it also navigates to the transactions page, and once everything is done it analyzes everything and eventually generates a plan based on what we have. Here you can see it generates a detailed report with screenshots and recommendations in the UX review.

Let's take a look at what it generated in the UX review doc. Inside this review report we have a bunch of screenshots for each page, like receipts, statements, and the match page. Inside the findings we have everything it found for each page, the problems, and the recommendations, and we can also add comments to this report to have it do a revision and continue from there. All
right. So far, we've gone over the Agent Manager, which we can use to orchestrate different AI agents running in the background, and the browser mode, which lets our AI agent open our application in the browser and verify or analyze the changes we've made.
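The browser-agent check described above (log in, screenshot each subpage, log problems into a report) can be sketched roughly like this; the `Browser` class, page names, and credentials are hypothetical stand-ins, not an Antigravity API:

```python
# Rough sketch of the browser-agent UX check: log in, screenshot each
# subpage, and collect per-page findings into a report.
# The Browser class is a hypothetical stand-in, not a real Antigravity API.

class Browser:
    """Stand-in for the agent-driven browser."""
    def login(self, url, user, password):
        self.url = url  # pretend we are now authenticated

    def screenshot(self, page):
        return f"{page}.png"  # pretend we saved an image

def ux_review(browser, pages):
    report = {"screenshots": [], "findings": []}
    for page in pages:
        report["screenshots"].append(browser.screenshot(page))
        # a real agent would exercise the page here and log any problems
        report["findings"].append({"page": page, "problems": []})
    return report

b = Browser()
b.login("https://example.com", "demo-user", "demo-pass")  # hypothetical credentials
report = ux_review(b, ["dashboard", "receipts", "transactions", "match"])
print(len(report["screenshots"]))  # one screenshot per subpage
```

The real agent fills in the "problems" lists from what it observes, which is what becomes the UX review doc shown in the demo.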
So now I want to talk about how we can use a different large language model. Here you can see we have a new prompt, and we can choose which model will process our request. In this case, I'm just going to send the request, and the goal is to add a new section, a projects section, to our static landing page. Alright, so
eventually you can see it has added the featured projects section to our website, with all three video demo cards working correctly with hover effects. If I look at the preview, this is the screen recording it made. If I scroll down, here is the feature, but it doesn't show the image from the Google Drive link. So I'm going to come back and say: the current cards for the projects don't show any images, so please add the following after the image URL to make it display the image. This is the "/view" suffix I was talking about, to be added to each thumbnail. I'll send the request and see how it works.

In the meantime, I'm also going to copy the README file for each of my projects: for project one, this is the README; for project two, this is the README; and for project three, this is the README. And please make sure to update the right information in the project description for each card. In this case, I'm sending the request to Claude 4.5 to have it update the project cards. We can see it's going to process the request, so let's wait a bit until it's done.

Now you can see it has finished verifying the projects. If I scroll down and open the application, this is what it looks like: our static page with its featured projects section, the three projects we have. If I click on one, it navigates us to the right project on our YouTube channel, and each card also has the project description, the skill sets, the tools we used, and so on.
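The thumbnail fix requested earlier, appending "/view" to each Google Drive image URL, can be illustrated with a small helper. The URLs and file IDs below are hypothetical, and the real change would be made directly in the page's HTML:

```python
# Illustrative helper for the thumbnail fix: append the "/view" suffix
# to each Google Drive link if it is missing. Hypothetical URLs only;
# the actual edit happens in the landing page's HTML.

def add_view_suffix(url: str) -> str:
    """Return the URL with a trailing /view, adding it if absent."""
    return url if url.endswith("/view") else url.rstrip("/") + "/view"

thumbnails = [
    "https://drive.google.com/file/d/abc123",        # hypothetical file ID
    "https://drive.google.com/file/d/def456/view",   # already has the suffix
]
print([add_view_suffix(u) for u in thumbnails])
```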
Now, before we jump into the next section, let me give a quick shout-out to the sponsor of this video, TestSprite. TestSprite is an AI agent built specifically for software testing. With the release of the TestSprite MCP, you get the chance to use it inside your coding IDE, like Claude Code, Cursor, Windsurf, and more; by simply adding the configuration to your MCP settings, you can use it right away. The TestSprite MCP not only understands what you want, because it first reads through the codebase and understands the documentation, it also validates the results your agent wrote. It does that by automatically generating a test plan from the PRD documentation and producing the test cases and test coverage without any manual input. From there, it will start executing the tests and send reports back telling you exactly what's broken. Compared with the typical coding accuracy of 42% from other coding agents, TestSprite reports a feature delivery accuracy of 93% with its MCP. If you're interested in trying it out, you can check out the video I made, or the link in the description for more details. Now, one of the cool features
that I noticed in Antigravity is the generated commit message. In source control, say I delete a bunch of end-to-end testing files, helper files, or test result images; I can click Generate Commit Message to write one based on the changes I've made. What's really cool is that before, I had to prompt an AI to generate this from the staged changes, or write it myself, but this time there's a shortcut button that is specific about exactly what changes were made. Here you can see: delete all the test results, the reports, and the associated tests, as well as the helper files for the end-to-end testing. In this case, I'm just going to commit this change and push it to the branch, and you can see it carries the commit message we just created with AI. Alright, so
pretty much that's it for the review of Google Antigravity. In this video we went over how to use the Agent Manager to spin up multiple sub-agents running at the same time in the background; we can have AI agents working on the same project or on different projects simultaneously. We also covered the browser agent from Google Antigravity, which can navigate to the application in a Google Chrome browser, verify the changes, take screenshots, and try to identify where an issue is and how best to fix it. And we went over how to select different models: we're not limited to the Gemini models, we also have the option to choose Claude and GPT models as well. Lastly, we went over a development workflow: provide the prompt, pass it to Antigravity in plan mode to generate a to-do list for how the AI agent is going to perform the task, have it execute the task with the model of our choice, then use the browser agent to test it and generate a report, and finally take that report and continue prompting to refine the application. So pretty much that's what I showed you in this video. If you found value in it, please like the video and consider subscribing for more content like this. With that being said, I'll see you in the next video.
Google Antigravity just released, and it might change how we code forever. In this video, I walk you through exactly how to set up Google Antigravity on your local machine and break down the three major features you need to know to master "vibe coding". We dive deep into the Agent Manager to orchestrate multiple AI agents, use the Agentic Browser to test UI changes in real-time, and swap between models like Gemini 3, Claude, and GPT. I also show you a full development workflow: from generating a plan and building the app to verifying it with the browser agent and generating AI commit messages.

━━━━━━━━━━━━━━━━━━━━━━━━━━
🔗 RESOURCES & LINKS

🚀 Get a special discount on your TestSprite subscription with my affiliate link!
https://www.testsprite.com/?via=eric

💬 Free Community & Support
Join our Discord: https://discord.com/invite/erictech

📅 Work With Me - New Projects
- Free Strategy Call: https://calendar.app.google/sB9KrJP6e8j3EPmd9
- Technical Consultation (Paid 1:1): https://calendar.app.google/BU9D589X3KNxnTeg6

🤝 Let's Connect
LinkedIn: https://www.linkedin.com/in/ericwtech/
━━━━━━━━━━━━━━━━━━━━━━━━━━

⏱️ Timestamps:
00:00 - Intro
01:34 - Installation & Onboarding Setup
02:14 - Feature 1: The Agent Manager (Orchestrator)
04:43 - Feature 2: Agentic Browser & UI Validation
05:32 - Multi-agents Dev Workflow
08:01 - Feature 3: Using Multi-Models (Claude, GPT, Gemini 3)
10:37 - TestSprite MCP
11:39 - AI Generated Commit Messages
12:25 - Wrap up

#GoogleAntigravity #AICoding #SoftwareDevelopment