AI coding agents are really incredible right now. You point Claude Code or Cursor at a massive codebase with thousands of files and millions of lines of code, and it just figures it out, right? It goes through directories. It reads the right files. It understands how everything connects. And for the most part, it seems pretty impressive, right? But what if I told you that all of that searching, all of that reading, all of those tokens you're paying for, is one of the most inefficient ways of doing things? What if the AI has been doing things the hard way this entire time because we just didn't know any better?
Because guess what? AI coding agents just got access to something they've never had before. It's a new superpower: a system that your IDE has been using for many, many years while your AI was out there grepping and searching like a caveman. Claude Code just shipped with native LSP support, and suddenly everything you thought was impressive now looks very, very primitive. Let me show you what changed and why this might be a lot bigger than it sounds.

So here's what happened. On December 20th, Anthropic quietly released a new version of Claude Code, and the changelog is one line: LSP tool support. What exactly does that mean? Let me ask you something. You're working on a codebase in your IDE, right? You're looking at some code, you see a method being called, and you need to see what that method actually does by reading its implementation. What do you do? You Ctrl-click the method name, or Cmd-click if you're on a Mac, and you jump straight to the definition. The IDE takes you there even if it's in a different file, a different package, or a different module entirely. Your IDE just knows where that method is. Or maybe you're debugging something. You need to see all the places where a certain method is being called. What do you do? You right-click and pick Find All References, like in VS Code, and you get an instant list of every reference. Or you hover over a variable and the IDE shows you the type, the docs, the signature, all that stuff, right? You do this so many times a day without thinking about it. That Cmd-click or Ctrl-click is muscle memory for programmers by now. That is LSP, the Language Server Protocol, and it's what powers all of this magic in most IDEs.
And here's the thing: your AI assistant couldn't do any of this, until now. If Claude had to figure out where a function was defined, it didn't have that Cmd-click or Ctrl-click superpower. It was literally searching through all the files: grep, read, grep again, read more, all the while burning tokens and hoping it found the right thing. Meanwhile, your IDE just knew: one click, instant answer. But now Claude Code has that same superpower. Claude Code can Ctrl-click too, metaphorically speaking. It can ask where a certain method is defined and get an instant, precise answer, so it doesn't have to grep for a lot of these use cases. No more searching through entire files and hoping it'll find what it was looking for.

But how does this work? How does your IDE just know where everything is to begin with? There's a separate program running in the background called a language server, and it's constantly analyzing your code. It parses every file and builds a complete map of your project. It figures out what's defined where, what calls what, what types flow through where, and how everything connects together. It's like having an obsessive librarian who catalogs and memorizes your whole codebase and keeps track of everything. You can ask it anything. Where is this particular method defined? It'll say: line 42 in the user service file. Who calls this function? Well, here are the seven places it's used. So it basically keeps track of everything.
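To make that "map of the project" idea concrete, here's a toy sketch in Python. The data is invented, and a real language server indexes the parsed syntax tree and updates it as you type, it doesn't store a table of strings like this, but the shape of the answers is the same:

```python
# Toy model of a language server's index: one lookup table for definitions,
# one for references. Real servers derive this from parsed, type-checked code.
from collections import defaultdict

index = {
    "definitions": {"get_user_by_id": ("user_service.py", 42)},
    "references": defaultdict(list),
}

# Pretend the server recorded these call sites while analyzing the project.
for location in [("api.py", 10), ("admin.py", 88), ("tests.py", 5)]:
    index["references"]["get_user_by_id"].append(location)

# "Where is this defined?" and "who calls it?" become instant lookups.
print(index["definitions"]["get_user_by_id"])       # ('user_service.py', 42)
print(len(index["references"]["get_user_by_id"]))   # 3
```

That's why the answers feel instant: the expensive analysis already happened in the background, and each question is just a lookup.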
Instant answers. No more searching required, because the language server already knows. IDEs have had this kind of intelligence for a long time. IntelliJ has known where your Java methods are defined; VS Code knows where your TypeScript and JavaScript methods are defined. But here's the problem: every IDE had to build this from scratch. Take IntelliJ and Eclipse, both working with Java codebases. Both of those teams had to implement the exact same cataloging logic to figure out where things are in a Java codebase, and they had to do it separately, from the ground up. Now multiply that by every language: Java, Python, JavaScript, Go, Rust. Then multiply that by every editor: the same work has to be done for VS Code and Vim and Emacs and Sublime Text. You get the idea. It's a bit of a mess. And
this is the reason why, in 2016, Microsoft said: what if we just standardize how these tools ask these questions? Create a common protocol, a language that any editor can speak to any language server. You want to build an editor? If there's an existing language server for Java, you just call that language server, and your editor now has Java support. That is LSP, the Language Server Protocol. So there's rust-analyzer for Rust, Pyright for Python, typescript-language-server for TypeScript. These are language servers that get built once, and any editor can use them to scan a codebase and understand where things are. Any tool that speaks the protocol can use them. Your IDE uses them. VS Code uses them. Vim uses them. And now Claude Code can use those same tools.

Let's make this concrete. What does Ctrl-click or Cmd-click actually look like for Claude? Say you're working with Claude Code. You ask it to refactor something, or trace a bug, or even just to understand how a piece of code works. Before LSP, Claude would have to grep through your whole project. It had to read all the files.
It had to search for patterns. Say you're looking for a method like getUserById. You find getUserById in some file; now you have to read the whole file and understand it, search for other instances of that token, and so on. It's not terrible, but it's like feeling around in the dark. Now, with LSP, Claude can just ask the language server: where is getUserById defined? And it gets an instant answer, just like an IDE would give you: the file, the line number, the signature. Who calls this function? The language server gives you the list: every caller, with its exact location. Everything you'd expect an IDE to do. No more searching, no more reading entire files hoping to find the context. Now Claude Code can get instant, precise answers from a system that actually understands your code rather than just searching it. This is huge, right? You get faster responses: less time spent searching, more time spent actually thinking. You get lower token usage, and therefore lower cost. And I think it improves the effectiveness and accuracy of the coding agent: no more searching and potentially missing things. So Claude isn't just coding by searching anymore. It's coding with the same superpowers your IDE has had for years.

Now, speaking of IDEs, I can't talk about the Language Server Protocol without talking about the elephant in the room: JetBrains. A lot of us work in Java, on IntelliJ. For the longest time, JetBrains, the creators of IntelliJ IDEA, have resisted the Language Server Protocol. They've had their own system, and honestly, I think it's a little more powerful than LSP, because they've custom-built their system specifically for Java, whereas LSP is a more generic protocol that has to fit any language. And JetBrains' refactoring tools are very popular, right? You can do a lot of cool stuff with the support that the IDE gives you.
But here's what happened. While JetBrains was building their walled garden, the entire ecosystem outside of JetBrains standardized on the Language Server Protocol. So now AI coding tools are building on top of LSP, which isn't available in IntelliJ IDEA. Claude Code now uses LSP. Cursor is built on VS Code, which is kind of the home of LSP; VS Code has very good support for it. Other tools like GitHub Copilot and Windsurf all live in editors that do speak LSP. Meanwhile, JetBrains were working on this thing called Fleet, their VS Code competitor. They worked on it for years, then killed it, and now they're working on something else, and I don't know what's going to happen to that. But they're in a place where LSP doesn't quite fit. Anyway, I don't want to get too deep into the details here; just know that for IntelliJ, language servers don't really work, and while the Language Server Protocol is an open standard, IntelliJ's code intelligence is proprietary. The AI tools are building on the open standard.

Now, why does this actually matter? Claude Code now uses LSP, cool, but why should we care? There are two benefits: one is cost, and the second is accuracy. Imagine you ask Claude Code to change a certain method in a certain file, find all its references, and update them. By the time Claude has figured out what to change by searching through all those files, it has burned through thousands of tokens just on reading, and tokens are money, right? Your money. This kind of searching can take 20 or 30 seconds on large projects, and it burns through money really, really quickly. But with LSP integration, Claude now asks the language server: where is getUserById defined? And 15 milliseconds later the language server responds: it's in this file, at this line number. Claude asks where all the references are, and it gets an instant answer: the exact locations. No grep, no searching through entire files to figure out the context, just precise, deterministic answers. And early reports suggest that refactoring tasks that used to consume anywhere from 50,000 to 100,000 tokens can now be done with 5,000 to 10,000 tokens. That's a 10x reduction, by the way, not a 10% reduction. That's a huge difference.

And it's not just about cost. It's also about correctness. When Claude greps for function names, it's doing string matching. It might find the actual function definition. It might also find a comment that mentions that function. It might find a variable called something like getUserById_backup, a leftover from a rename that isn't used anywhere. And worst of all, it can find the same name in a completely different module, because string matching does not understand scope. When Claude finds a name, it can't tell whether two things with the same name are actually the same thing as far as the language is concerned, so it has to burn more context to figure that out. But the language server does know, because it actually compiles your code. It knows which function is where, what a variable does, what its type is, what scope it's in, what's imported, what's inherited. All of that is known, right?
LLMs can hallucinate, but LSPs don't.
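You can see the string-matching problem in a few lines of Python. This is a made-up snippet of source code, but the point generalizes: a text search can't tell a definition apart from a comment or a dead leftover name.

```python
# A toy illustration of why plain text search is a weaker signal than a
# language server's semantic index: grep-style matching can't tell a
# definition from a comment or an unrelated, similarly named variable.
import re

source = """\
# get_user_by_id was slow before caching   <- just a comment
def get_user_by_id(user_id):
    return db.fetch(user_id)

get_user_by_id_backup = None               # renamed leftover, never called
"""

matches = [n for n, line in enumerate(source.splitlines(), start=1)
           if re.search(r"get_user_by_id", line)]
print(matches)  # [1, 2, 5] -> three hits, but only line 2 is the definition
```

A language server, working from the parsed and resolved code, would return exactly one definition here; the text search returns three candidates that something, usually the model, then has to disambiguate.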
That's the benefit. Now, before you think I'm just singing Anthropic's praises for this integration, let me share some of the criticism, because there's one drawback that's pretty insightful. José Valim, the creator of Elixir, had this to say: LSP APIs are awkward for agentic use because they require passing file:line:column. You can't simply ask, "tell me where foo.bar is defined." Let me explain why this is such an important distinction. Remember how LSP works: it's a protocol designed around your cursor. Here's a typical flow. You're in VS Code, your cursor is sitting on a function, say getUserById, and you Ctrl-click it. What actually happens behind the scenes? Your editor says to the language server: hey, the user's cursor is in this particular file, at line 42, column 15; what's there? And the server responds: that's a reference to getUserById, which is defined in this other file, at line 70, column 4. Notice what's happening here. The input is coordinates, file coordinates, and the output coming back from the language server is also coordinates. Everything here is based on where your cursor is pointing.
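For reference, here's roughly what that exchange looks like on the wire. This is a hand-written sketch based on the LSP specification, simplified; note that LSP positions are zero-based, so the editor's line 42, column 15 becomes line 41, character 14 in the payload:

```python
import json

# The editor's "go to definition" request: the input is pure coordinates.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": {"uri": "file:///project/src/handlers.py"},
        "position": {"line": 41, "character": 14},  # where the cursor sits
    },
}

# The server's answer: also pure coordinates, a location rather than a name.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": [{
        "uri": "file:///project/src/user_service.py",
        "range": {"start": {"line": 69, "character": 4},
                  "end": {"line": 69, "character": 17}},
    }],
}

# On the wire, each JSON-RPC message is framed with a Content-Length header.
body = json.dumps(request)
frame = f"Content-Length: {len(body)}\r\n\r\n{body}"
print(frame.split("\r\n")[0])
```

Notice there's no field anywhere for "the symbol named getUserById." The protocol's unit of conversation is a position in a document, which is exactly the mismatch the critique is about.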
LSP uses document URIs and cursor positions rather than abstract syntax trees and compiler symbols. That was a deliberate design choice, by the way, because they wanted to keep the protocol language-agnostic. And it makes perfect sense for an IDE: you click somewhere, you want to know about that spot. The whole interaction model is, I'm pointing here, tell me about here. But now think about an AI coding agent. Claude Code doesn't have a cursor. It doesn't point at things. What Claude wants to understand is a concept in your code. It needs to find the doAuthentication function; that's a concept. It's a class name: where is the User class defined? What calls this particular logic? These are semantic, symbol-based questions, not coordinate-based questions. So what does Claude Code have to do to use LSP? Well, first it has to find the thing it's asking about, which usually means grepping or searching through files to locate it. Then, once it has the file, it works out the line number and column number, and only then can it ask the LSP for details. So it may not need to grep everything, but it still has to grep to find that first entry point into the codebase, that first file coordinate. Compare that to what would actually be ideal for an AI. If an AI could just ask, hey LSP, where is the function getUserById in this project, if it could ask by name, by concept, and get an answer, that would be great. But LSP doesn't work that way. It wasn't designed for that.
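So today the agent has to bridge that gap itself. Here's a minimal sketch of that bridging step; `symbol_to_position` is a hypothetical helper for illustration, not a real Claude Code API, and a real agent would use smarter search than a first-match scan:

```python
# The extra step an agent needs before it can talk to an LSP server:
# locate the symbol textually, then translate the hit into the zero-based
# line/character pair that LSP requests expect.
def symbol_to_position(source: str, symbol: str):
    for line_no, line in enumerate(source.splitlines()):
        col = line.find(symbol)
        if col != -1:
            return {"line": line_no, "character": col}
    return None  # symbol not found textually; the agent falls back to search

code = "import db\n\ndef get_user_by_id(uid):\n    return db.fetch(uid)\n"
pos = symbol_to_position(code, "get_user_by_id")
print(pos)  # {'line': 2, 'character': 4}
```

Only after computing that position can the agent issue a coordinate-based request like textDocument/definition, which is exactly the awkwardness Valim is pointing at.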
It was designed for a human with a mouse and a keyboard pointing at things. So what we're doing here is essentially bolting cursor-centric APIs onto today's AI agents. We're adapting tools that were designed for mouse-and-keyboard use to work for language models and coding agents. And it works, for the most part. It's definitely better than grepping, because once Claude does find something, it can get precise information about it all the way through. But is it optimal? Probably not. There's a bit of an impedance mismatch between how the LSP protocol thinks and how the AI thinks.

There are a few other criticisms as well. There's some criticism of how Anthropic has implemented this, because the whole plugin system is pretty new. Some security researchers have raised concerns about the lack of version pinning for these plugins and about missing checks on language server installations, and honestly, it's a little buggy. But this is fresh. It's early-adopter territory. If you're the type who likes to wait for version 2.0 of anything, you might want to give this a few more releases before you adopt it.
But here's the critique that I think matters the most. LSP mostly gives you read operations. You can look up where things are, but you can't really change things, not semantically. (The protocol does define a rename request, but it doesn't cover the richer refactorings.) Take JetBrains' refactoring tools: they can rename a method and guarantee that every reference updates correctly. They can extract an interface and guarantee that the code still compiles. They can even inline a function and guarantee that the behavior is preserved. Those are write operations, semantic write transformations, and Claude Code doesn't get them from LSP. Now imagine how much smaller the context would be for a tool that can semantically rename a function rather than editing hundreds of files one by one. Think about it: if Claude has to rename a function today, what does it do? First it finds all the references, which is where LSP can now help. And then? It has to open every single file and edit it. It still has to do the search and replace, and then hope it didn't miss anything. A proper refactoring engine would just do it: one command, guaranteed correctness. Microsoft has something like this for C#, and JetBrains has it, but that's proprietary software, and AI coding tools don't have access to it yet.

So let's zoom out and look at what this actually means for us, for the industry, for AI tooling. I think this is the beginning of a trend: AI tools are going to keep absorbing the capabilities that made traditional IDEs powerful.
I think LSP is just the first step. I expect to see AI agents gain access to more of this: debuggers, profilers, type checkers, even build systems, all the infrastructure we've built over the decades for our tooling. People are going to start handing these tools over to coding agents, and the agents that figure out how to leverage the existing ecosystem will pull ahead. For traditional proprietary IDEs, it's going to be stiff competition, and I think JetBrains knows this. That's the reason they killed Fleet and are pivoting to this thing called Air, their new agentic development environment. They're trying to catch up, and I'm curious to see what they do with it. Whether the traditional IDE vendors can move fast enough, or whether AI-native tools absorb their features first, remains to be seen. But as far as using Claude Code is concerned, your AI assistant just got a little smarter and cheaper. Keep the critique in mind, though. We're still in the retrofit phase, adapting human tools for AI use. I think the really interesting stuff happens when someone builds protocols designed from the ground up for how AI actually thinks about code. That's the breakthrough that's still waiting to happen. I'd love to see where the industry goes from here.