Codex CLI might be the most powerful code editing tool that you didn't know you already had. But Claude Code has slash commands, agents, and planning. So, which one is actually better? Claude Code is the 500 lb gorilla that everyone has been talking about, including me. But can Codex CLI be just as good? By the end, you'll know which one fits your budget and your workflow. But more importantly, can Codex CLI be shaped to look a little bit more like Claude Code? That's the actual question we dive into. Let's get into it.
All right. So, if you've watched this channel at all recently, and in fact many of the channels around here, you've noticed Claude Code. Claude Code is a CLI application: you launch it in the terminal, and that's where you do your editing. It has a little more of a UI than your normal type-in-a-command terminal experience. I'm going to assume you've seen a bit of this in many other videos; if not, I have plenty that describe it, so go check one of those out.

Codex CLI is OpenAI's version of Claude Code, which is Anthropic's tool built around the Claude models. If you go to their GitHub repository, you'll see an easy way to install it. If you're on a Mac, you can use brew to install it, and you can use npm to install it as well. This is extremely similar to Claude Code. Same kind of thing.
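As a quick sketch, installation looks roughly like this; the package names below come from OpenAI's repo, so double-check the README for your platform:

```sh
# Install Codex CLI (either route)
brew install codex                 # Homebrew on macOS
npm install -g @openai/codex       # or via npm

# First launch; signing in with your ChatGPT account ties usage
# to your ChatGPT plan instead of API billing
codex
```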
What I do want to point out here, and this one is critical and something we'll talk about in just a minute, is that you can use Codex CLI with your ChatGPT plan. Basically, if you subscribe to ChatGPT, the 20-bucks-a-month kind of subscription, you have the use of GPT-5 as your coding engine included, period. They don't define other thresholds and limits anywhere. Admittedly, it's a little scary that they don't have these listed out and there's no easy way to see them; at least I haven't found one, and I've looked around. Let me know in the comments if you have a way to find out what the limits are, but as of the time of this, I couldn't find a way to determine where I was against any threshold. But there's a big plus to that. It sounds like what they're trying to say is: we're going to compete with Anthropic, feel free, have as much use as you want. And that's why I'm saying this thing might be the best coding tool that's just sitting back there waiting for you to use, with a fantastic coding model. GPT-5 is a really fantastic model.
And really this system, Codex CLI, gives you all of the opportunity to use it, but the devil is in the details of using it. So that's what we're really going to look into: what is it like to live with? Claude Code has done a lot to make itself very comfortable to use. Let's just say Codex CLI is lacking in some of those departments, but can we make it close? So, if you don't want to get an Anthropic account, you don't want to subscribe there, you might be okay with using what we've got here. Let's go take a look at what we might have to do.

Okay, let's get started. I'm going to kick these off in a rather special way: basically, I'm launching my advanced version of both of these. This one will be in high thinking mode for GPT-5; that's what we're seeing over here. And this one will end up in Opus mode, essentially the high mode for the Claude Code environment. I'm going to ask both of them to tell me about the project.
The first thing you'll see is all of the thinking and everything else that happens with Codex out in the clear, and it gets quite noisy trying to understand what it's doing. All of this thinking is not it reporting to us; it's not yet talking to us. In Claude Code this would be collapsed. At some point you will make the mistake of reading what it's saying as something it's about to do, whereas in many cases it hasn't actually made up its mind. If you follow the thread down two or three paragraphs, you'll realize it says, "Oh, I don't need to do that, I'll do this instead." I have actually interrupted the model multiple times because I thought it was going the wrong direction, but I didn't realize I was in a thinking block and not in a message block. You'll see the message block in a second when it comes back.

And here is what Claude Code gives us. It collapses all of its information inside of these blocks and creates bullets to give you placeholders for where it is doing work. It also handles markdown very elegantly, which, amazingly, Codex CLI still does not: as you can see with these bold markers and things like that, it's not doing anything for us.
So it makes it a lot harder to read, and I can't overstate this enough: this harder-to-read experience really ends up being a cognitive cost that you need to be sure you're okay paying. Yes, you might be able to use this for free. And let me TL;DR this right now: if you don't have an Anthropic account and you're not using Claude Code but are wondering about it, and you are a subscriber to ChatGPT, go try Codex CLI. Hopefully this video will give you a couple of quick steps on how to step into it and see what it can do. It is actually very good if you don't have something else; if you don't have Claude Code, then it's excellent. What you're seeing now is the report coming back, and it's very difficult to read through because it's raw markdown instead of markdown that gets cleaned up and rendered. All right, so that's number one: readability. Readability is very, very different between the two. As you can see, it gets very wordy.
Okay, the next thing is slash commands. If you've used Claude Code, you know that if you hit slash, you'll see a series of slash commands, and I'm using the arrow keys to move up and down through all of the different actions you can take. A lot of these are the default actions that come with the product itself, but there are also some that I have made myself, like project settings. I'm going to take that one now; you'll probably see this change colors in the background once it's written. Slash commands also exist over here, but the slash commands we see with Codex CLI are much fewer. In fact, that's all of them, and you cannot add your own slash commands; that's not something Codex CLI offers yet. So you can get by with the slash commands that come with these systems, and they have the same basic concepts behind them. But if you ever extend slash commands, or start to imagine that you might want to run your own commands, I'll show you how to approximate that in a moment; it is not nearly as simple as a slash command. Claude Code has done something very nice here by letting you create your own local slash commands in a project, which can be checked in with that project so everybody gets them, or kept local to you, or placed in your home directory so that all Claude instances get them. They have done a lot of work to make this very useful to a developer, and it's that kind of tooling that is really the distinction between these two systems.
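To make that concrete, a custom Claude Code slash command is just a markdown file containing the prompt. The file name below, project-settings.md, is a stand-in for the command I created; the locations are the standard ones Claude Code scans:

```sh
# Project-level command, checked in so everyone on the repo gets /project-settings
mkdir -p .claude/commands
cat > .claude/commands/project-settings.md <<'EOF'
Summarize this project's settings: build tooling, lint configuration,
environment variables, and anything a new contributor needs to set up.
EOF

# Personal commands live in ~/.claude/commands and show up in every project
```

Once the file exists, it appears in the slash menu under its file name.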
Okay, a very common slash command that both of them have is /init, so I'll kick off the beginning of an init on both of these. The memory file that you may have seen in Claude Code is called CLAUDE.md, and it gets saved in the project directory; over here it's called AGENTS.md. While they're going through this init, I can actually type a command in Claude Code, and if I send it while it's working, it will buffer that command. I have no such thing over on the other side. So as it's working through and you realize, oh, you know what I need it to do next, which often happens, or you know what else I need it to think about before it tells me it's done, which also happens very frequently, well, you cannot do that with Codex CLI at this point. You have to wait until it's complete, or interrupt it, give it that message, and also tell it to continue what it was previously doing. It will do that, so you can once again get roughly the same experience. But again, I'm going to say that magic word: ergonomics. It just doesn't feel the same. It feels like you're creating your own personal patch to solve the problem.
Okay, so very briefly, let's take a look at what these did. If we look in the Codex world, it created this AGENTS.md file, and if we look on the Claude side, it created a CLAUDE.md file. They're similar files; of course, they're markdown files with a bunch of information to help the system understand how the project works. I won't go into the performance of these. In Claude Code, we have a very neat way of adding memories: "always run lint." So I am memorizing, or really adding to our memory, this "always run lint." You can see it's asking: do you want me to log this into the CLAUDE.md file we were just looking at, or do you want a local version of that file that's essentially an addendum, or do you want it to go all the way up to your home directory and save it in a CLAUDE.md there? What I'll do is put it in what we call project memory, and that project memory is actually the file we were just looking at back here. Down here at the bottom, you'll see that our memorization has occurred; it's just been added to this CLAUDE.md file.
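The shortcut behind this is the hash prefix: start a message with # and Claude Code asks where to store the memory. A minimal sketch of the interaction and where it ends up; the exact picker wording varies by version:

```sh
# In a Claude Code session, a message starting with '#' becomes a memory:
#
#   # Always run lint
#
# Claude Code then offers, roughly:
#   Project memory           ./CLAUDE.md          (checked in with the repo)
#   Project memory (local)   ./CLAUDE.local.md    (just for you)
#   User memory              ~/.claude/CLAUDE.md  (all of your projects)

# Choosing project memory appends the line, which you can verify with:
tail CLAUDE.md
```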
There is no such concept over here in Codex; there's no memory shortcut that I've seen. But let's ask, because it might just do it. Let's see... oh, that's looking good: "I'll update AGENTS.md with always run..." Excellent. So maybe they just don't have an obvious way of showing you that this is doable. All right, great, so this sounds like another potential point of parity; we do have a way to affect memory here. There we go: "address the warnings." So it's saying always run the lint, like we asked, except they put it in their own language rather than exactly what we said. That's fine; they do have parity there, I just hadn't seen it. But that becomes part of the same challenge. Let's see if they have a hash mechanism. They don't. It's not obvious how you're supposed to patch this stuff in. The fact that Claude Code has a common mechanism, where you hit hash and it tells you it's adding it to memory, and you can just try typing these kinds of things in, is quite nice.
All right, let's talk about models and related information. If we run /status, we get some information on the status of both of these systems: where they're working, what account you're logged in with, these kinds of things. In both of them you'll see a section for the model itself. Here it's GPT-5 with reasoning effort high; I'll show you in a moment how I turned that on, which is non-trivial, and that's another topic we'll get to. And over here you can see the model I'm using is Claude Opus. So both are allowed to use a higher-tier model. And I will say, if you're just a ChatGPT subscriber, 20 bucks once again, you can get high-effort, high-reasoning GPT-5, which is a very smart thinking model. You cannot do that with the $20 version of Anthropic; $20 does not get you Opus, $100 a month gets you Opus. In theory, you have all of GPT-5 high. Now, admittedly, it's not Pro, but GPT-5 high, we believe, is at least on a par with Opus in performance. It is an excellent model, and I would say if this is already sitting in your back pocket and you haven't tried it, give it a shot.

But now that we have this info, what can we do over here in Claude? We could say, oh, I want to change my model, and if I go into the model picker, it will tell me I can use default, which uses Opus for certain things and otherwise Sonnet, or I can always be on Opus, or always on Sonnet. So you can set these different plans right there within the tool. There's no such idea over here in Codex. In Codex CLI, managing this kind of thing happens at the global level; you can't even do it in the local settings files of a project. You can't say, in this project I want you to run at a certain intensity of thinking. That's set at the global level.
All right, and this brings us to a pretty complicated conversation that I'm not going to dive too deeply into. If you all want to know more about configuring Codex CLI, reach out and maybe I'll make a video about it, but I bet not enough people want to hear about this yet. Basically, there's a way in Codex CLI to configure the system when it launches. And this is in my home directory's .codex folder; I know I'm in Cursor here, I apologize. The .codex folder is basically where Codex stores all of its high-level stuff from a personal-account standpoint, so this is my account's view of Codex. In here you'll see that I have things like model reasoning effort set to high. Model reasoning summary would be concise, but that's not yet supported for GPT-5. I am running danger-full-access, which is the thing that gets me into, essentially, yolo mode; I was telling you I'm in a special mode here, so no approvals, I don't need this thing asking me for approvals. And here's a tricky little way I've set up a sound effect, because I like sound effects when these things finish: there's a little function in here that it runs which just plays a sound. I won't go into that either.

But what you do see in here, which is very neat, is how they're trying to manage this: not at the project level, but at the launch level. You can launch this with different profiles using -p. If you do codex -p, you denote which profile you're looking for; you type the name after a space, and hi-yolo would set it to these values. These values are not there by default if I just type codex, but these are, so when I launch with a profile, it'll run with that profile. You can have as many as you like. So I think there's something nice in being able to say, in any given environment, I can drop out and launch into high mode to think pretty hard about the problem I'm having, or I can drop out and come back in normal mode so that I'm not burning all the extra thinking on something that medium thinking would certainly do just fine for.
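Here's a rough sketch of what my ~/.codex/config.toml is doing. Treat the key names and values as approximate and check them against the Codex CLI docs for your version; hi-yolo is just the name I gave my profile, and the sound command is an example rather than the exact one I use:

```sh
# Write a sketch of ~/.codex/config.toml (verify keys against your Codex CLI version)
cat > ~/.codex/config.toml <<'EOF'
model = "gpt-5"
model_reasoning_effort = "high"
approval_policy = "never"            # no approval prompts
sandbox_mode = "danger-full-access"  # essentially yolo mode
notify = ["bash", "-c", "afplay /System/Library/Sounds/Glass.aiff"]  # sound on finish

[profiles.hi-yolo]
model = "gpt-5"
model_reasoning_effort = "high"
approval_policy = "never"
sandbox_mode = "danger-full-access"
EOF

# A plain launch uses the top-level values; -p picks a named profile
codex
codex -p hi-yolo
```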
It is complex to get to this point. I understand that only a very small sliver of people will ever get to the point of editing these files, but this is where you're really creating a system that works for you. If you want something that feels like Claude Code, you will be in this space, so understand that's part of the equation as well. However, remember I said you drop out and change your profile to launch back into a different model, if you will? That's a problem. Let's take a look at that.

Okay, so you remember how we were saying you could just drop out of a session, change your model in Codex, and restart your session; you could change your codex -p to use a different profile. You noticed I didn't have any extra profiles in there, but I might have a low profile or something. I could kick off that different profile and then go ahead and launch Codex. Well, the catch is that there is no way in Codex at all to resume a conversation. The context is gone. Completely gone. In Claude Code, there's a -c flag that will continue the last conversation. So I'm going to run ycc (yc is how I launch mine into yolo mode), and that gets us right back to the last conversation, loaded in here. You can also run resume, and when you select resume, there are different conversations you can pick from, so you can move to exactly the conversation you'd like to get into. So there are a lot of ways in Claude Code to work through the history.
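For reference, the built-in ways back into a Claude Code conversation look like this (ycc is just my own shell alias layered on top):

```sh
claude -c          # continue the most recent conversation in this directory
claude --resume    # pick from a list of past conversations
# typing /resume inside a running session opens the same picker
```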
If you remember, inside that .codex folder in my home directory, you can see all these sessions. These are the sessions we've been having with Codex, and it is writing the history for them. Since it's obviously writing the history, but we can't continue from one of them, I got interested and decided to write my own viewer to see if we could quickly look through the different files in that folder. This is what that viewer looks like when browsing that information.
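If you just want to peek at that history yourself without writing a viewer, something like this works, assuming the sessions are stored as JSONL files under ~/.codex/sessions; the exact layout may differ in your version:

```sh
# Find the most recently written session file and skim the end of it
latest=$(find ~/.codex/sessions -type f -name '*.jsonl' -print0 | xargs -0 ls -t | head -1)
echo "Latest session: $latest"
tail -n 5 "$latest"
```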
So the history exists; for some reason, Codex just doesn't let you grab even the most recent one and pull it in as current context. This is another really major oversight: if they left the feature out but are keeping the history, something's off. It feels very strange that this would have been left out, because it is a truly helpful thing to be able to step out of the tool, especially if you can change your profile, and then go back in exactly where you were. I mean, all of us know from ChatGPT that you can go get an old chat and continue it anytime later. That's invaluable in many cases.
All right, let's cover two really big ones. They're pretty dense, and I've done a little on each of them in previous videos, so if you want to find out more about those, check them out. One of them is almost impossible to actually replicate, maybe truly impossible, and the other can be replicated, but with some pain. So let's do that one first. The first one is planning mode. Planning mode is something you get by hitting shift-tab here; as I shift-tab, there's an indicator down at the bottom showing which mode I'm in, and I can get into planning mode. Now anything I ask of Claude Code, it will just tell me back. It doesn't actually write any files; it doesn't have write privileges. In fact, if I tell it, hey, can you please write this to a file, it will come back and say, oh, that's pretty cool, you're asking me to write something to a file; I'd love to do that, but I don't have rights. Do you want to change the mode so that I can write it? So you can switch out of the mode, then ask it to write, and it'll write it just fine.

That leads us to how you might do the same thing elsewhere, because there is no such mode in Codex. Now, admittedly, if you have Codex, that means you have ChatGPT; you likely do anyway, since everyone has some version of ChatGPT. I would say go use ChatGPT for your planning if you need to. It's just an extra space, which is why it's kind of nice that it's built into Claude Code. And if we're trying to see whether we can daily-drive this thing and make it relatively the same, how do you solve this problem?
Well, one way you can solve this, and I've done this, so I'll describe it briefly, is to ask the thing to just give you a readout: just give me a report back, don't make any changes to the code. Say that up front, and maybe say it again at the end. That's part of the complexity: it could start kicking off and doing some coding, so be ready on your escape key to stop it. Then, once you get a plan that you like (going back and forth with it is a little more challenging, because it will kick into build mode at some point), you can have it write the plan to a file. If you get it written to a file, that's the file you can have a conversation about, and it will then keep writing to that file. So if your conversation is iterative, it'll be updating that file, because it knows that's what you're talking about rather than the whole project. I know that sounds strange, but eventually, when you want to run that plan, you can just say, hey, can you read this file and let's execute it; I want to build what's inside that file. That does work quite well, and it's nice to have the plan as a file somewhere you can touch later. The same thing can be done in Claude Code too, whether or not you want it: you can shift into plan mode, get the plan, shift out of plan mode, and say write this as a file. You don't have to, though; if you just say go ahead and start executing on it, it has internal tools to keep track of it as a to-do system, et cetera. So that's one way to approximate planning. You can also use that file trick for things like slash commands: write the prompt that you would put in a slash command into a file, and then inside your project you can say, hey, can you run the command that's in this file? Basically the same kind of thing will happen; maybe not perfectly the same, but roughly the same.
The last one is agents, and there is just no analogue in the Codex world for agents; there's nothing we can do about it. Agents in Claude Code are really wonderful. Used correctly, they're very valuable, but they have a very focused usage. The concept of an agent is basically being able to run a special prompt, and that prompt goes through whatever execution it needs to do in its own little sandbox of memory. What that really means is I can peel something off that says go run lint, clean everything up, and then check it all in, and hand that to an agent. That agent has the same access to the project as this normal chat would, but everything it does, and the very long context that might come from running and iterating on all the test cases and everything else, will not pollute this context. So you might think of it as decoupling an action onto another context that you can pass information into and get information back out of. There are a lot of really interesting uses; lint cleanup isn't the greatest use case, though it works. The real point is the separation of context; that's one of its major features, being able to separate that context.
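For the curious, a Claude Code agent is defined as a markdown file with a little frontmatter under .claude/agents/. The lint-fixer name and prompt below are a made-up illustration of the lint example, not something from the video:

```sh
mkdir -p .claude/agents
cat > .claude/agents/lint-fixer.md <<'EOF'
---
name: lint-fixer
description: Runs lint, fixes the warnings, and commits the result.
tools: Bash, Read, Edit
---
Run the project's lint command, fix every warning it reports,
re-run lint until it is clean, then commit the changes.
EOF
# Claude Code can now delegate to this agent (or you can ask for it by name),
# and its long lint-and-fix context stays out of your main conversation.
```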
All right. So, there are other things in Claude Code that I didn't get to, subtasks and things like that, that there's just no good way to do; it's not worth even having a discussion about them. But if you're just generally trying to figure out how to weigh these two against each other, I think I've covered enough ground to come to, drum roll please, a conclusion.

Okay, I don't think I held anything back in all of that, so you can probably guess my conclusion to some degree. If you have a subscription to ChatGPT, you already have Codex CLI, and you already have access to GPT-5 at high reasoning effort. This is great; as a $20 subscriber you don't get that otherwise without going through the API. So it's a really valuable exercise to just dive into it: point it at a folder, and you don't even have to write a program, you can just put a file in there and say, hey, let's edit this file, and watch GPT-5 work with you to edit it. So there is real value here. It's a little bit painful, though. If you're going to use it as your daily-driver editing experience, I would say at this point... I mean, is it possible? Yes. And if you're trying to save 20 bucks, so instead of spending 40 bucks you only want to spend 20, then absolutely: there are a lot of these little mitigations you can put in place, and you'll get used to them.
There's no way I'm giving up my Claude Code just yet for that, though. Honestly, my actual take on it: I wanted to be able to adopt Codex and say, yeah, it's essentially the same thing with some addendums. And it really is not even very close. In fact, the very first thing I shared, and I shared it first because I figured a lot of people might not make it this far in, is that the readability and the cognitive cost of using it is much, much higher; an order of magnitude higher, easily. And that alone means I would not want to use this on a daily basis: even if it made me a little more efficient, maybe even five times more efficient, I'd still feel the pain of having to interact with this system. The pleasure of working with Claude Code, if you've used it, you know. So it's kind of an if-you-know-you-know situation.

All right, I hope this helped. I didn't get to every single item, of course; there are just too many things, and a lot of them simply don't have analogues. Hopefully this was enough to answer, if you were trying to use it from a day-to-day standpoint, how does it feel? If you like this kind of stuff, please subscribe; it tremendously helps the channel, and I really do appreciate those of you who have recently subscribed. That's fantastic, it really does help. Once again, thanks for coming along for the ride on this one, and I'll see you in the next one.
Codex CLI might be the most powerful coding tool you already have access to, but how does it really compare to Claude Code? In this video, I put Codex CLI (OpenAI) and Claude Code (Anthropic) side by side to see how they perform in real-world coding scenarios.

We'll look at:
- Readability and markdown rendering
- Slash commands and customization
- Memory handling and workflow ergonomics
- Model access (GPT-5 vs Claude Opus/Sonnet)
- Planning mode, agents, and daily usability

By the end, you'll know whether Codex can replace Claude Code in your daily workflow, or if the extra polish and tooling from Claude makes it worth the upgrade. This isn't just about speed and features; it's about the cognitive cost of working with each tool day in, day out. If you're already a ChatGPT subscriber, Codex CLI might be sitting in your pocket ready to use. But is it enough to ditch Claude Code? Let's find out.

#AI #Codex #Claude #ChatGPT #AItools #CodingWithAI #OpenAI #Anthropic #ClaudeCode #CodexCLI

00:00 - Intro
00:38 - What is Codex CLI
01:06 - Installing
02:58 - Readability
05:19 - Slash Commands
06:43 - Init
07:06 - Buffer commands
07:48 - Memory
09:43 - Models
11:36 - Configuration
14:07 - Resuming
16:13 - Agents and Planning
16:36 - Planning Mode
19:06 - Agents
20:36 - Conclusion