the updAIt - 02.04.26

February 4, 2026

A brief look at OpenClaw, Google Antigravity, and Codex

Words have a way of picking up new meaning faster than we expect. Anyone who knows me knows about what some would call an “unfortunate addiction” (I wouldn’t, but some would) to the Chicago White Sox. This isn’t about them—hang in there.

On December 9th, the White Sox landed the first overall pick in the 2026 draft. The expected choice is a UCLA shortstop named Roch Cholowsky (pronounced “Rock”). Almost immediately, “Roch Bottom” started circulating as a punny joke. For me, “Rock” bounced from Cholowsky, to Dwayne, to something else entirely: the Family Guy bit where Peter sings “Rock Lobster” by the B-52s.

For a while, that’s all my brain did with it. Less than a month later, “Rock Lobster” had my mind on a completely different topic. From Clawdbot, to Moltbot, to what’s now called OpenClaw, a new domino in AI has fallen—and most people aren’t aware of it yet.

Most conversations about AI are still framed around prompts. Ask a question, get an answer. Ask again, refine, repeat.

That model is already starting to break.

What’s emerging now isn’t just smarter responses, but systems that can do work, keep working, and get better at it over time. Instead of waiting to be asked, these systems are beginning to operate continuously. The shift isn’t subtle, but it’s easy to miss if you’re still focused on chat windows.

OpenClaw, Antigravity, and the Codex app each point at a different part of this transition. None of them are finished products. But together they outline where AI use is heading.

OpenClaw: From Requests to Ongoing Work

OpenClaw represents a real break from how most people currently use AI.

Instead of typing a prompt and waiting for a response, OpenClaw is built around a system that can run 24/7 and interact with your computer the way a person would. Rather than something you talk to, it’s better thought of as an expert you assign work to—one you can give direction, step away from, and come back later to see progress.

The system can work toward defined goals and take in new information as it becomes available, adjusting how it works and identifying follow-on tasks without needing you to be present the entire time. The significance isn’t that it can do everything. It’s that it can become very good at specific things while you’re doing something else.

The Mac Mini demo that went viral mattered less for what it showed on screen and more for what it suggested: an AI that doesn’t shut off when you close the window.

On This Week in Startups, linked below, Matt Van Horn said systems like this don’t need to be perfect to be useful. They just need to repeat work, learn from outcomes, and keep improving. When you remove sleep, memory limits, and constant re-prompting, expertise compounds differently.

Antigravity: Control Catching Up to Capability

Google Antigravity is moving toward the same destination, but from a more structured starting point.

Where OpenClaw emphasizes flexibility and local execution, Antigravity focuses on breaking work into parts, coordinating reasoning across systems, and putting limits around what an AI is allowed to touch.

It’s less visible and less experimental, but it’s a strong signal of how large organizations are thinking about this shift. The goal isn’t just AI that can act. It’s AI that can act within boundaries that make sense at scale.

Codex App: Productivity Without Autonomy

Codex sits at a different point on the spectrum.

It doesn’t try to operate on its own. It helps people who already understand their work move faster by keeping context, reducing back-and-forth, and lowering the cost of iteration.

That restraint matters. Codex shows that AI doesn’t need autonomy to be valuable. In many environments, avoiding autonomy altogether is the security model. That’s why tools like this will coexist with more powerful systems rather than being replaced by them.

Where This Starts to Combine

More advanced users are already combining these approaches rather than treating them as separate choices.

OpenClaw provides capability, but very little protection on its own. Systems like Antigravity emphasize structure and limits, even if that reduces flexibility.

In practice, this shows up as a split: one layer focused on reasoning and constraints, another responsible for carrying out the work. That separation isn’t mainstream yet, and it isn’t clean—but it’s a predictable response when capability moves faster than safety.

Security Risk: This Part Is Being Rushed

This is the part that’s easiest to gloss over.

Systems like OpenClaw can do real work on real machines. If they’re given access they shouldn’t have, or instructions that aren’t fully thought through, they will carry them out exactly as described.

That includes reading files they shouldn’t, running tools they shouldn’t, and making changes that are difficult to unwind.

Today, the responsibility for managing that risk sits almost entirely with the user.
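What “managing that risk” looks like in practice is the user drawing boundaries themselves. None of these tools publish a standard interface for this, so the following is a generic sketch, not OpenClaw’s actual API: a small gate that an agent loop would have to pass before executing any tool call, with a hypothetical tool allowlist and an approved working directory.

```python
from pathlib import Path

# Hypothetical containment gate -- a generic sketch of the kind of
# limits users are responsible for today, not any tool's real API.
ALLOWED_DIRS = [Path("/home/agent/workspace").resolve()]
ALLOWED_TOOLS = {"read_file", "write_file", "run_tests"}

def authorize(tool: str, target: str) -> bool:
    """Refuse any tool not on the allowlist, and any path that
    resolves outside the approved directories (including ../ escapes)."""
    if tool not in ALLOWED_TOOLS:
        return False
    resolved = Path(target).resolve()
    return any(resolved.is_relative_to(d) for d in ALLOWED_DIRS)

# An agent loop would check authorize() before acting:
authorize("read_file", "/home/agent/workspace/notes.md")        # allowed
authorize("delete_repo", "/home/agent/workspace")               # unknown tool: refused
authorize("read_file", "/home/agent/workspace/../.ssh/id_rsa")  # path escape: refused
```

The point of the sketch is that the check lives outside the model: the agent can ask for anything, but only pre-approved actions against pre-approved locations get carried out.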

Who This Is Actually For Right Now

Despite how it’s often marketed, OpenClaw is not something most people should be running today.

Using it well requires comfort with system access, permissions, limits, and containment. Codex is usable now. Antigravity is aimed at organizations. OpenClaw is for people willing to trade simplicity and safety for leverage.

That won’t always be true—but it is right now.

Where This Leads

This doesn’t stop at productivity gains.

Systems that can work continuously, improve on their own, and operate across tools will replace real portions of real jobs. Not all at once, and not evenly—but gradually and then suddenly.

At first, they make strong workers much stronger. Then they reduce the need for entire layers of coordination, follow-up, and first-pass decision-making. Over time, some roles shrink. Others disappear.

The pattern is familiar. Early adoption by people who can manage complexity, followed by products that hide that complexity and bring the capability to everyone else.

Google, OpenAI, and others are moving in this direction. OpenClaw simply shows the raw version before it’s been packaged safely.

The real shift isn’t that AI is answering your requests better, but that eventually, you won’t make those requests at all.
