the updAIt - issue 2


📌  A quick note before we start

AI talk can sound like a foreign language. So throughout this article, wherever I use a technical term, I've added a "Jared, What Are You Talking About?" box right below it — written in plain English, no background needed. You'll spot them by the 🦞 lobster. Use them here, and keep the plain-English definitions handy whenever you're reading any other AI article going forward.


Since My Last Article

In my last article, I said AI was moving from a tool you talk to, into a system that goes and does things on its own. I pointed to OpenClaw as proof that this wasn't a future thing — it was already here. I said the big players would chase that direction fast. I may have underestimated how fast.

OpenClaw was built by one guy, Peter Steinberger, as a side project in November 2025. Within weeks it had nearly 200,000 stars on GitHub and 2 million people using it every week. Then the chaos started.

OpenAI, Meta, and Microsoft all came calling. Satya Nadella called Steinberger directly. Mark Zuckerberg messaged him on WhatsApp. On February 15th, 2026 — in less than 60 days from when OpenClaw went viral — Sam Altman announced Steinberger was joining OpenAI to build the next generation of personal agents.

The same day that announcement dropped, Meta launched Manus Agents — their own autonomous agent product — and Moonshot AI (a major Chinese AI company) launched Kimi Claw, a browser-based version of OpenClaw that runs in the cloud with 5,000 pre-built skills and 40GB of storage. Both launched within hours of the OpenAI announcement. That is not a coincidence. The industry heard the same signal everyone else did and moved immediately.

And for those keeping score on whether I was right that the industry would sprint toward what OpenClaw represented — here's a short list of what got built in its wake in less than 60 days:

  • Kimi Claw — Moonshot AI's browser-based version. No install needed, runs 24/7 in the cloud, 5,000+ pre-built skills. Launched the same day as the OpenAI announcement.
  • Manus Agents — Meta's answer. Launched the same day. Integrating into WhatsApp, Facebook, and Instagram by mid-2026.
  • Quill — Positioned as the "safe enterprise version of OpenClaw" — built for legal, finance, and healthcare where you need a human approving each step before it runs.
  • NanoBot — A stripped-down version. 4,000 lines of code versus OpenClaw's 430,000. Install it in 60 seconds. Built to reach platforms in China like WeChat and DingTalk that OpenClaw doesn't support.
  • ZeroClaw, NullClaw, PicoClaw, IronClaw, TinyClaw — A wave of open-source spin-offs each solving a specific problem: running on cheap hardware, stronger security, coordinating multiple agents at once, better long-term memory.
  • AstrBot — Built specifically for the Chinese market with 800+ plugins for local platforms.
  • White Claw — Okay, not real. But honestly, at this point it's useful for talking to people who bring up AI at parties.

One weekend project. Less than 60 days. A $2 billion acquisition by Meta, an OpenAI hire, a Perplexity platform launch, and more copycat products than I can fit in one list. That's not a trend. That's a market telling you exactly where it's going.


The Problem Nobody's Talking About: Token Burn

Most people thinking about AI tools are focused on the wrong number. They ask what something costs per month. The better question is: what does it cost per task?

Every time an AI agent does something — reads a document, writes a sentence, searches the web, sends a message — it uses tokens. A quick question might use a few hundred. A complex task that involves researching competitors, writing a report, and formatting it five different ways could burn tens of thousands. Run that across a whole team doing things in the background all day, and your $20/month tool starts looking like an AWS bill.
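To make the per-task math concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it (the blended price, the token counts, the team size) is an illustrative placeholder, not a real vendor rate:

```python
# Rough per-task cost estimate for an AI agent workflow.
# All prices and token counts are illustrative placeholders,
# NOT real vendor rates.

PRICE_PER_1K_TOKENS = 0.01  # hypothetical blended $/1K tokens (input + output)

def task_cost(tokens_used: int, price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """Dollar cost of a single agent task."""
    return tokens_used / 1000 * price_per_1k

quick_question = task_cost(500)        # a few hundred tokens
research_report = task_cost(50_000)    # multi-step research + writing

# A 20-person team each running 10 background tasks a day, 22 workdays/month:
monthly = 20 * 10 * 22 * research_report

print(f"quick question:  ${quick_question:.4f}")
print(f"research report: ${research_report:.2f}")
print(f"team, per month: ${monthly:,.2f}")
```

Swap in your actual per-token prices and task sizes, and the same three lines tell you whether a tool that looks cheap per month is actually cheap per task.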

🦞  Jared, what are you talking about?

AI Agent = Imagine a robot that lives inside your computer. You give it a job to do — "research these three competitors and write me a summary" — and it goes and does it on its own. You don't click every step. It figures it out. That's an agent.

Tokens = The small chunks of text (roughly pieces of words) that an AI reads and writes. Tokens are the agent's currency: every word it reads, writes, or processes costs tokens, and tokens cost money. You can give an agent a token budget like an allowance, and when the budget runs out, it stops. A simple task might cost a few hundred. A big complex one could cost tens of thousands.

API Key = This is like your personal credit card number for accessing an AI's brain. When your agent plugs that key in, it charges your account for whatever work it does.

API Call = Every time your agent does something using that key — searches, writes, thinks — it's making a call. Like swiping a card. Each swipe has a cost.

This is exactly why Perplexity's launch of Computer this week is worth paying attention to. Instead of one AI doing everything, it routes each task to whichever of 19 different models is the best fit for that specific job. Need fast and cheap? Use Grok. Need deep reasoning? Use Claude. Need to pull from a lot of web sources? Use GPT. The result is you get better outcomes while spending fewer tokens — because you're not paying for a Ferrari to go get groceries.

OpenClaw let people build this kind of setup themselves, by hand. Perplexity packaged it into a product anyone can use. The underlying idea is the same: smarter routing means better results for less money — as long as budget is a factor you're managing. More on that in a minute.

🦞  Jared, what are you talking about?

Routing = Instead of always using your most powerful (and expensive) AI for everything, smart systems send each task to the right model for that job. Like using a food truck for lunch instead of a Michelin-star restaurant. Same hunger solved, very different cost.

Agent Architecture = This is the overall plan for how your AI agents are set up, what they can do, which models they use, how much they're allowed to spend, and how they hand tasks off to each other. Think of it like org chart design — but for robots. A well-designed architecture means your agents do more, waste less, and don't go rogue on your credit card.
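The routing idea can be sketched in a few lines of Python. The model names, tiers, and prices below are purely illustrative, not Perplexity's actual system; the point is the dispatch logic:

```python
# Minimal task-router sketch: send each task to the cheapest model
# that can handle it. Model names and prices are illustrative only.

MODELS = {
    # name: (capability tier, hypothetical $/1K tokens)
    "fast-small":  (1, 0.0005),
    "mid-general": (2, 0.003),
    "deep-reason": (3, 0.015),
}

def route(task_difficulty: int) -> str:
    """Pick the cheapest model whose tier covers the task's difficulty (1-3)."""
    eligible = [(price, name) for name, (tier, price) in MODELS.items()
                if tier >= task_difficulty]
    return min(eligible)[1]  # cheapest eligible model

print(route(1))  # a quick lookup goes to the cheap model
print(route(3))  # deep reasoning goes to the expensive one
```

That's the whole trick: the expensive model only runs when the task actually needs it, which is where the token savings come from.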


The Speed of Change Should Unsettle You

Here's the number that stopped me this week.

On September 29th, 2025, Anthropic released Claude Sonnet 4.5 — keep that date in mind. When it came out, it was considered near-frontier AI. Meaning: one of the smartest, most capable models available to the public. Cutting-edge.

On February 16th, 2026 — less than five months later — Alibaba released Qwen 3.5. It benchmarks competitively with that same Claude Sonnet. Qwen 3.5's Flash tier costs roughly 1/13th of Sonnet's per-token price. In some benchmarks it responds six times faster. And the full model is open-source under Apache 2.0, meaning anyone can download it and run it. Free. Without internet.

September 29th: cutting-edge AI. February 16th: free, runs on your laptop, no internet required. That's less than 5 months.

🦞  Jared, what are you talking about?

Open-source = The full design and code is free and public. Anyone can download it, use it, change it, build on it — no license fee, no monthly cost.

Apache 2.0 = A type of open-source license that basically says: use this however you want, commercially or personally. Just give credit.

Benchmark = A standardized test that measures how capable an AI model is. Think of it like a GPA or an IQ test for AI models.

Running locally / on your desktop = Instead of sending your questions to a server somewhere else (like you do with ChatGPT), the AI runs entirely on your own computer. Your data never leaves. Nothing is sent anywhere. Tools like Ollama and LM Studio are free apps that let you do exactly this — download a model and run it on your own machine in minutes, no technical background needed.

Let that land. What required enterprise infrastructure and a premium subscription in September now runs on a laptop in February. The pace of that compression is unlike anything that's happened in software before.

For businesses with real privacy concerns — legal, healthcare, finance, HR — this changes everything. The question is no longer "can we afford capable AI?" It's "do we have someone who knows how to set it up?"

That's a very different conversation than most organizations are having right now.


A New Prediction

Last article I predicted the big players would target OpenClaw's abilities as the direction consumers wanted. I may have underestimated how quickly. This time I'll make a new prediction — something to think about.

For most people, and most businesses, the future of AI is going to come in tiers based on what you can afford to spend. Smart agent architecture — choosing the right models for specific tasks, setting budgets, running cheaper local models where you can — will be what separates the organizations that scale AI well from the ones that can't absorb the cost. There will be a whole industry built around helping regular businesses use AI efficiently. That's the mass market.

🦞  Jared, what are you talking about?

Agent architecture = The blueprint for how your AI setup runs. Which models do which jobs? How much can each one spend? What happens when one finishes and needs to hand off to another? Good architecture means your AI does more, wastes less, and stays inside your budget. Bad architecture means it burns through money doing things the slow expensive way when a cheaper tool would've worked fine.
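As a sketch of what "stays inside your budget" means mechanically, here's a toy token-budget guard in Python. It's entirely illustrative, not any real agent framework's API:

```python
# Toy token-budget guard for an agent: every step deducts from an
# allowance, and the agent refuses to run a step it can't afford.
# Purely illustrative -- not any real agent framework's API.

class BudgetedAgent:
    def __init__(self, token_budget: int):
        self.remaining = token_budget

    def run_step(self, estimated_tokens: int) -> bool:
        """Run one step if the budget covers it; return whether it ran."""
        if estimated_tokens > self.remaining:
            return False  # over budget: stop instead of overspending
        self.remaining -= estimated_tokens
        return True

agent = BudgetedAgent(token_budget=10_000)
print(agent.run_step(4_000))   # runs; 6,000 tokens left
print(agent.run_step(5_000))   # runs; 1,000 tokens left
print(agent.run_step(2_000))   # refused: would exceed what's left
```

Good architecture is this idea applied everywhere: hard spending limits per agent, plus handoff rules so a cheap model gets first crack at every task.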

But there's a second tier that none of those rules apply to.

Governments, militaries, and the largest companies on earth are not building AI systems around individual affordability. The US Department of Defense is not worried about its monthly token bill. A sovereign wealth fund managing trillions doesn't pick models based on what runs on a Mac Mini. When you take cost completely off the table, the gap between what they can run and what everyone else can run doesn't just grow — it compounds every month.

The US Military is not budgeting for tokens. That gap between what they can run and what you can afford gets wider every single month.

While the rest of the world figures out efficiency, the top-tier players run unlimited agents, around the clock, processing information at a scale nothing else can match. And here's where it gets uncomfortable for anyone running a business.

Companies like Google, Anthropic, OpenAI, and Meta will eventually be able to enter any market where their AI is now capable of doing the core work. They have the models. They have the infrastructure. They have no meaningful token constraints. The businesses most at risk are the ones whose entire value comes from doing something that AI can now do — information retrieval, first drafts, workflow management, rule-based analysis, contract review, threat monitoring, customer service, basic research.

This isn't that different from what Walmart did to small businesses. Walmart didn't make a better product. They just had the scale to operate at a cost that local stores couldn't survive against. The handful of companies sitting on top of AI infrastructure can eventually deploy that same logic across nearly any knowledge-work industry — entering markets at a cost and speed that most standalone businesses simply cannot compete with.

The businesses that survive this won't necessarily be the ones using the most AI. They'll be the ones that have something AI can't replicate — deep relationships, specific expertise, trust built over years, or the judgment to know when not to use it.

Walmart didn't make a better product. They just had the scale to operate at a cost nobody else could match. The top AI companies are building the same advantage across every knowledge-work industry at once.

When Big Labs Ship Products, Industries Don't Evolve — They Reprice Overnight

There's a pattern that most people haven't caught up to yet.

When a company like Google or Anthropic releases a real product — not a research paper, an actual thing you can use — entire industry categories can reprice in a matter of weeks.

Take Google's Lyria 3. Suno, an AI music generation company, was valued at $2.5 billion. It was genuinely impressive, best-in-class. Then Google folded comparable capability directly into its own platform — free, as a feature inside a product people already had. The standalone value of anything Suno was doing didn't slowly decline. It became a question mark in an afternoon.

The same thing is playing out across much bigger industries. Anthropic's Claude Cowork isn't just a productivity tool. It's a direct challenge to entire categories of software: legal research tools, cybersecurity workflow platforms, and more. When an AI can handle multi-step work that previously required specialized software — automatically, at scale — that software's reason to exist doesn't fade over a few quarters. The market figures it out fast.

🦞  Jared, what are you talking about?

SaaS = Software as a Service. Every app you pay a monthly fee for: your project manager, your CRM, your legal contract tool, your invoicing software. Anything you access through a browser and subscribe to.

Claude Cowork = Anthropic's product that lets AI agents manage files, coordinate tasks, and run workflows across your computer on your behalf — doing what many of those SaaS tools do, automatically.

Valuation getting crushed = When the thing a company charges a lot for... suddenly comes free inside a tool everyone already has. The business world doesn't wait politely for the next earnings call. It adjusts immediately.

The most dangerous competitor you'll face in the next couple of years probably isn't another company in your industry. It's a feature that one of the big AI labs ships on a random Tuesday.


The Bigger Picture

Here's how I'd summarize where all of this is going.

AI capability is becoming free. The models that cost serious money six months ago are now open-source and running on laptops. That means having access to capable AI is no longer a competitive advantage by itself. Everyone will have it.

What separates organizations going forward isn't whether they're using AI. It's how well they've set it up. Which models are they using for which jobs? Are they managing token costs intelligently? Are they building agents that actually work together, or just throwing expensive models at every problem?

For most businesses, that's the game for the next few years. Efficient architecture, smart cost management, and building internal knowledge on how to deploy it well.

But above that layer, for organizations with no budget ceiling — the gap between what they can do and what everyone else can do is going to compound in a way that's hard to overstate. And the largest AI companies themselves will eventually have the capability to walk into nearly any knowledge-work industry and operate at a cost no standalone competitor can match.

The people who will still be standing aren't the ones who tried to out-compute the biggest players. They're the ones who built something that can't be automated — real relationships, earned trust, genuine expertise. The things that still require a human in the room.

Access to capable AI is no longer a competitive advantage. Everyone will have it. What matters now is judgment — knowing how to use it, when not to, and what only a human can do.

What are you seeing in your own world? Are the conversations changing at your company — or are people still treating AI like just another subscription to add to the stack?

#AI   #AgenticAI   #Leadership   #FutureOfWork   #OpenClaw   #DigitalTransformation   #Strategy
