
> SPOTLIGHT

WHAT MATTERS TODAY

Anthropic reached $30B in annualized revenue. OpenAI sits at $24B. This is the first time a challenger has passed the incumbent on this metric. The trajectory: $9B at end of 2025, $14B by February, $30B now. The company added $6B in a single month and serves over 1,000 enterprise customers each paying $1M+ annually. OpenAI projects $14B in losses this year and profitability no earlier than 2030.

⮕ Every enterprise comparison between Claude and GPT now happens with Anthropic as the revenue leader. "Safer" is no longer the only reason to choose it. If you build on the Claude API, model investment will accelerate — but when the IPO arrives in Q4 2026, Anthropic's priorities will shift from developer goodwill toward margin. Review your cost structure before pricing adjusts.

Ronan Farrow and Andrew Marantz published their investigation on April 6 based on interviews with 100+ people, a 70-page memo from former chief scientist Ilya Sutskever, and 200 pages of internal notes by Anthropic CEO Dario Amodei. Sutskever's memo starts with a list of alleged deceptions — first item: "Lying." Amodei's notes conclude: "The problem with OpenAI is Sam himself." Board members saw the documents as proof Altman "could not be trusted." The same week, CFO Sarah Friar was excluded from financial planning, and COO Brad Lightcap absorbed the duties of a departing CMO.

⮕ This is a governance risk premium that every enterprise AI decision needs to price in. Dependence on a platform whose leadership faces this level of scrutiny is a real business risk. If your stack runs primarily on OpenAI, this is a strong signal to add a second provider to your architecture.

> SIGNAL HEADLINES

CAPTURE THE SHIFT

OpenAI published a 13-page "Industrial Policy for the Intelligence Age" proposing automated labor taxes, a public wealth fund, a 32-hour workweek trial, and a safety net that auto-activates when AI displacement crosses defined thresholds. Attached: $100K grants and $1M in API credits. Frontier labs are no longer asking to be left alone. They are writing the social contract themselves.

Starting in May, Claude Code subscribers can no longer use their subscription allowance for third-party harnesses like OpenClaw. Each request requires a separate API key. Anthropic issued $1 in credits and a 30% discount to soften the backlash. Flat-rate subscriptions cannot sustain agentic usage at scale — every AI platform will face this eventually. Audit your harness costs now.

AI search queries are up 300 percent year over year. A new discipline is emerging: AEO, or Answer Engine Optimization — the goal is no longer to rank first, but to become the answer the AI returns. What Google took 15 years to corrupt, AI search is replicating in 18 to 24 months.

Anthropic targets an October 2026 IPO at a $60B+ valuation. OpenAI targets Q4 2026 at a $730B valuation but projects $14B in losses this year and peak burn of $85B in 2028. Investors tried to sell $600M in OpenAI shares on the secondary market and found no buyers. Anthropic attracted $1.6B in demand at a 50 percent premium. The market is pricing these two companies on entirely different logic.

> ONE PRACTICAL TODAY

Build a self-updating personal knowledge base using an LLM agent

Most people use LLMs to search or generate. A more powerful use case: have the LLM build and maintain a full knowledge base on a topic you are researching. One such wiki, on an active research topic, holds around 100 articles totaling 400K words — none written by hand. The LLM writes and updates everything; the human role is to curate sources and ask questions.

Here is how to build one:

Step 1. Create two folders, raw/ and wiki/, plus a CLAUDE.md file. Drop unmodified sources — articles, papers, repos, screenshots — into raw/. The LLM only reads from raw/; it never writes to it.

Step 2. Write CLAUDE.md as a schema: the wiki's purpose, naming conventions, and which entity types get dedicated pages (each model, each company, each concept). This tells the agent what to build each time it runs.
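As a sketch, a minimal CLAUDE.md might look like the following. The entity types, naming rules, and the "Sources" convention are illustrative assumptions, not a prescribed schema — adapt them to your topic:

```markdown
# Wiki schema

## Purpose
Maintain a structured wiki on <topic> in wiki/, synthesized from sources in raw/.

## Rules
- Read raw/ only; never modify it.
- File names: kebab-case, one entity per file (e.g. wiki/claude-3-opus.md).

## Entity types that get dedicated pages
- Model: capabilities, benchmarks, release date
- Company: products, funding, key people
- Concept: definition, related pages, open questions

## Conventions
- Every page ends with a "Sources" section listing the raw/ files it draws on.
```

The "Sources" footer matters most: it is what lets later health checks trace every claim back to a file you actually dropped into raw/.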

Step 3. Use Obsidian Web Clipper to save articles into raw/. Once you have five to ten, open Claude Code or Cursor, point it at the folder, and instruct: "Read everything in raw/, then create or update wiki/ according to CLAUDE.md."

Step 4. Once the wiki hits 30 or more articles, ask the agent complex questions directly. It knows your full domain and updates the wiki from its answers.

Step 5. Run periodic health checks: "Find pages in wiki/ missing important information. Fill from raw/ where possible. Flag what needs new sources."
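Parts of the Step 5 health check can also be automated outside the agent. A minimal Python sketch, assuming each wiki page ends with a "## Sources" list naming files in raw/ — a convention you would set in CLAUDE.md, not something any tool enforces:

```python
from pathlib import Path
import tempfile

def health_check(wiki_dir: Path, raw_dir: Path) -> list[str]:
    """Flag wiki pages that lack a Sources section or cite raw/
    files that no longer exist."""
    raw_files = {p.name for p in raw_dir.glob("*")}
    flagged = []
    for page in sorted(wiki_dir.glob("*.md")):
        text = page.read_text(encoding="utf-8")
        if "## Sources" not in text:
            flagged.append(f"{page.name}: missing Sources section")
            continue
        cited = [
            line.strip().removeprefix("- ").strip()
            for line in text.split("## Sources", 1)[1].splitlines()
            if line.strip().startswith("- ")
        ]
        stale = [s for s in cited if s not in raw_files]
        if stale:
            flagged.append(f"{page.name}: stale sources {stale}")
    return flagged

# Tiny demo on a throwaway directory tree.
base = Path(tempfile.mkdtemp())
(base / "raw").mkdir()
(base / "wiki").mkdir()
(base / "raw" / "paper.md").write_text("source text", encoding="utf-8")
(base / "wiki" / "good.md").write_text("# Good\n\n## Sources\n- paper.md\n", encoding="utf-8")
(base / "wiki" / "bad.md").write_text("# Bad\nno sources yet\n", encoding="utf-8")
print(health_check(base / "wiki", base / "raw"))
# → ['bad.md: missing Sources section']
```

A script like this catches structural drift cheaply; the agent prompt in Step 5 still handles the judgment calls, like which pages are missing important information.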

Techzip note: This is not RAG. RAG indexes to retrieve. This approach has the LLM actively synthesize and maintain knowledge. No vector database, no custom tooling — just markdown and an agent.
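To make the distinction concrete, here is a minimal sketch of one synthesize-and-maintain pass in Python. Everything here is an illustrative assumption: `call_llm` is a placeholder for whatever agent actually does the writing (Claude Code, a raw API call), and the stub below stands in for a real model response.

```python
from pathlib import Path
import tempfile

def update_wiki(raw_dir: Path, wiki_dir: Path, schema: str, call_llm) -> None:
    """One maintenance pass: hand every raw source plus the schema to
    the agent, then write whatever pages it returns. Only wiki/ is
    written; raw/ is read-only by construction."""
    corpus = "\n\n".join(
        f"=== {p.name} ===\n{p.read_text(encoding='utf-8')}"
        for p in sorted(raw_dir.glob("*"))
    )
    prompt = (
        f"{schema}\n\nSources:\n{corpus}\n\n"
        "Create or update wiki pages according to the schema above."
    )
    for name, body in call_llm(prompt).items():
        (wiki_dir / name).write_text(body, encoding="utf-8")

def stub_llm(prompt: str) -> dict[str, str]:
    # Stand-in for a real model call; returns one synthesized page.
    return {"topic.md": "# Topic\n\nSynthesized summary.\n\n## Sources\n- note.md\n"}

# Demo run against a throwaway directory tree.
base = Path(tempfile.mkdtemp())
(base / "raw").mkdir()
(base / "wiki").mkdir()
(base / "raw" / "note.md").write_text("raw note", encoding="utf-8")
update_wiki(base / "raw", base / "wiki", "One page per entity; cite sources.", stub_llm)
print((base / "wiki" / "topic.md").exists())
# → True
```

Note the difference from RAG in miniature: nothing is embedded or retrieved at question time. The model rewrites durable markdown pages, and those pages — not an index — are what you query later.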

Full workflow: gist.github.com/karpathy.

> PRESENTED BY DEEL.

Hiring in 8 countries shouldn't require 8 different processes

This guide from Deel breaks down how to build one global hiring system. You’ll learn about assessment frameworks that scale, headcount planning across regions, and intake processes that work everywhere. As HR pros know, hiring in one country is hard enough. So let this free global hiring guide give you the tools you need to avoid global hiring headaches.

> WORTH READING

ANALYSIS & THESIS

The piece argues that writing well with AI requires more rigor and editorial judgment, not less. When AI handles the mechanical work, human judgment — structure, taste, willingness to push back on weak output — becomes the only thing that matters.

Why it made the cut: if you use AI to write regularly and quality has been inconsistent, this piece identifies exactly where the breakdown happens.

Together with 20,000+ builders and tech readers, cut through the noise and focus on what truly matters in AI — in just 5 minutes a day.