> SPOTLIGHT

WHAT MATTERS TODAY

OpenAI introduced a new subscription tier at $100/month on April 9, filling the gap between its $20 Plus plan and $200 Pro plan. The new tier offers 5x more Codex usage than Plus, with limits raised to 10x through May 31. The explicit framing, per TechCrunch and CNBC, is that this is OpenAI's direct response to Anthropic's $100 Claude Code subscription.

⮕ The developer subscription war is no longer about price: both products now cost $100/month. Differentiation will shift entirely to output quality at the task level. For developers currently on Claude Code, this is a direct invitation to run a side-by-side comparison. The company that retains engineers will be the one that consistently produces fewer bugs, not the one that holds a pricing advantage.

Amazon CEO Andy Jassy's annual shareholder letter, released April 9, disclosed that Amazon will spend approximately $200 billion in capital expenditures in 2026, the majority going toward AI infrastructure. For the first time, Jassy confirmed AWS AI revenue is running at a $15 billion annual run rate, a figure 260x larger than AWS cloud revenue at the same stage of its development. Jassy noted that Amazon already holds customer commitments for a substantial portion of the planned spend, citing a $100 billion-plus deal with OpenAI as one anchor.

⮕ When a cloud business reaches $15 billion ARR in three years and the parent company is committing $200 billion more on top of it, the infrastructure layer has stopped being a bet and started being a compounding advantage. For teams evaluating compute partners, Amazon's spend signals a decade-long runway of AWS capacity expansion. Lock-in is not a risk at this stage: it is the point.

Bret Taylor, co-founder and CEO of the enterprise AI agent company Sierra, told TechCrunch on April 9 that agentic AI does not add a layer on top of existing software: it replaces the interface entirely. Sierra launched Ghostwriter, an agent that builds and deploys other specialized agents. Taylor described standing up a custom agent for Nordstrom in four weeks. His framing: most companies do not want to make software. They want outcomes.

⮕ Taylor is describing a structural change in what enterprise software is sold as. If outcomes replace UIs as the unit of value, then the moat of traditional SaaS, the interface itself, becomes irrelevant. For anyone building a product with a UI layer on top of an AI workflow, the question has shifted from "how do we make this easier to navigate?" to "at what point does the interface disappear entirely, and what does the pricing model look like when it does?"

> SIGNAL HEADLINES

CAPTURE THE SHIFT

Elon Musk posted on X that xAI is running seven models in parallel training at Colossus 2 (over 700,000 GPUs), including two 1T-parameter models, two 1.5T models, one 6T model, and one 10T model. No other lab has publicly confirmed training at this parameter scale. Musk acknowledged xAI has "some catching up to do." Pre-training on the 10T model is expected to take two months.

Canva acquired two companies founded by the same brothers on April 8: Simtheory, an AI agent management platform, and Ortto, a customer data and marketing automation tool. The dual acquisition follows three other deals in the past six weeks. Canva is building toward a full-cycle marketing platform where design, publishing, and performance measurement live inside one agentic system. Terms were not disclosed.

Florida Attorney General James Uthmeier announced an investigation into OpenAI on April 9, citing the FSU shooting of April 2025, in which court records list 272 ChatGPT conversations as evidence. The suspect allegedly asked ChatGPT how the public would react to a shooting at FSU and what time the student union would be busiest. OpenAI said it will cooperate. This is the most direct instance yet of a state-level investigation tying AI output to real-world harm.

A federal appeals court in Washington, D.C., denied Anthropic's request to temporarily block the Department of Defense's supply-chain risk designation. The DOD blacklisted Anthropic in early March after contract negotiations stalled over whether Claude could be used for autonomous weapons. A separate San Francisco court granted Anthropic an injunction on a different statute last month. The result: Claude use is not banned, but Anthropic is excluded from new Pentagon contracts for now.

> ONE PRACTICAL TODAY

Give your AI agent a permanent memory with GBrain

Most AI coding agents start every session blank. They do not remember the architecture decisions from last week, the reasoning behind a naming convention, or why a particular file structure exists. The result: you spend the first ten minutes of every session re-explaining context that has not changed. GBrain, an MIT-licensed project from Y Combinator president Garry Tan, builds a Postgres-backed semantic memory layer that persists across sessions and connects directly to your agent via a CLAUDE.md configuration.

Here is how to set it up:

Step 1. Clone the repository at github.com/garrytan/gbrain and run the install script. GBrain uses Postgres with pgvector for vector similarity search. Supabase Pro ($25/month) is the recommended zero-ops backend.

Step 2. Paste the provided CLAUDE.md snippet from the repo into your project's CLAUDE.md file. This configures your agent to query the memory layer automatically at the start of each session, without any manual prompting from you.

Step 3. Run the indexing command on your markdown files, architecture notes, or project docs. Initial embedding costs approximately $4 to $5 for 7,500 pages via OpenAI's text-embedding-3-large model.
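The $4-to-$5 figure is consistent with OpenAI's published pricing under a reasonable assumption about page density. A quick back-of-the-envelope, assuming text-embedding-3-large's list price of $0.13 per million tokens and roughly 4,500 tokens per dense page (the per-page token count is our assumption, not a figure from the GBrain docs):

```python
# Back-of-the-envelope embedding cost estimate for the Step 3 indexing run.
# Assumptions (not from the GBrain repo): $0.13 per 1M tokens for
# text-embedding-3-large, and ~4,500 tokens per dense documentation page.
PRICE_PER_MILLION_TOKENS = 0.13  # USD, text-embedding-3-large list price
TOKENS_PER_PAGE = 4_500          # assumed; varies widely with page density
PAGES = 7_500

total_tokens = PAGES * TOKENS_PER_PAGE
cost = total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
print(f"{total_tokens:,} tokens -> ${cost:.2f}")  # 33,750,000 tokens -> $4.39
```

Sparser notes (fewer tokens per page) would land below the quoted range; the estimate scales linearly with corpus size either way.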

Step 4. Open a new Claude session and ask about a decision from a previous session. The agent should retrieve the relevant context without you providing it. If it does not, check that the CLAUDE.md snippet is in the correct directory.
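Under the hood, the recall in Step 4 is vector similarity search: the agent embeds the question, and pgvector returns the stored notes whose embeddings sit closest to it. Here is a minimal pure-Python sketch of that ranking logic; the toy 3-dimensional vectors and note texts are illustrative stand-ins for real embeddings, and GBrain itself does this ranking in SQL via pgvector rather than in Python:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy memory store of (note, embedding) pairs. In a real deployment the
# embeddings come from an embedding model and live in a pgvector column.
memory = [
    ("We chose Postgres over Mongo for transactional writes", [0.9, 0.1, 0.0]),
    ("File names use kebab-case to match the deploy tooling", [0.1, 0.8, 0.2]),
    ("The worker queue retries at most three times",          [0.0, 0.2, 0.9]),
]

def recall(query_embedding, k=1):
    """Return the k stored notes most similar to the query embedding."""
    ranked = sorted(memory,
                    key=lambda m: cosine_similarity(query_embedding, m[1]),
                    reverse=True)
    return [note for note, _ in ranked[:k]]

# A query about database choice should surface the Postgres decision.
print(recall([0.85, 0.15, 0.05]))
```

In Postgres, the equivalent ranking is a single query using pgvector's cosine-distance operator, on the order of `ORDER BY embedding <=> $query LIMIT k`, which is what makes the memory layer fast enough to run at the start of every session.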

Techzip note: The value here is not speed. It is continuity. A coding agent that remembers your system architecture, prior decisions, and naming conventions produces structurally better output than one starting from scratch every time. For teams running multi-week projects on Claude Code or similar agents, the $25/month infrastructure cost pays for itself the first time you do not have to re-explain why a core design choice was made.

> PRESENTED BY HUBSPOT

Turn AI into Your Income Engine

Ready to transform artificial intelligence from a buzzword into your personal revenue generator?

HubSpot’s groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age.

Inside you'll discover:

  • A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets—each vetted for real-world potential

  • Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background

  • Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve

Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting.

> WORTH READING

ANALYSIS & THESIS

Every runs six products, a media arm, and a consultancy with a team of 25, using four custom AI agents built with Notion AI. Parrott walks through exactly which tasks each agent owns, what the actual prompts look like, and where the failure points are. One agent alone handled a full OKR cycle in two days.

Why it made the cut: most writing about AI-assisted operations stays at the abstract "we use AI to move faster" level. This piece gives specific systems, specific prompts, and honest failure points. It is the type of operational writing that practitioners can actually act on.

Romero argues that token spend has become a fundamentally broken performance metric. Engineers at top labs are now spending more on tokens than they earn in salary. Some are deliberately running bot loops with the sole goal of burning tokens. Romero traces this back to Jensen Huang framing token spend as a status signal, and argues that measuring AI productivity by tokens generated is structurally equivalent to measuring a construction crew by how much lumber they moved, not by what they built.

Why it made the cut: if the measurement is wrong at the foundation, the business decisions built on top of it are also wrong. Romero gives a name and a mechanism to something practitioners have quietly noticed but not fully articulated.

Anthropic released its most capable model, Mythos, to only 12 large enterprise partners for defensive cybersecurity work. Fernholz examines whether the selective release is primarily a safety decision or also a strategic one. His argument: limited distribution protects against distillation by competitors, and early access for AWS and JPMorgan builds an enterprise flywheel that is difficult to unwind.

Why it made the cut: it is the sharpest recent piece on the tension between safety framing and competitive strategy at the frontier. Both motivations can be true simultaneously. Fernholz does not force a verdict, and that restraint is what makes the piece worth reading.

Together with 20,000+ builders and tech readers, cut through the noise & focus on what truly matters in AI — in just 5 mins a day.