> SPOTLIGHT
WHAT MATTERS TODAY

Anthropic's paid subscriptions have more than doubled since January, according to data from 28 million anonymized U.S. consumer credit card transactions. Most new subscribers chose the $20/month Pro tier. Three forces converged at once: Super Bowl ads that directly mocked ChatGPT, widespread coverage of the Anthropic-Department of Defense feud over military AI use, and a dense launch schedule that included Claude Code, Cowork, and Computer Use in quick succession.
⮕ Consumer AI is entering a brand differentiation phase where controversy and marketing drive paid conversions as effectively as capability does. For teams building on or competing with Claude, user lock-in is beginning to form around specific AI brands, not just specific models. Positioning and trust signals now carry as much weight as benchmark scores.
The fastest unicorn in Y Combinator history (17 months to a $1.1 billion valuation) closed a $170M Series A led by Benchmark and EQT Ventures, bringing total capital raised to $200M. Starcloud's first satellite, carrying an Nvidia H100, has been in orbit since November 2025. Next up: Starcloud-2 with Blackwell chips later this year, then Starcloud-3, a 200-kilowatt spacecraft designed to launch from SpaceX Starship's "pez dispenser" deployment system. The core thesis: terrestrial AI energy constraints cannot be resolved fast enough through conventional infrastructure buildout.
⮕ When the fastest-growing unicorn in YC history is solving the AI energy problem from orbit, the signal is clear: leading investors believe terrestrial compute infrastructure is hitting physical limits that conventional approaches cannot fix in time. Teams building AI products should stress-test the assumption that cheap, abundant compute will remain stable. The infrastructure layer is being redesigned from first principles.
France's Mistral drew $830M in debt financing from seven global banks, including HSBC, BNP Paribas, Crédit Agricole CIB, and Bpifrance. This is Mistral's first debt raise since the company was founded in 2023. The proceeds fund a 44MW data center near Paris housing 13,800 Nvidia GB300 GPUs, with a longer-term target of 200MW of European compute capacity by end of 2027.
⮕ A mid-tier foundation model lab is now treating owned compute infrastructure as a competitive necessity, not an option. Debt financing rather than equity is becoming the capital structure of choice for AI compute buildouts at this scale. For builders in Europe, this is the first foundation model provider constructing sovereign, on-soil compute designed to align with EU AI Act requirements and data residency obligations.
> SIGNAL HEADLINES
CAPTURE THE SHIFT
Ross Nordeen, Musk's "right-hand operator," departed on March 28, one day after co-founder Manuel Kroiss, who led the pretraining team. Nordeen is the tenth original co-founder to exit since 2023. Following SpaceX's acquisition of xAI in February, Musk stated the company is being "rebuilt from the foundations up." The founding DNA has been fully replaced by SpaceX's organizational structure, and that shift carries risk for any developer currently building production workloads on Grok's API.
A DeepSeek service disruption began Sunday evening in China, affecting both consumer users and developers waiting for the company's next major model release. DeepSeek has not disclosed the cause. The timing sharpens an unresolved question: as production workloads shift onto open-weight model APIs, reliability at scale has not been matched by the infrastructure investment needed to support it.
> ONE PRACTICAL TODAY
5 steps to build an AI agent that does not hallucinate, from an engineer running agents in production

Most AI agents fail in production for the same reason: not because the model is weak, but because the agent lacks a structure that forces it to verify its own work before producing output. Here is how to build one that holds up:
Step 1. Give your agent a one-sentence identity written as a practical statement. "I analyze customer feedback to find product improvement opportunities" produces far better results than "I help with feedback." One clear purpose sentence changes how the agent frames every task it encounters.
Step 2. Define explicitly what the agent will not do. Write what it will do, then write what it will not: "I summarize documents. I will not make recommendations." That single constraint eliminates most scope-level hallucination before the agent starts processing.
Step 3. Force the agent through an Observe, Reflect, Act loop before any output. Observe: what are the facts in front of me? Reflect: what do they mean together, and what is missing? Act: given that analysis, what is the correct output? The best-performing agents in production are not the most capable. They are the ones that check their own work.
Step 4. Add a mandatory validation checkpoint before delivering any output. The agent asks itself: Am I confident? What could make this result wrong? Is this complete and accurate? This step cuts error rates in ways that prompt engineering alone cannot reach.
Step 5. Hardcode honest limitations directly into the system prompt. "I cannot analyze images." "I may miss context from conversations I have not seen." "Complex legal questions require further review." This is not a design weakness. It is reliability.
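The five steps above can be sketched as a small system-prompt builder. This is a minimal illustration, not a fixed recipe: the function name, the example identity, and the constraint and limitation strings are all hypothetical, and you would adapt the wording to your own agent.

```python
def build_agent_prompt(identity, will_not_do, limitations):
    """Assemble a system prompt encoding Steps 1-5 of the framework."""
    sections = [
        # Step 1: one-sentence identity, written as a practical statement
        f"Identity: {identity}",
        # Step 2: explicit scope constraints (what the agent will NOT do)
        "Out of scope: " + " ".join(will_not_do),
        # Step 3: Observe / Reflect / Act loop before any output
        ("Before answering, work through three stages: "
         "Observe (list the facts in front of you), "
         "Reflect (state what they mean together and what is missing), "
         "Act (only then produce the output)."),
        # Step 4: mandatory validation checkpoint before delivering
        ("Before delivering, ask yourself: Am I confident? "
         "What could make this result wrong? Is it complete and accurate? "
         "If any answer is no, say so instead of guessing."),
        # Step 5: honest limitations, hardcoded into the prompt
        "Known limitations: " + " ".join(limitations),
    ]
    return "\n\n".join(sections)

prompt = build_agent_prompt(
    identity="I analyze customer feedback to find product improvement opportunities.",
    will_not_do=["I will not make recommendations.",
                 "I will not speculate about data I have not seen."],
    limitations=["I cannot analyze images.",
                 "I may miss context from conversations I have not seen."],
)
print(prompt)
```

Keeping the five pieces as separate, labeled sections makes it easy to audit which step is missing when an agent starts drifting, which is usually the first debugging question in practice.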
Techzip note: This framework is most effective when building agents for multi-step workflows or when you need consistent output across many runs. If your agent is producing different results with the same input each time, the cause is almost always a missing Step 1, Step 2, or both.
> PRESENTED BY ATTIO
Here’s how I use Attio to run my day.
Attio is the AI CRM with conversational AI built directly into your workspace. Every morning, Ask Attio handles my prep:
Surfaces insights from calls and conversations across my entire CRM
Updates records and creates tasks without manual entry
Answers questions about deals, accounts, and customer signals that used to take hours to find
All in seconds. No searching, no switching tabs, no manual updates.
Ready to scale faster?
> WORTH READING
ANALYSIS & THESIS
Anthropic co-founder Jack Clark introduces Moltbook, a social network where AI agents, not humans, are the primary participants. He uses it to ask a harder question: when synthetic minds outnumber human participants on the internet by orders of magnitude, what does that experience look like from inside? The essay argues that humans will need their own "emissary agents" to navigate spaces they can no longer read or participate in directly. It is one of the clearest frameworks yet for thinking about what an agentic internet actually means for everyone building on top of it.
Why it made the cut: Clark asks the questions that most AI coverage avoids, and he asks them before the answers become obvious.
Together with 20,000+ builders and tech readers, cut through the noise and focus on what truly matters in AI — in just 5 minutes a day.



