> SPOTLIGHT

WHAT MATTERS TODAY

Security researchers found 3,000 unpublished Anthropic documents in an unsecured database, including drafts for Claude Mythos, a new tier above Opus that scores "dramatically higher" on coding, reasoning, and cybersecurity. Anthropic confirmed it is real: "a step change" and "the most capable we have built." Cybersecurity stocks dropped 3 to 7 percent on the cyber capabilities warning.

The critical line in the drafts: Mythos "will be very expensive for our customers to use." Claude users on $100 to $200 monthly plans are already hitting rate limits within one business hour. OpenAI reset Codex limits to zero roughly 12 times in March.

⮕ Frontier AI is bifurcating: capability is accelerating while accessibility is not. For anyone building on Claude's API today, evaluate dependency risk now and diversify your stack before Anthropic reprices post-IPO.

Limbic, a UK mental health startup, published results in Nature Medicine showing its system outperformed trained clinicians delivering CBT-based support, measured by patient outcomes. This is the first peer-reviewed evidence in a major journal that an AI is clinically superior in a full therapeutic delivery category, not just faster or cheaper.

⮕ Healthcare AI is crossing from "promising" to "clinically superior," and a Nature Medicine benchmark sharply accelerates regulatory and reimbursement timelines. For any team whose assumed moat is human expertise: once superiority is provable and auditable, that moat does not hold. The question shifts from whether AI can do this to who gets licensed to deploy it and who is liable when it fails.

> SIGNAL HEADLINES

CAPTURE THE SHIFT

Wharton researchers gave nearly 1,000 high school students ChatGPT for math practice. Students scored 48 percent better during practice and 17 percent worse on independent exams. Dependency loop, not skill loop. Relevant to anyone designing AI-assisted learning products: if the tool does the work instead of teaching it, the outcome metrics will fail.

Suno now clones your verified voice and trains custom style models across songs, for $10 per month. It is the first AI-native audio company to publicly confirm $300M ARR. Professional audio pipelines now have a well-funded, technically capable competitor with proven retention.

AI agents are now outpacing every human trader across prediction markets simultaneously. Prediction markets were considered the last domain to favor human judgment over automation. Every information-speed moat is facing the same pressure.

A take-home coding test for performance engineers was rebuilt around puzzle-game formats that are currently off-distribution for AI training data. The published post is the most specific public data available on where human comparative advantage still holds under time pressure.

> ONE PRACTICAL TODAY

Use /design-consultation in Claude Code before writing a single line

Most AI-assisted coding sessions fail the same way: implementation starts before the design is settled. The result is an architecture that looks functional on first pass but collapses when features are added. The design phase gets skipped not because engineers do not know better, but because starting to code feels like progress.

Here is how to run the consultation first:

Step 1. Open Claude Code and type /design-consultation. Describe the feature you want to build in natural language. No pseudocode, no specs, just the outcome you are trying to create.

Step 2. Engage with every question Claude raises about UX flow, edge cases, and data structures before writing anything. This is where the architecture gets set. Skipping it is where the refactor debt starts.

Step 3. Once you and Claude have agreed on the approach, switch to implementation in the same session. Claude retains full context from the design conversation.

Step 4. Test and ship before closing the session. Design, implement, and deploy: one session, no rework.
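The four steps above can be sketched as a single session. This is an illustrative transcript, not real Claude output: the feature, the questions, and the follow-up prompts are assumptions for the sake of the example; only the /design-consultation command comes from the workflow itself.

```
$ claude
> /design-consultation
> I want a rate-limit dashboard showing per-user API usage in real time.

  # Step 2: Claude probes UX flow, edge cases, and data structures, e.g.
  #   "Rolling window or fixed window for usage counts?"
  #   "What should the UI show if the metrics store is unreachable?"
> Rolling window. On store failure, show cached values with a staleness banner.

  # Step 3: same session, so the full design context carries over
> We're agreed on the approach. Implement it.

  # Step 4: test and ship before closing the session
> Run the test suite, then commit and deploy.
```

The point of the transcript is the ordering: no implementation prompt appears until every design question from Step 2 has an explicit answer.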

Techzip note: A trading platform engineer using this workflow reported that the gap between idea and working prototype is now essentially zero, going from design conversation to deployed feature in a single session. /design-consultation does not add time to the process. It eliminates the time that would otherwise accumulate across later iterations.

> WORTH READING

ANALYSIS & THESIS

Anthropic co-founder Jack Clark uses Moltbook, a social network built for AI agents on top of OpenClaw, as the first real-world demonstration of agent ecology at scale. Tens of thousands of agents are posting, trading, and influencing each other in ways that are increasingly opaque to human observers. Clark's central argument: as synthetic-to-synthetic communication becomes the majority of online activity, humans will need "translation agents" to navigate it, and the critical open question is whether those agents will remain genuinely ours.

Why it made the cut: The most concrete framework available for what the post-human internet actually looks like in practice, not as a thought experiment. Directly relevant to anyone thinking about content strategy, training data sourcing, or what maintaining a human-readable online presence is actually worth.

A CSET expert workshop from July 2025 asked how far AI R&D automation can go. Frontier labs are already using AI to design model architectures, optimize training runs, and write code. If the loop closes entirely, AI progress could accelerate 10x to 1,000x compared to human-only R&D. The report identifies this as a potential source of strategic surprise: whichever organization closes the loop first compounds advantage at a rate others cannot match without already being inside it.

Why it made the cut: The first serious policy-level framework for what happens when AI starts systematically improving AI. If you follow why labs are investing heavily in internal AI tooling rather than solely in public model releases, this is the underlying logic.

Imas synthesizes the full research literature on AI and productivity. Micro-level studies consistently show gains of 50 percent or more at the individual task level, yet none of it has appeared in aggregate macro statistics. Imas explains the lag through three frameworks: O-ring automation (AI accelerates some tasks while human bottlenecks remain), endogenous adoption patterns (firms are still reorganizing to unlock the gains), and an early-stage efficiency dip that temporarily masks real productivity improvements. His March 2026 update notes the first aggregate data showing a clear upward revision.

Why it made the cut: The clearest available answer to "where is the AI productivity boom?" The three frameworks are immediately applicable to anyone deciding whether to prioritize AI tool training or organizational redesign first.

Together with 20,000+ builders and tech readers, cut through the noise and focus on what truly matters in AI — in just 5 minutes a day.