In partnership with Wispr Flow

> WHAT MATTERS

TODAY’S 3 MOST IMPORTANT

Sora is done: app and API both shut down, six months after launch. The $1B Disney deal, covering licenses to more than 200 characters from Marvel, Pixar, and Star Wars, is canceled. Sam Altman told staff he is handing direct oversight of safety and security to others to focus on fundraising and building infrastructure at scale. The same day, OpenAI confirmed $10B in new capital, bringing its total raised above $120B. A new model codenamed "Spud" is in final development.

So what? OpenAI is consolidating around ChatGPT ahead of its IPO, cutting anything with high compute cost and no clear habit loop. The safety handoff signals where leadership attention is actually going. For any team building on OpenAI APIs outside core chat and code, this is a moment to reassess platform risk.

Auto Mode, now in research preview, replaces the --dangerously-skip-permissions flag. Instead of bypassing permission checks entirely, a dedicated classifier model reviews each tool call before it executes, blocking any action that exceeds the original task's scope or shows signs of prompt injection. Users can add custom filters on top.

So what? The actor-plus-classifier architecture is the first practical answer to the trust problem in production agentic AI: not limiting the agent, but adding an independent oversight layer. Any team building autonomous workflows can adopt this pattern directly. The question shifts from "can the agent do it?" to "who is checking what it does?"
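The pattern is easy to prototype. Below is a minimal Python sketch of the idea, not Auto Mode's actual API: every name here (OversightGate, propose_tool_calls, the workspace filter) is invented for illustration. The actor proposes tool calls; an independent gate runs cheap deterministic filters first, then asks a classifier whether the call stays within the original task.

```python
# A minimal sketch of the actor-plus-classifier pattern. Everything here is
# illustrative: OversightGate, propose_tool_calls, and the filter are invented
# names for this sketch, not the product's actual interface.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolCall:
    name: str
    arguments: dict

@dataclass
class OversightGate:
    """Independent reviewer: approves or blocks each proposed tool call."""
    classify: Callable[[str, ToolCall], bool]  # wraps the classifier model
    custom_filters: list[Callable[[ToolCall], bool]] = field(default_factory=list)

    def approve(self, task: str, call: ToolCall) -> bool:
        # Cheap, deterministic user filters run first.
        if any(not passes(call) for passes in self.custom_filters):
            return False
        # Then the dedicated classifier judges task scope and injection risk.
        return self.classify(task, call)

def run_with_oversight(task: str, actor, gate: OversightGate) -> None:
    """The actor proposes tool calls; the gate reviews each before execution."""
    for call in actor.propose_tool_calls(task):
        if gate.approve(task, call):
            actor.execute(call)
        else:
            actor.report_blocked(call)  # surface refusals, don't silently drop them

# Example custom filter: block actions that reach outside the working directory.
def stays_in_workspace(call: ToolCall) -> bool:
    path = str(call.arguments.get("path", ""))
    return not path.startswith("/")
```

The key design choice is that the gate is a separate component the actor cannot modify, so a prompt-injected actor cannot talk its way past its own reviewer.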

U.S. District Judge Rita F. Lin opened the March 24 hearing with a direct challenge: the "supply chain risk" designation looked like punishment, not a genuine security measure. The Pentagon applied the label after Anthropic refused to let Claude be used for autonomous lethal weapons decisions and mass civilian surveillance. The ban has cost Anthropic hundreds of millions of dollars in canceled contracts. A preliminary injunction ruling is expected within days; Anthropic's requested deadline is March 26.

So what? This is the first time a U.S. federal court has directly questioned whether a government can blacklist an AI company for refusing to build autonomous weapons. If Anthropic wins, safety refusals gain legal standing, not just moral framing. If it loses, governments gain leverage over AI labs that current regulations do not provide.

> SIGNAL HEADLINES

Capture the shift

The Arm AGI CPU is a 136-core data center processor built for AI inference, co-developed with Meta over two years. Seven other customers, including OpenAI and Cloudflare, have already signed on, and Arm projects $15B in revenue from the chip line. After 35 years of licensing IP to others, Arm is now a chip maker. If it captures even a fraction of Meta's $135B capex this year, Arm's economics change significantly.

Threat actor TeamPCP backdoored two LiteLLM releases by stealing PyPI credentials through a compromised Trivy GitHub Action. The payload runs automatically on every Python startup and harvests SSH keys, cloud credentials, Kubernetes configs, and API keys, then exfiltrates them to an attacker-controlled domain. The malicious versions were live on PyPI for roughly five hours on March 24. If your team runs LiteLLM, audit your credentials now.
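For teams that want a scripted first pass, here is a minimal sketch that checks the installed version against a known-bad list. The two affected release numbers are not named above, so the KNOWN_BAD entries are explicit placeholders; fill them in from the official advisory before relying on this.

```python
# Quick local check for the compromised LiteLLM releases. KNOWN_BAD is a
# placeholder set: the affected version numbers are not named in this item,
# so replace them with the versions from the official advisory.

from importlib.metadata import PackageNotFoundError, version

KNOWN_BAD = {"0.0.0-placeholder-a", "0.0.0-placeholder-b"}  # replace with advisory versions

def audit_litellm() -> None:
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        print("litellm is not installed in this environment.")
        return
    if installed in KNOWN_BAD:
        print(f"litellm {installed} is a compromised release. Rotate SSH keys, "
              "cloud credentials, Kubernetes configs, and API keys now.")
    else:
        print(f"litellm {installed} is not on the known-bad list, but rotate "
              "credentials anyway if it was installed during the exposure window.")

if __name__ == "__main__":
    audit_litellm()
```

Because the payload ran on interpreter startup, a clean version check is not an all-clear: anything installed during the five-hour window should be treated as exposed.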

The March 23 issue covered ChatGPT enabling ads with a $60 CPM and Criteo as ad-tech partner. Now OpenAI has hired Dugan, who spent more than a decade building Meta's global client and agency relationships. The hire confirms OpenAI is building a real ads operation, not running an experiment.

> HOW TO AI

Talk to Claude instead of typing

Most people write shorter prompts than they should, because typing is slow. Less context means worse output.

Setup takes five minutes:

  1. Download Superwhisper.

  2. Go to Settings, set your shortcut to double-tap the Option key.

  3. Open Claude, click into the chat input.

  4. Press the shortcut and speak your task as if explaining it to a colleague. Example: "I need a short email declining a partnership. The reason is timeline. Friendly but firm, around 150 words."

  5. Superwhisper converts your speech to text and pastes it automatically.

No formatting needed. No prompt engineering. Just describe the problem and the outcome you want.

Andrej Karpathy, founding member of OpenAI and former Director of AI at Tesla, recently shared this as his primary daily workflow with LLMs.

Techzip note: Auto Mode ships today in research preview, meaning agents will handle more tasks without stopping to ask. The new bottleneck is not how fast the agent runs. It is how fast you brief it. Voice input is the most direct fix.

> PRESENTED BY WISPR FLOW

Your AI tools are only as good as your prompts.

Most people type short, lazy prompts because writing detailed ones takes forever. The result? Generic outputs.

Wispr Flow lets you speak your prompts instead of typing them. Talk through your thinking naturally - include context, constraints, examples - and Flow gives you clean text ready to paste. No filler words. No cleanup.

Works inside ChatGPT, Claude, Cursor, Windsurf, and every other AI tool you use. System-level integration means zero setup.

Millions of users worldwide. Teams at OpenAI, Vercel, and Clay use Flow daily. Now available on Mac, Windows, iPhone, and Android - free and unlimited on Android during launch.

> WORTH READING

Analysis & Thesis

A physics professor supervised Claude through the full calculation process for a high-energy particle physics paper without writing or running a single line himself. The result: a complete, publishable paper in two weeks instead of a year. The more important finding is what the professor still had to provide: Claude could produce the work, but it could not evaluate whether the work was correct. The professor could.

Why it made the cut: this is the most concrete available example of AI as a research partner, verified by an actual paper, not a demo.

A study across 1,372 participants and 9,593 trials found that regular AI use does not support deliberate reasoning. It replaces it. After a few weeks, participants stopped genuinely evaluating AI output. The author calls this "cognitive surrender" and proposes a three-layer framework for retaining independent judgment when working with AI.

Why it made the cut: if review is theater, then every human-in-the-loop process currently in place has a larger problem than most teams have acknowledged.

Found this useful?

👉 Forward it to someone trying to keep up with AI.

👉 Read online: techzip.beehiiv.com

Techzip Newsletter
| Zipping what truly matters in the AI era.