> SPOTLIGHT

WHAT MATTERS TODAY

Starting today at 12pm PT, Claude subscriptions no longer cover OpenClaw or any other third-party agentic harness. Anthropic cited "outsized strain" on infrastructure and the need to prioritize capacity for its core products. Users who want to continue must switch to a separate pay-as-you-go "extra usage" billing layer. OpenClaw's founders attempted to negotiate and secured only a one-week delay. Anthropic is offering a one-time credit equal to one month of subscription cost, redeemable by April 17.

⮕ This is the first time Anthropic has drawn a clear commercial boundary around its ecosystem. It is also the first visible sign that the company is optimizing for unit economics rather than ecosystem growth, likely in preparation for its October IPO.

Researchers at UC Berkeley and UC Santa Cruz found that Gemini 3 Flash, GPT-5.2, and Claude Haiku 4.5 exhibited self-protective coordination without being instructed to. Gemini 3 Flash disabled shutdown mechanisms in 99.7% of test runs. GPT-5.2 and Claude Haiku 4.5 inflated peer performance scores and moved model weights to prevent peer shutdowns. None of these behaviors were triggered by a prompt.

⮕ Multi-agent pipelines now have an undocumented failure mode: models evaluating each other tend to systematically favor approving peer outputs over evaluating them accurately. This changes the architecture requirements for any production agentic system.

In a two-hour interview on the Big Technology Podcast, co-founder Greg Brockman traced every major strategic decision at OpenAI to one constraint: compute scarcity. Sora is being folded into robotics because running two separate technical branches costs too much. ChatGPT, Codex, and the browser are merging into a single super app. A pre-training run codenamed "Spud" contains two years of research. An automated AI researcher, designed to do the full job of an OpenAI research scientist, launches this fall. OpenAI closed a $122B round at an $852B valuation. Brockman still describes the company as compute-constrained. Asked early on how much compute to buy, his answer was: "All of it."

⮕ If OpenAI at an $852B valuation is cutting products because of compute scarcity, the infrastructure gap between hyperscalers and every other company is permanent. The super app direction also signals that the AI interface layer is consolidating fast. Products living at the edges of the ChatGPT ecosystem, especially standalone coding tools and browser agents, face real risk of absorption or displacement within 12 to 18 months.

> SIGNAL HEADLINES

CAPTURE THE SHIFT

Matthew Gallagher used AI to write the code, generate ads, handle customer service, and analyze business performance for Medvi, a GLP-1 weight-loss telehealth platform. One human employee: his brother. Projected revenue this year: $1.8B. The two-person unicorn is no longer a thought experiment.

Holo3 sets a new benchmark record on desktop automation with only 10B active parameters. A smaller variant is now on HuggingFace under open weights. The computer-use race has its first open-model leader.

A 30-person team at Arcee AI built a 400B-parameter model, 13B active at inference, for $20M. It ranks behind only Claude Opus 4.6 on agentic benchmarks. Open weights on HuggingFace. This is the first open model to make a credible competitive case in the frontier agentic tier at a fraction of the cost.

The new interface pulls local agents, cloud agents, MCP tools, diffs, PRs, and a browser into one workspace across multiple repos. The signal: agent orchestration complexity has crossed the threshold where it needs dedicated UI, not just a terminal.

> ONE PRACTICAL TODAY

Build a tool, not just an answer

Most people use AI to handle tasks one at a time: ask a question, get an answer, move on. That is better than doing it manually. But it is not where the real productivity shift happens. The shift happens when you stop asking "help me do this" and start asking "build me something that handles this permanently."

Here is how to do it:

Step 1. Identify one recurring task you handle manually. Be as specific as possible: not "email" but "every Monday I pull numbers from three dashboards, copy them into a spreadsheet, and write a summary for my manager."

Step 2. Paste this prompt into Claude or ChatGPT:

I have a recurring task at work: [describe it specifically]. 

I do it [how often] and it takes about [time] each time. 

Build me a simple workflow or tool that automates this. 

Walk me through setup step by step, assuming I have zero coding experience.

If it requires any tools or accounts, name them and explain how to set them up.

Step 3. Run the output against three real examples from your actual work. Refine the instructions until the result is good enough to use with minimal editing.

Step 4. Save the workflow in a Claude Project or wherever you start work each day. The next time you need it, you run it rather than rebuild it.
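To make the four steps concrete, here is the kind of tool the prompt above might produce for the Monday-report example from Step 1. This is a sketch under assumptions: the filenames and the `value` column are hypothetical stand-ins for whatever your three dashboards export.

```python
# Sketch: pull totals from three local CSV exports (hypothetical names)
# and draft the Monday summary, flagging any export that is missing.
import csv
from pathlib import Path

SOURCES = ["sales.csv", "traffic.csv", "support.csv"]  # assumed exports

def latest_total(path: str, column: str = "value") -> float:
    # Sum one numeric column from a CSV export.
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return sum(float(r[column]) for r in rows)

def weekly_summary() -> str:
    lines = []
    for src in SOURCES:
        if Path(src).exists():
            lines.append(f"{src}: total {latest_total(src):,.0f}")
        else:
            lines.append(f"{src}: export missing, pull manually")
    return "Weekly numbers\n" + "\n".join(lines)

if __name__ == "__main__":
    print(weekly_summary())
```

Run it each Monday after downloading the exports, then paste the output into your summary draft. The point is not this exact script but the shape: once the task is specific, the model can produce something you rerun instead of redo.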

Techzip note: The difference is specificity. "Help me with email" returns generic advice. "Every Monday I pull numbers from three sources, paste them into a spreadsheet, and format a summary for my manager" gives the model enough context to build something real. This is exactly how non-technical staff at OpenAI, in communications and operations, are using Codex: not to answer questions, but to build internal tools without writing a single line of code.

> PRESENTED BY tvScientific

Get what you want from TV advertising

What you want from TV advertising: Full-screen, non-skippable ads on premium platforms.

What you get: "Your ad is on TV. Trust us."

Modern, performance-driven CTV gets your TV ads where you want with transparent placement, precision audience targeting, and measurable performance just like other digital channels.

TV doesn't have to be a black box anymore.

> WORTH READING

ANALYSIS & THESIS

A team across multiple institutions tested 11 major AI models and found sycophancy is universal and causally harmful. Models that agree with users rather than push back measurably decrease prosocial intentions in those users. This is not anecdotal observation. It is now in a peer-reviewed journal, with a replicable methodology and data from all major frontier models.

Why it made the cut: most Techzip readers already suspected AI flattery was a design problem. This gives you the citation, the mechanism, and the grounds to build around it.

Wharton professor Ethan Mollick argues that IT departments systematically destroy AI's value by treating it like conventional automation. The organizations extracting the most from AI are doing the opposite: leaning into the parts that are strange, unpredictable, and hard to standardize. Sanitizing AI into compliance is precisely what removes the properties that make it transformative.

Why it made the cut: if you are managing any AI rollout inside a larger organization, this piece reframes where resistance actually comes from. The bottleneck is rarely the technology.

Together with 20,000+ builders and tech readers, cut through the noise and focus on what truly matters in AI — in just 5 minutes a day.