> WHAT MATTERS

TODAY'S 3 MOST IMPORTANT

According to Bloomberg, Apple will allow Siri to route certain requests to competing AI assistants, including ChatGPT, Claude, and Gemini, as part of its iOS 27 overhaul. This is the most significant concession in Siri's history: Apple is acknowledging it cannot win the AI race alone and is choosing distribution over capability ownership. The change is expected to be announced at WWDC or earlier.

Apple just turned more than 1 billion iPhones into a potential distribution channel for every AI assistant on the market. This is the largest platform shift since the App Store launched. If you are building an AI product that could surface through voice, the time to prepare integrations is now, not after Apple makes the official announcement.

OpenAI's advertising pilot surpassed $100M in annualized revenue in under two months, working with more than 600 advertisers at a $60 CPM with Criteo as its ad-tech partner. Combined with Sora's shutdown, the cancellation of Adult Mode, and the hire of former Meta VP Dave Dugan to lead global ad sales, the strategic picture is clear: OpenAI is building three revenue legs (subscriptions, enterprise APIs, and advertising) ahead of its IPO. Everything that does not generate sufficient compute ROI is being cut.

AI-native advertising is scaling faster than most analysts projected, and OpenAI is no longer running an experiment. It is building a real business unit. Every AI product competing with ChatGPT in the consumer space now faces a rival with an ad budget; every brand currently buying traditional digital ads will soon receive a pitch from OpenAI.

In an interview with journalist Alex Kantrowitz, three-term Senator Mark Warner said directly: "I want to be more optimistic, but I am terrified." Companies have been reporting to him directly about cuts to entry-level hiring: one law firm has paused all new associate intake, another cut its back office from 23 people to 3. Graduate unemployment currently sits at 9%. Warner's projection: 30%.

This is the first time a senior US senator has attached a specific number to near-term AI displacement in a public record. That legitimizes policy pressure on how AI is deployed in professional services in a way no academic paper can. If you are building tools that replace knowledge work, the window before regulation catches up is narrowing faster than your roadmap. If you are a knowledge worker, the entry-level market is shifting this year, not next.

> SIGNAL HEADLINES

CAPTURE THE SHIFT

US District Judge Rita Lin officially ordered the Department of Defense to rescind all restrictions on Anthropic, citing "illegal First Amendment retaliation." This is the first time a US court has granted legal protection to an AI company for refusing to build autonomous weapons, a precedent that matters for every lab with a safety policy.

A paper from Stony Brook University and Columbia Law School shows that fine-tuning reactivates verbatim recall of copyrighted content that safety training had successfully blocked, with a similarity coefficient of r ≥ 0.90 across 30 different authors. OpenAI previously testified in court that this was "impossible."

Developers can now package workflows, skills, and MCP servers into installable Codex bundles and share them across teams. Box launched a plugin on day one to process earnings call documents and extract structured data at scale. Codex is becoming a platform, not just a tool.

> ONE PRACTICAL TODAY

Build a voice agent that books appointments with gpt-realtime-1.5

Problem: Many teams are paying $15-25 per hour for someone to do one thing: collect information from callers and write it into a calendar. The task is repetitive, predictable, and requires no human judgment. But it requires voice.

OpenAI published a demo this week: a voice agent built for a Singapore health clinic using gpt-realtime-1.5. The agent speaks naturally with patients, asks the right questions, and books appointments in real time without a human handoff. Full code and documentation are at developers.openai.com/realtime.

Here is how to replicate it:

  • Step 1: Go to developers.openai.com/realtime, get an API key, and read the quickstart.

  • Step 2: Write a system prompt with three components: the agent's role ("You are a scheduling assistant for [business name]"), the list of information to collect (name, reason for visit, preferred time slot), and the expected output format (JSON to write to your calendar API).

  • Step 3: Connect to any calendar API. Google Calendar and Calendly both support webhooks. The agent calls the function as soon as it has collected complete information.

  • Step 4: Test with your own microphone before deploying.
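Steps 2 and 3 can be sketched in Python. Everything below is illustrative, not OpenAI's demo code: the business name, the field names, the `book_appointment` tool schema, and the webhook call are all assumptions, and the exact session configuration for gpt-realtime-1.5 lives in the quickstart at developers.openai.com/realtime.

```python
# Step 2: a system prompt with the three components described above --
# role, information to collect, and expected output format.
BUSINESS_NAME = "Acme Dental"  # hypothetical; replace with your own

SYSTEM_PROMPT = f"""You are a scheduling assistant for {BUSINESS_NAME}.
Collect three pieces of information from the caller:
1. Full name
2. Reason for visit
3. Preferred time slot
Once you have all three, call the book_appointment tool with them as JSON.
Do not guess missing details; ask a follow-up question instead."""

# Step 3: a function-calling schema the model can invoke once the
# information is complete. This JSON-schema shape is the common
# convention for tool definitions; check the Realtime docs for the
# exact session field it belongs in.
BOOK_APPOINTMENT_TOOL = {
    "type": "function",
    "name": "book_appointment",
    "description": "Write a confirmed appointment to the calendar.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "reason": {"type": "string"},
            "time_slot": {"type": "string", "description": "ISO 8601 start"},
        },
        "required": ["name", "reason", "time_slot"],
    },
}

def handle_book_appointment(args: dict) -> dict:
    """Placeholder calendar write: swap in a Google Calendar insert or
    a Calendly webhook POST here."""
    missing = [k for k in ("name", "reason", "time_slot") if not args.get(k)]
    if missing:
        # Let the model ask the caller for whatever is still missing.
        return {"status": "error", "missing": missing}
    # e.g. requests.post(CALENDAR_WEBHOOK_URL, json=args)
    return {"status": "booked", **args}
```

Wire `handle_book_appointment` to the tool-call events coming back over the Realtime session, and the agent books as soon as it has a complete record.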

Techzip note: The use case is not limited to clinics. Any business with recurring appointments (salons, consulting calls, SaaS demos, legal intake) can fork this template directly. If the agent handles 3 calls per hour instead of a receptionist, the AI cost is roughly 90% lower than labor, and it never double-books.
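The "roughly 90% lower" figure is a back-of-envelope estimate. A quick sketch with assumed numbers (a $20/hour receptionist from the $15-25 range above, a 5-minute call, and an assumed blended realtime-audio cost per minute; none of these are quoted prices):

```python
# Illustrative cost comparison -- all inputs are assumptions.
RECEPTIONIST_HOURLY = 20.0   # midpoint of the $15-25/hour range
CALLS_PER_HOUR = 3
AVG_CALL_MINUTES = 5
AI_COST_PER_MINUTE = 0.10    # assumed blended realtime audio rate

human_cost_per_call = RECEPTIONIST_HOURLY / CALLS_PER_HOUR  # ~ $6.67
ai_cost_per_call = AVG_CALL_MINUTES * AI_COST_PER_MINUTE    # $0.50
savings = 1 - ai_cost_per_call / human_cost_per_call        # fraction saved

print(f"AI handles the call for about {savings:.0%} less")
```

At these assumptions the saving clears 90%; the conclusion is sensitive mainly to call volume, since the human cost is fixed per hour while the AI cost scales per minute of audio.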

> PRESENTED BY

The Future of AI in Marketing. Your Shortcut to Smarter, Faster Marketing.

Unlock a focused set of AI strategies built to streamline your work and maximize impact. This guide delivers the practical tactics and tools marketers need to start seeing results right away:

  • 7 high-impact AI strategies to accelerate your marketing performance

  • Practical use cases for content creation, lead gen, and personalization

  • Expert insights into how top marketers are using AI today

  • A framework to evaluate and implement AI tools efficiently

Stay ahead of the curve with these top strategies AI helped develop for marketers, built for real-world results.

> WORTH READING

Analysis & Thesis

MIT Technology Review analyzes the strategic divergence between OpenAI and Anthropic on military AI: OpenAI chose soft legal boundaries, Anthropic chose moral refusals, and both choices produced opposite outcomes this week. The piece asks whether a safety-first positioning is still a competitive advantage or has quietly become a liability.

Why it made the cut: the clearest framework available for understanding why the two leading AI labs ended up in opposite positions on the same policy question in the same week.

Researchers prove that fine-tuning, even on unrelated data, reactivates verbatim recall of copyrighted content that safety training had blocked, with a similarity coefficient of r ≥ 0.90 across 30 authors. Safety guardrails are not permanent. They are mutable under fine-tuning.

Why it made the cut: every team building on top of a fine-tuned model needs to understand that alignment is not a fixed property. This paper changes how you should assess deployment risk in any regulated environment.

Analyst Ben Thompson argues that the current framing of AI is stuck between two useless positions: hype ("AI will do everything") and dismissal ("it is just autocomplete"). He proposes a third framework built on agency: the right question is not "can AI do this?" but "who has agency in this system, and how does AI shift that?" The piece is the clearest available tool for evaluating any specific AI deployment without falling into either trap.

Why it made the cut: essential reading for anyone pitching or justifying AI investment who needs a better frame than the two dominant extremes.

Found this useful?

👉 Forward it to someone trying to keep up with AI.

👉 Read online: techzip.beehiiv.com

Techzip Newsletter | Zipping what truly matters in the AI era.