> SPOTLIGHT

WHAT MATTERS TODAY

A misconfigured .map file in Claude Code's npm package exposed 512,000 lines of TypeScript across roughly 1,900 files. Anthropic has acknowledged the leak and removed the file.

What was inside: a 3-layer memory architecture (a MEMORY.md index pointing to topic files, with transcripts stored grep-only), a background process called autoDream that consolidates memory between sessions, and 44 feature flags with codenames including KAIROS (a persistent daily-log assistant mode), COORDINATOR_MODE (multi-agent orchestration), and PROACTIVE (autonomous execution).

This is the most detailed public look at how a top-tier AI coding agent actually works under the hood. The memory system is not proprietary magic; it is a structured file convention anyone can replicate today.

OpenAI just closed the largest private funding round in tech history: $122B at an $852B valuation. Amazon leads with $50B; Nvidia and SoftBank each put in $30B. The company now generates $2B per month in revenue, with enterprise accounting for more than 40% of that.

The bigger announcement: OpenAI is merging ChatGPT, Codex, and its agents platform into a single unified product. The pitch is one superapp that handles everything from conversation to coding to autonomous task execution.

The funding round essentially ends any near-term question about OpenAI's survival. At $2B/month in revenue and $122B in fresh capital, they have runway to outspend every competitor on model training, infrastructure, and distribution.

A new analysis from Tanay Jaipuria maps the bifurcation happening across the AI application layer. Tools like Cursor and Intercom are moving down the stack, building proprietary models trained on domain-specific data. Tools like Crosby AI and WithCoverage are moving up the stack, toward fully managed services where AI handles entire workflows end-to-end.

The companies staying in the middle, shipping generic AI features on top of foundation models with no vertical differentiation, are facing pressure from both sides.

Every AI product company is making a strategic bet right now, whether deliberately or by default. Neutral middleware is not a stable position. For your work: When evaluating AI vendors or building internal tools, ask where they sit on this map. Are they building toward proprietary data advantages or toward full workflow ownership? Generic wrappers around foundation models are getting competed out from both directions.

> SIGNAL HEADLINES

CAPTURE THE SHIFT

Oracle is cutting 20,000 to 30,000 jobs as it redirects budget from labor to AI infrastructure. The layoffs span support, IT, and back-office functions.

OpenAI's Sora is burning $1M per day while usage has collapsed since its December launch. Internal documents show the product never hit the retention rates the team projected.

Microsoft added Copilot Critique and Copilot Council to its 365 suite. Critique reviews your work for logical gaps and blind spots; Council assembles a panel of AI advisors with different perspectives on your problem.

70% of Americans fear AI will cause widespread job losses in the next decade, per a new Quinnipiac poll. 52% say they personally worry about losing their own job to AI.

Mistral raised $830M in debt financing to fund a new data center cluster in Paris. The round is structured as a credit facility rather than equity, preserving Mistral's cap table ahead of a likely IPO.

> ONE PRACTICAL TODAY

Steal Claude Code's memory architecture for any AI project

The biggest takeaway from the leak: Claude Code's long-term memory is not an AI trick. It's just a folder structure. You can set it up in 10 minutes and use it with any AI tool, whether that's Claude, ChatGPT, Cursor, or something else entirely.

Step 1: Create a MEMORY.md file at the root of your project folder.

This is the index. It does not contain actual information — it only contains pointers. One line per topic, each linking to a separate file:

- [Project overview](memory/overview.md) — goals, stack, key decisions
- [Auth system](memory/auth.md) — JWT flow, refresh token logic, known bugs
- [Database schema](memory/db.md) — table structure, relationships, migration notes

Keep this file short. Claude Code's version caps index entries at around 150 characters each so the whole index fits in a single context read.
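If you want to automate Step 1, here is a minimal Python sketch of the scaffold. The file names (`MEMORY.md`, `memory/`) follow the convention above; the topic list and the `scaffold` helper are illustrative, so adjust them to your own project:

```python
from pathlib import Path

# Example topics mirroring the index above; replace with your own.
TOPICS = {
    "overview": ("Project overview", "goals, stack, key decisions"),
    "auth": ("Auth system", "JWT flow, refresh token logic, known bugs"),
    "db": ("Database schema", "table structure, relationships, migration notes"),
}

def scaffold(root: str = ".") -> Path:
    """Create MEMORY.md plus one empty memory/<topic>.md file per topic."""
    base = Path(root)
    mem_dir = base / "memory"
    mem_dir.mkdir(parents=True, exist_ok=True)
    lines = []
    for name, (title, desc) in TOPICS.items():
        (mem_dir / f"{name}.md").touch()
        # One short pointer line per topic, so the whole index
        # fits in a single context read.
        lines.append(f"- [{title}](memory/{name}.md) — {desc}")
    index = base / "MEMORY.md"
    index.write_text("\n".join(lines) + "\n")
    return index
```

Run it once at the project root; from then on the index and topic files are just plain markdown you edit by hand or let your AI tool update.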

Step 2: Write the actual content inside the linked topic files.

Each memory/*.md file holds the detailed notes for that topic. Think of it as a living document — you update it as your project evolves. Don't put everything in one file. Separate concerns the same way you would separate code modules.

Step 3: Keep a SESSION_LOG.md during each work session.

As you work, jot down decisions you made and why: "Decided to use Postgres over SQLite because of concurrent write requirements." "Dropped the caching layer — latency wasn't the bottleneck." These notes are rough and disposable — their only job is to capture what happened before you close the tab.
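You can keep this log by hand, or wire it into your workflow. A tiny sketch of a helper that appends timestamped notes to `SESSION_LOG.md` (the filename follows the convention above; the `log_decision` function is hypothetical):

```python
from datetime import datetime
from pathlib import Path

def log_decision(note: str, log_file: str = "SESSION_LOG.md") -> None:
    """Append a timestamped decision note to the session log."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with Path(log_file).open("a") as f:
        f.write(f"- {stamp}: {note}\n")

# Usage:
# log_decision("Decided to use Postgres over SQLite because of concurrent writes.")
```

The notes stay rough on purpose; the consolidation step below is where they get cleaned up.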

Step 4: At the end of each session, run a consolidation prompt.

Open a new chat, paste in your recent SESSION_LOG.md, and send this:

Read these session notes and update the relevant files in the memory/ folder. Merge any new decisions into the appropriate topic file. Keep each file concise.

This is exactly what Claude Code's autoDream process does automatically in the background. You're just doing it manually, which takes about 2 minutes.
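If you want to skip the copy-paste, a small sketch that assembles the consolidation prompt from your log file, ready to pipe into whatever chat tool you use (the instruction text comes from Step 4; the `build_consolidation_prompt` helper is illustrative, not part of Claude Code):

```python
from pathlib import Path

CONSOLIDATION_INSTRUCTION = (
    "Read these session notes and update the relevant files in the memory/ "
    "folder. Merge any new decisions into the appropriate topic file. "
    "Keep each file concise."
)

def build_consolidation_prompt(log_file: str = "SESSION_LOG.md") -> str:
    """Combine the Step 4 instruction with the raw session notes."""
    notes = Path(log_file).read_text()
    return f"{CONSOLIDATION_INSTRUCTION}\n\n---\n\n{notes}"
```

For example, `print(build_consolidation_prompt())` at the end of a session gives you the full prompt to paste into a fresh chat.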

The result: next time you open the project, you start by telling your AI: "Read MEMORY.md and the files it links to." It will have full context on your architecture, your past decisions, and your known issues — without you re-explaining anything.

> PRESENTED BY THE CURRENT

The Free Tech Newsletter That Readers NEVER Skip

Your uncle forwards you sketchy tech articles. Your coworker won't stop talking about AI taking everyone's jobs. And you're stuck Googling the same five questions every week.

The Current is a daily tech newsletter written by Kim Komando that helps you stay up to date on AI, tech, and trends in about 5 minutes a day.

Each morning she breaks down what’s happening in tech so you can quickly understand what matters without digging through a bunch of different questionable sources.

In each issue you’ll find things like:

  • Important AI updates

  • Useful tech tips

  • How to avoid the latest scams

It’s a simple read designed to help you eliminate the hours you probably spend Googling the same five tech questions.

> WORTH READING

ANALYSIS & THESIS

Ben Thompson's deep dive into Apple's 50 years of vertical integration is one of the best frameworks for understanding why Apple keeps winning despite moving slowly. The argument has direct implications for how AI companies are thinking about platform lock-in right now.

Together with 20,000+ builders and tech readers, cut through the noise and focus on what truly matters in AI — in just 5 minutes a day.