AI Briefing — 2026-02-28

Top Stories

OpenAI raises $110 billion at $730 billion valuation

OpenAI closed a $110 billion funding round led by Amazon ($50 billion), NVIDIA ($30 billion), and SoftBank ($30 billion), marking one of the largest private funding rounds in history at a $730 billion pre-money valuation. The company also reported ChatGPT has reached 900 million weekly active users. The capital injection dramatically shifts the competitive landscape for AI infrastructure and compute.

Impact: 3 · Novelty: 2 · Target: 3 · Total: 8/9

Sources:


OpenAI signs defense contract with U.S. Department of Defense

OpenAI CEO Sam Altman announced a defense contract to deploy AI models within the Pentagon’s classified military network, with stated “technical safeguards” addressing concerns around military AI use. The deal marks a significant policy shift for OpenAI and contrasts with rival Anthropic’s refusal to allow Pentagon access for autonomous weapons and domestic surveillance applications.

Impact: 3 · Novelty: 2 · Target: 3 · Total: 8/9

Sources:


Pentagon moves to designate Anthropic as supply-chain risk

The U.S. Department of Defense is moving to designate Anthropic as a “supply-chain risk,” effectively cutting off the AI company from defense contracts over its refusal to allow unrestricted Pentagon access for autonomous weapons and domestic surveillance. Anthropic stated it received no direct communication from the Department or White House and would challenge such a designation in court as “legally unsound.” Employees at Google and OpenAI signed an open letter supporting Anthropic’s position.

Impact: 3 · Novelty: 2 · Target: 3 · Total: 8/9

Sources:


OpenAI fires employee for prediction market insider trading

OpenAI terminated an employee for insider trading on prediction markets including Polymarket and Kalshi, using non-public information about OpenAI product announcements (OpenAI o1 and GPT-4.1) to generate what the company described as “significant” profits. This appears to be the first known instance of an AI company employee fired for prediction market insider trading, establishing a precedent for how prediction markets intersect with corporate insider trading policies.

Impact: 2 · Novelty: 3 · Target: 3 · Total: 8/9

Sources:


Block lays off 40% of workforce, cites AI tools as driver

Block, the fintech group headed by Jack Dorsey, announced it will cut its workforce by nearly half, shedding more than 4,000 jobs from its 10,000-strong workforce. Dorsey explicitly attributed the job cuts to AI tools, stating “Intelligence tools have changed what it means to build and run a company” and predicting “a majority of companies” would reach similar conclusions within the next year. Shares soared more than 25% in after-hours trading following the announcement.

Impact: 3 · Novelty: 2 · Target: 3 · Total: 8/9

Sources:


Below the Fold

OpenAI and Amazon announce strategic AWS partnership

OpenAI and Amazon announced a strategic partnership bringing OpenAI’s Frontier platform to AWS, expanding AI infrastructure, custom models, and enterprise AI agents. OpenAI also introduced the Stateful Runtime Environment for Agents in Amazon Bedrock for persistent orchestration and secure execution of multi-step AI workflows.

Impact: 2 · Novelty: 1 · Target: 3 · Total: 6/9

Sources:


Suno reaches 2 million paid subscribers, $300M ARR

AI music generator Suno has reached 2 million paid subscribers and $300 million in annual recurring revenue, demonstrating a viable commercial business model for generative AI in creative applications.

Impact: 2 · Novelty: 1 · Target: 3 · Total: 6/9

Sources:


Perplexity launches “Computer” unifying multiple AI models

Perplexity launched “Perplexity Computer,” described as a system that “unifies every current AI capability into a single system,” betting that users need access to multiple AI models through a consolidated interface rather than single-model tools.

Impact: 2 · Novelty: 2 · Target: 2 · Total: 6/9

Sources:


Security analysis finds fundamental vulnerabilities in AI agent architectures

A security analysis demonstrates fundamental vulnerabilities in how AI agent systems handle tools, file access, and external commands, arguing that current agent architectures have security models that cannot be safely sandboxed.

Impact: 2 · Novelty: 2 · Target: 2 · Total: 6/9

Sources:


Optimization reduces Claude Code MCP output by 98%

A technical optimization for Anthropic’s Claude Code reduced Model Context Protocol (MCP) output token usage by 98%, dramatically improving context window efficiency for the AI coding assistant.

Impact: 1 · Novelty: 2 · Target: 3 · Total: 6/9

Sources:


Unsloth releases Dynamic 2.0 GGUF format for local LLM inference

Unsloth released Dynamic 2.0 GGUF format, an advancement in quantized model format for running large language models locally with more efficient model loading and execution.

Impact: 1 · Novelty: 2 · Target: 2 · Total: 5/9

Sources:


Google Gemini CLI reverses account bans for “antigravity” content

The Google Gemini CLI team reinstated accounts that were banned for “antigravity” content, an issue stemming from automated moderation incorrectly flagging certain scientific topics.

Impact: 1 · Novelty: 1 · Target: 2 · Total: 4/9

Sources:


OpenAI updates mental health safety features and policies

OpenAI shared updates on its mental health safety work, including parental controls, trusted contacts, improved distress detection, and recent litigation developments.

Impact: 1 · Novelty: 1 · Target: 2 · Total: 4/9

Sources:


Credibility Flags

No uncertain credibility stories in this briefing.

Run Metadata