Codes of Culture | Issue 95
From $852B valuations to 500,000 lines of leaked code
Welcome back to Codes of Culture. I’m Ashumi Sanghvi.
I have been thinking about the difference between a funding round and a land grab. Yesterday, with OpenAI closing a $122 billion round at an $852 billion valuation, it feels like a land grab. The numbers are unreal! The investors are not backing a product. They are staking a claim on the layer on which everything else will run. Once you see that frame, it is hard to unsee it, and that’s what I kept returning to this week. Last week I also shared a quick link news recap about OpenAI sunsetting Sora, but I wanted to go deeper into why it matters in today’s issue.
In other news (more exciting news!), I will be at Watches and Wonders in Geneva in a couple of weeks as a guest of Hermès (still in shock!), and I look forward to sharing more about it soon. Plans are also in full swing for the F1 Miami race weekend, where the official Miami fan fest has been announced, along with Cannes Lions programming for many of Future+’s partners and clients. If you are looking to have a presence at either, please reply to this email so we can share more information.
If you are new here, or want to catch up on the best of Codes of Culture, we’d recommend you start here and remember to subscribe for full access to our news, insights, podcast and global events.
📖In this issue:
OpenAI closed Silicon Valley’s largest-ever funding round
Starcloud raised $170 million to build data centres in space
Bluesky built an AI app that lets users write their own algorithm
OpenAI shut down Sora six months after launch
Anthropic accidentally leaked the source code for Claude Code
1. AI, CAPITAL, AND CORPORATE STRATEGY
OpenAI closed Silicon Valley’s largest-ever funding round.
What’s happening: OpenAI closed a $122 billion funding round at an $852 billion post-money valuation, the largest in Silicon Valley history. The round was anchored by Amazon, Nvidia, and SoftBank, with continued participation from Microsoft and co-leadership from Andreessen Horowitz, D.E. Shaw Ventures, MGX, TPG, and T. Rowe Price. For the first time, OpenAI extended participation to individual investors through bank channels. The company is generating $2 billion in monthly revenue and consolidating its product stack into a unified AI system that combines ChatGPT, Codex, browsing, and agentic workflows. A potential IPO is being prepared for as early as late 2026.
TLDR:
$122 billion at an $852 billion valuation. OpenAI has broken its own funding record twice in twelve months.
The structural signal is not the size. Amazon, Nvidia, and SoftBank supply the compute and distribution that OpenAI’s products run on. They are now also shareholders. The line between supplier and backer has gone.
Amazon’s $50 billion includes $35 billion conditional on AGI or an IPO. The capital has terms attached that tie investment to product outcomes.
The unified superapp signals consolidation of the interface. One system for ChatGPT, Codex, browsing, and agentic workflows means fewer entry points and more lock-in.
Retail investor access through bank channels is a pre-IPO positioning move. OpenAI is broadening its shareholder base before going public.
Why it matters: For the capital allocators and infrastructure-adjacent operators in Future+’s network, the important signal is not the size of the round but the collapse of the boundary between supplier, distributor, and shareholder. Amazon, Nvidia, and SoftBank are not betting on a product. They are underwriting the physical layer that frontier AI runs on, and they now hold equity in the company that builds those products. AI is no longer being financed as a product category. It is being financed as infrastructure. Competitive position is increasingly determined at that layer, and everything built above it is downstream.
2. INFRASTRUCTURE AND FRONTIER CAPITAL
Starcloud raised $170 million to build data centres in space.
What’s happening: Starcloud, a Redmond-based space compute company founded in 2024, closed a $170 million Series A at a $1.1 billion valuation, led by Benchmark and EQT Ventures. The round makes it the fastest company in Y Combinator history to reach unicorn status, just 17 months after demo day. Total funding stands at $200 million. Starcloud launched its first satellite in November 2025, carrying an Nvidia H100 GPU, the first terrestrial-grade chip of that class to reach orbit, and subsequently trained an AI model in space. Starcloud-2, due later this year, will carry Nvidia’s Blackwell B200 and AWS server hardware.
TLDR:
The premise: terrestrial data centres are colliding with hard limits around power, land, water, and permitting. In low Earth orbit, solar power is abundant, cooling is passive, and grid dependency disappears.
Starcloud-1 proved the thesis at a technical level. The company now holds proprietary data on operating high-performance AI chips in orbit, knowledge that competitors are not yet able to replicate.
Cost-competitiveness depends on launch compression. CEO Philip Johnston estimates viability when launch costs reach roughly $500 per kilogram, tied to commercial Starship availability in 2028 or 2029.
Starcloud is not alone. SpaceX, Blue Origin, Google’s Project Suncatcher, and Aethero are all pursuing adjacent positions. The infrastructure race has moved beyond Earth.
Benchmark’s board seat is the signal beneath the valuation. This is conviction capital pricing a long-duration infrastructure bet, not current revenue.
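The launch-economics point above can be made concrete with a bit of arithmetic. The $500-per-kilogram figure is the viability threshold cited by Starcloud’s CEO; the payload mass and the present-day price point below are illustrative assumptions, not figures from the company.

```python
# Illustrative launch-cost arithmetic for orbital compute.
# Only the $500/kg threshold comes from the piece; the payload mass and
# the "today" price point are assumptions for comparison.

PAYLOAD_KG = 2_000  # hypothetical mass for a GPU-carrying satellite

def launch_cost(price_per_kg: float, payload_kg: float = PAYLOAD_KG) -> float:
    """Launch cost in dollars for a payload at a given $/kg price."""
    return price_per_kg * payload_kg

scenarios = {
    "today (assumed)": 3_000,    # rough $/kg to low Earth orbit, assumption
    "viability threshold": 500,  # figure cited in the piece
}

for label, price in scenarios.items():
    print(f"{label}: ${launch_cost(price):,.0f} to orbit {PAYLOAD_KG:,} kg")
```

At the assumed numbers, the threshold is a roughly sixfold compression in launch cost, which is why the bet is tied to commercial Starship availability rather than current hardware.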
Why it matters: For the sovereign wealth funds, infrastructure investors, and enterprise operators in our network building positions in the AI compute stack, Starcloud is a proxy for where the constraint is heading. Space computing will not replace terrestrial infrastructure in this decade. But the capital being deployed now is pricing the trajectory of launch economics and energy access over the next ten years. When Benchmark leads a round of this size into a company with one satellite and no commercial revenue, it is taking a structural view on where compute will live. The geography of AI infrastructure is becoming a strategic question, not just a technical one.
3. SOCIAL INFRASTRUCTURE AND OPEN PROTOCOLS
Bluesky built an AI app that lets users write their own algorithm.
What’s happening: At the ATmosphere developer conference in Vancouver, Bluesky unveiled Attie, a standalone AI app built on the AT Protocol and powered by Anthropic’s Claude. It lets users build personalised social feeds in plain language rather than code: describing what they want to see and letting the system construct the logic. Because the AT Protocol is an open data system, Attie draws on a user’s full activity across the Bluesky ecosystem rather than a single platform’s proprietary graph. The longer-term roadmap extends to user-built social applications. The launch coincides with Bluesky disclosing a further $100 million in funding, giving the 43.4 million-user platform more than three years of runway.
TLDR:
Attie is a standalone logic layer, not a feature inside the main app. That separation is the point: it positions recommendation as configurable infrastructure rather than a platform-controlled output.
The AT Protocol’s openness is the structural enabler. Feed logic can be built and ported across any app in the ecosystem without requiring any platform to release proprietary data.
Bluesky’s founder, Jay Graber, is explicit: dominant platforms use AI to increase time spent and harvest training data through systems users cannot inspect. Attie is designed as the inverse.
The roadmap extends to user-built social apps described in natural language. Schneider has compared the potential of the ATmosphere ecosystem to WordPress, a platform layer that enables independent construction.
$100 million in new funding gives Bluesky a runway to pursue this as a structural infrastructure bet rather than a monetisation experiment.
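Attie’s internals are not public, so the following is purely an illustrative sketch of the underlying idea: feed logic expressed as a small, portable predicate over posts, roughly what a plain-language request might compile down to. Every name and rule here is invented.

```python
# Illustrative sketch: user-defined feed logic as a portable predicate.
# Attie's actual implementation is not public; names and rules are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    likes: int = 0
    tags: list[str] = field(default_factory=list)

def build_feed_rule(include_tags: set[str], min_likes: int = 0):
    """Return a predicate roughly analogous to a plain-language request like
    'show me watchmaking posts with some traction'."""
    def rule(post: Post) -> bool:
        return bool(include_tags & set(post.tags)) and post.likes >= min_likes
    return rule

rule = build_feed_rule({"watches", "f1"}, min_likes=10)
timeline = [
    Post("a", "Geneva notes", likes=42, tags=["watches"]),
    Post("b", "unrelated", likes=100, tags=["food"]),
]
feed = [post for post in timeline if rule(post)]
```

The point of the sketch is portability: because the rule is just logic over open data, it is not bound to any one app’s proprietary graph, which is the structural property the AT Protocol enables.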
Why it matters: For the media platforms, cultural institutions, and brand operators in our network that have spent years building audiences inside closed recommendation systems, the strategic relevance here is not Attie as a consumer product. It is what happens to distribution logic as the open protocol model scales. The algorithm has functioned as a proprietary moat; it determines what gets seen, by whom, and in what context. If recommendation becomes configurable infrastructure that any operator can build on, that moat erodes. The contest over who controls distribution logic will define media, culture, and commerce in the next cycle.
4. AI OPERATIONS AND COMPETITIVE INTELLIGENCE
Anthropic accidentally leaked the source code for Claude Code.
What’s happening: Anthropic confirmed it had inadvertently published internal source code for Claude Code within a release package on the npm registry. A source map file pointed to a zip archive containing nearly 2,000 TypeScript files and more than 500,000 lines of code. The package was pulled within hours, but the code had already been widely mirrored, reaching over 84,000 GitHub stars before DMCA takedowns were issued. Anthropic attributed the exposure to a release packaging error. No customer data or credentials were involved. The incident came days after a separate accidental exposure of an internal blog post detailing an unreleased model known internally as Capybara.
TLDR:
What leaked was not the model weights but the agentic harness: the orchestration layer that instructs Claude Code how to use tools, manage memory, and govern its behaviour. That layer is where Claude Code’s commercial distinctiveness lives.
Claude Code had reached an annualised run-rate of $2.5 billion as of February, with enterprise accounting for 80% of revenue. The leak provides competitors with a detailed blueprint for one of the fastest-growing AI products on the market.
The exposed code revealed an autonomous background mode called KAIROS and a three-layer memory architecture built to prevent agent confusion in long-running sessions. These are hard-won engineering decisions now in the open.
This was the second accidental exposure in a week. The pattern points to process gaps where engineering velocity meets release infrastructure; not a structural security failure, but a material operational risk at scale.
Code that reached 84,000 stars before removal has already circulated beyond containment. The legal response was fast. The practical containment was not.
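The exposure reportedly began with a source map reference inside a published package. A hedged sketch of the mechanism: shipped JavaScript conventionally ends with a `//# sourceMappingURL=` comment pointing at an external asset, and a simple pre-publish scan can surface those references before they leave the building. Filenames and bundle content below are hypothetical; only the comment syntax is the standard source-map convention.

```python
# Minimal sketch: scan bundled JS for sourceMappingURL comments before publishing.
# The bundle content is invented; the `//# sourceMappingURL=...` comment is the
# standard convention that build tools emit at the end of a minified file.
import re

SOURCEMAP_RE = re.compile(r"//# sourceMappingURL=(\S+)")

def find_sourcemap_refs(bundle_text: str) -> list[str]:
    """Return every source-map URL referenced in a JS bundle."""
    return SOURCEMAP_RE.findall(bundle_text)

bundle = """\
console.log("cli entry");
//# sourceMappingURL=cli.js.map
"""

refs = find_sourcemap_refs(bundle)
# A pre-publish check could fail the release whenever refs point at assets
# that were never meant to ship alongside the package.
```

The broader lesson for release hygiene is that the check is cheap; it is the absence of the check at shipping velocity that creates the risk.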
Why it matters: For the enterprise AI buyers, frontier founders, and operators in our network who build on or compete with Anthropic’s tooling, this incident is worth reading as an operational signal rather than a security story. At the pace the leading AI companies are shipping, release hygiene is a material risk. What circulated was not just code but the architectural decisions that make Claude Code commercially distinctive; the orchestration logic that sits above the model and shapes how agents reason, remember, and act. Anthropic’s position remains strong, but the incident is a reminder that in AI, the infrastructure built around the model can be as strategically sensitive as the model itself.
5. AI STRATEGY AND PRODUCT
OpenAI shut down Sora six months after launch.
What’s happening: OpenAI discontinued Sora, the AI video generator it launched in September 2025, along with the Sora API. The product peaked at roughly one million users before active users fell below 500,000. Video generation at scale was reportedly costing around $1 million per day in infrastructure against limited revenue. OpenAI is reallocating that compute toward coding tools, enterprise AI, and agentic systems. The Disney partnership, a three-year licensing agreement covering more than 200 characters across Marvel, Pixar, Star Wars, and Disney properties, together with a planned $1 billion Disney investment, collapsed. Disney was informed less than an hour before the public announcement.
TLDR:
Six months from launch to discontinuation. Early App Store traction proved insufficient once retention and compute economics came into focus.
Video generation operates on a fundamentally different cost curve from text. The infrastructure burden is heavier, the consumer monetisation logic weaker, and the path to margin far less forgiving.
Disney’s $1 billion commitment was structured in stock warrants, not cash. The partnership carried public weight before it carried operational finality; a pattern worth noting for anyone tracking AI partnership announcements.
The compute freed from Sora is being redirected toward enterprise, coding agents, and agentic systems. That reallocation is the signal, not the shutdown.
Consumer AI products are now being held to product-market fit and margin discipline. The phase where demo quality and partnership announcements could sustain a product is ending.
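The compute squeeze described above can be made concrete with the figures in the piece: roughly $1 million a day in infrastructure against fewer than 500,000 active users implies at least $2 per user per day before any margin. The subscription comparison price is an assumption, not a reported figure.

```python
# Back-of-envelope unit economics using the figures cited above.
DAILY_COMPUTE_USD = 1_000_000   # reported infrastructure cost per day
ACTIVE_USERS = 500_000          # reported floor on active users

cost_per_user_per_day = DAILY_COMPUTE_USD / ACTIVE_USERS
cost_per_user_per_month = cost_per_user_per_day * 30

# A consumer AI subscription is assumed at ~$20/month purely for comparison.
ASSUMED_SUBSCRIPTION_USD = 20
shortfall = cost_per_user_per_month - ASSUMED_SUBSCRIPTION_USD

print(f"${cost_per_user_per_day:.2f}/user/day, "
      f"${cost_per_user_per_month:.0f}/user/month, "
      f"${shortfall:.0f}/user/month short of an assumed subscription")
```

Even at the peak of one million users the arithmetic only halves, which is why retention rather than launch traction is the binding constraint the section describes.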
Why it matters: For the founders, operators, and investors in our network evaluating AI product bets, Sora is not a cautionary tale about video. It is a data point about where the market’s tolerance has moved. The window in which novelty and launch momentum could carry a product through weak retention is closing. What replaces it is durability: whether the tool generates revenue at the compute cost it requires, whether users return, and whether the workflow it fits is real. OpenAI’s pivot toward enterprise and agentic systems is where the recurring revenue logic is strongest. The consumer AI layer is being filtered, not abandoned. The filters are getting sharper.