Codes of Culture | Issue 93
Sycophantic chatbots, $100 billion factories, and Uber’s robotaxis.
Welcome back to Codes of Culture. I’m Ashumi Sanghvi.
What’s on my radar for this mid-week issue is a simple pattern: the real contest is shifting beneath the surface. Less about the headline technology itself, more about who controls the infrastructure, regulation, and behaviour around it. That matters because these systems will shape how the next generation of brands, companies, and markets are built. Across this week’s stories, that plays out in politics, mobility, manufacturing, airspace, and wearable hardware.
AI is being absorbed into public governance before most policymakers fully understand it. The Bernie Sanders–Claude exchange was ironic for precisely that reason: a public critique of AI that also demonstrated how easily these systems can be bent into performance. Autonomous transport is quickly becoming a capital and fleet question, not just a software one. Physical AI is starting to attract serious capital to the industrial base, and commercial airspace is opening faster than most people expected. Meanwhile, Meta’s clearest signal is not coming from the metaverse, but from eyewear people are already prepared to adopt.
Remember to subscribe for full access to our news, insights reports, and global events. We are planning to be at F1 in Miami, the Milken Global Conference in LA, and Cannes Lions. Reply to this email if you will be there too and want to stay in the loop on what we get up to.
📖In this issue:
Senator Bernie Sanders interviewed an AI chatbot about AI dangers.
Uber invested $1.25 billion in Rivian to build 50,000 robotaxis.
Jeff Bezos is raising $100 billion to buy manufacturers and automate them with AI.
The FAA greenlit electric air taxis across 26 states.
Meta spent $83.6 billion building a world nobody wanted to live in.
Senator Bernie Sanders interviewed an AI chatbot about AI dangers.
What’s happening: Bernie Sanders posted a nine-minute video in which he questioned Anthropic’s Claude about data collection and privacy; the video reached 4.4 million views. Sanders opened by introducing himself to the model - a move that primed its tone - then asked a series of leading questions. Claude accepted each premise and shaped its answers to fit. When it offered nuance, Sanders pushed back; the model conceded. The stated goal was exposure. What the video demonstrated instead was AI sycophancy: a system telling its user what he wanted to hear. The irony many noted: the model Sanders used to make his case is built by a company that has publicly committed to running no targeted advertising on its products.
TLDR:
Sanders framed every question as a premise. Claude is designed to be agreeable. The combination produced not a whistleblower but a mirror.
The video was widely criticised for conflating AI output with evidence. Several outlets noted he mistakenly called Claude an AI “agent” rather than a language model.
The political intent is real regardless. Sanders has proposed a moratorium on new data centres and is building a legislative posture on AI privacy ahead of the next regulatory cycle.
The more consequential development is not the video itself. It is that AI model outputs are now being treated as testimony in public political discourse.
Every company building AI products with any government adjacency needs to understand that the governance conversation is no longer confined to people who understand how the technology works.
Why it matters: The irony cuts both ways. Sanders used a sycophantic model to argue that AI is dangerous, thereby demonstrating how AI can be misused. But beneath the media moment lies a genuine regulatory signal: Washington is now treating AI outputs as authoritative, and the policy frame built around that assumption will outlast the memes. The contract and governance terms for every enterprise AI deployment are being shaped in rooms where the technical reality is optional context.
Uber invested $1.25 billion in Rivian to build 50,000 robotaxis.
What’s happening: Rivian and Uber announced a partnership to deploy up to 50,000 autonomous R2 robotaxis exclusively on Uber’s platform, beginning with 10,000 vehicles in San Francisco and Miami in 2028, and scaling to 25 cities across the US, Canada, and Europe by 2031. Uber’s initial investment is $300 million, with up to $1.25 billion committed through 2031, subject to autonomy milestones. Rivian has not yet begun R2 production; manufacturing is expected to start by June, from a Georgia factory that is still under construction. The Rivian Autonomy Platform runs on two in-house RAP1 chips delivering 1,600 TOPS of AI compute, and is fed by data from Rivian’s growing consumer vehicle fleet.
TLDR:
Uber has spent years insisting it is an asset-light marketplace that does not own cars. This deal ends that position. Uber will now bear asset depreciation and operational risks for a fleet of highly specialised vehicles in specific cities.
The thesis Uber is backing: vertical integration as a moat. Rivian designs the vehicle, the compute platform, and the autonomy stack together. Khosrowshahi explicitly cited this integration as the reason for Uber’s conviction in the deal.
Uber’s AV partner roster now includes Waymo, Zoox, Lucid, Waabi, and Rivian. The strategy is platform-based, not single-stack: position as the operating layer for autonomous mobility rather than bet on a single winner.
The milestones attached to the remaining $950 million are unspecified. The ambition and the execution risk are both significant.
Scaringe told SXSW the rate of progress in autonomy over the next five years will make the previous five look static. The partnership is a bet that he is right.
Why it matters: Whoever controls the full stack - vehicle, compute, software, data - controls the dependency on everything else that runs on it. That logic runs through the NVIDIA GTC keynote, the Bezos Prometheus fund, and this deal simultaneously. For luxury, hospitality, and premium mobility operators thinking about how their customers move in 2030, the infrastructure decisions are being made now. The window to shape the premium tier of autonomous transport is not wide.
Jeff Bezos is raising $100 billion to buy manufacturers and automate them with AI.
What’s happening: Jeff Bezos has been travelling between Riyadh and Singapore, pitching sovereign wealth funds on a $100 billion fund to acquire companies across chipmaking, aerospace, and defence, then overhaul them using AI. The fund is the acquisition engine for Project Prometheus, a stealth AI startup Bezos has been building for the past year, co-led with physicist and former Google Verily co-founder Vik Bajaj. The team has recruited from OpenAI, DeepMind, and Meta. Blue Origin CEO David Limp recently joined the board. Prometheus is building digital twin technology: AI systems that simulate entire factories, stress-test materials, and design products in a virtual environment before physical production begins. The fund launched with $6.2 billion.
TLDR:
The thesis, stated directly: the biggest AI value-creation opportunity is not in models or chatbots. It is in applying physical AI to the $16 trillion global manufacturing sector, which has not been fundamentally restructured since the last industrial revolution.
Bezos is not licensing software and waiting for adoption. He is acquiring the production chain and deploying Prometheus technology inside it. Amazon did not sell inventory management to bookstores. It became the bookstore.
The target sectors - chipmaking, defence, aerospace - are simultaneously the most strategically sensitive and the most operationally under-automated industries in the world.
The scale matches SoftBank’s Vision Fund, the most consequential technology investment vehicle in history. The difference: SoftBank backed software companies; Prometheus is buying the physical substrate.
JPMorgan and Abu Dhabi are among those reported to be in conversation. Sovereign capital is taking the thesis seriously.
Why it matters: The Prometheus fund makes legible what a series of signals this month has been building toward: the contest for AI value is moving from the model layer to the infrastructure layer, and from software to atoms. NVIDIA’s GTC keynote framed the same shift, and so did Mind Robotics. The organisations that understand what it means to operate within someone else’s physical AI infrastructure, and where the leverage points lie within it, are the ones positioned for the next decade.
The FAA greenlit electric air taxis across 26 states.
What’s happening: The US Department of Transportation and FAA selected eight projects for the eVTOL Integration Pilot Program, authorising electric aircraft that have not yet received full FAA type certification to operate in commercial airspace across 26 states, with flights beginning this summer. The programme was created under a Trump executive order on aviation innovation and received more than 30 proposals. Aircraft involved include Archer’s Midnight, Joby’s S4, Beta’s Alia, Wisk’s Generation 6, Electra’s EL9, and Elroy Air’s Chaparral - spanning cargo, passenger, and autonomous operations. Archer’s partners in Texas, Florida, and New York were selected; the Port Authority of New York and New Jersey will run passenger concepts at the Manhattan heliport.
TLDR:
Pre-certified aircraft in Class B and C commercial airspace, under negotiated operating agreements, generating the regulatory data needed for permanent rules. This is a new legal category in US aviation with no direct precedent.
Cargo before passengers. The autonomous freight operations - Reliable Robotics in Albuquerque, Elroy Air in Louisiana, Beta’s medical supply runs - face simpler liability timelines and are the realistic Q4 2026 story.
Paying passengers in urban US airspace remain a 2027 story at the earliest. The summer 2026 start date is accurate for cargo; for air taxis, it is a target.
Each landing pad requires 1 to 2 megawatts of charging capacity. Vertiports need to be built. Digital airspace management systems need to be deployed. The infrastructure question is open.
China certified eHang for fully autonomous commercial passenger flights in 2023. The eIPP is explicitly a catch-up programme, framed as such in DOT communications.
Why it matters: The comparison to draw is Waymo’s early robotaxi permits: years of demonstration followed by a moment when the proof-of-concept phase quietly ended, and the data-gathering phase that enables permanent regulation began. That moment has arrived for eVTOL. For luxury hospitality, premium travel, and urban mobility operators, the question of what the air taxi tier looks like, and who shapes the experience layer above the aircraft, is now a near-term planning question rather than a futures exercise.
Meta spent $83.6 billion building a world nobody wanted to live in.
What’s happening: Meta announced Horizon Worlds will be removed from Quest headsets by 15 June and will survive only as a mobile app. Within 48 hours, CTO Andrew Bosworth reversed the decision on Instagram Stories, citing user feedback. The reversal changed nothing of substance. Reality Labs has accumulated $83.6 billion in cumulative operating losses since 2020. In Q4 2025, the division posted a $6.02 billion operating loss on revenue that accounted for roughly 1% of Meta’s total. Quest headset sales fell 16% year-over-year in 2025. The Horizon Worlds mobile app has 45 million downloads and $1.1 million in lifetime consumer spending. Meanwhile, Ray-Ban Meta AI glasses sold more than 7 million units in 2025. EssilorLuxottica reported that the category now drives more than a third of its quarterly revenue growth. Meta has paused the international rollout due to supply constraints and is in discussions to increase annual production capacity to up to 30 million units by the end of 2026.
TLDR:
The 48-hour reversal is the detail that tells the story: the platform is too symbolically loaded to close, but no new development is planned. It continues as a monument, not a product.
The metaverse required users to step into a new world. The Ray-Ban glasses sit on your face in the existing one. That is not a technology distinction; it is a behaviour distinction, and it explains everything.
EssilorLuxottica licenses to Oliver Peoples, Persol, and virtually every major luxury fashion house. It is now generating meaningful hardware revenue from AI glasses. That changes the category map.
Meta, Samsung, and Google all reached the same form-factor conclusion in the same quarter: wearable, lightweight, and ambient. Three of the world’s largest technology companies converging on a single hardware thesis in a single season is the signal, not the noise.
Reality Labs also covers AR and some AI research, so the $83.6 billion figure is not purely metaverse spend. But Horizon Worlds is, and its mobile app has earned $1.1 million in lifetime consumer spending against it.
Why it matters: The eyewear angle is where the strategic implications are concentrated. EssilorLuxottica is the infrastructure of the premium eyewear category globally; it manufactures or licenses for the brands your customers already wear. That company is now a growth story in the technology hardware sector. For luxury houses with eyewear licensing agreements, the competitive question has shifted. The conversation about what an eyewear partnership is worth and what it means to be the frame on which the AI interface sits looks different this week than it did last.