Codes of Culture | Issue 88
Claude Surges, Zara Experiments, and AI Goes Wearable
Welcome back to Codes of Culture. I’m Ashumi Sanghvi.
After a short hiatus, we are back to sharing the top news and insights shaping luxury, culture and technology, with a slightly longer version for today’s newsletter. Back in London after a few weeks on the road — India Impact AI Summit, then the first Founders Forum India, where I hosted a fireside chat with the visionary CEO of Brahma AI. The conversations happening in Mumbai and Delhi were operating on a different frequency: more urgent, more geopolitical, and far less deferential to the Western tech consensus than they were even two years ago. Sovereign AI, cultural infrastructure, the question of who gets to shape the next generation of platforms: these topics came up repeatedly, and seriously.
It felt like the right moment to relaunch this newsletter in its evolved form. Codes of Culture has always been about the systems underneath the surface — the forces shaping luxury, culture, capital, and influence before they become obvious. That brief hasn’t changed; the lens has sharpened.
This issue covers the standoff between Anthropic and the Pentagon, the AI loyalty shift playing out in real time, what fashion’s OpenAI partnership actually signals, Zara’s virtual try-on and the questions it raises, and the quiet acquisition that may say more about the future of computing than anything announced at CES. It’s a week that underscores the shifting drivers of fashion and culture: tech innovation, cultural capital, and the renewed power of storytelling.
Remember to subscribe for full access to our news, insights reports, and global events.
📖 In this week’s issue:
The new power struggle: Governments vs AI Companies
Claude tops the App Store. The timing is everything.
OpenAI and the CFDA’s two-year bet on American fashion
Zara’s AI try-on and the return problem it’s actually solving
Oura acquires Doublepoint. The body is now the interface.
The new power struggle: Governments vs AI Companies
What’s Happening: Anthropic’s $200 million Department of Defence contract collapsed after the company refused to grant the military unrestricted access to its Claude AI model. The Pentagon, under Defence Secretary Pete Hegseth, demanded the right to deploy Claude for “any lawful purpose.” Anthropic drew the line at two specific uses: domestic mass surveillance and fully autonomous weapons. Trump ordered a federal phaseout of Claude. Hegseth declared Anthropic a “supply chain risk” — a designation typically reserved for foreign adversaries like Huawei. Talks have since partially resumed.
TLDR:
Anthropic refused a Pentagon demand for unrestricted AI access, citing two non-negotiable red lines: no mass domestic surveillance, no fully autonomous weapons
The DoD labelled Anthropic a “supply chain risk” and turned to OpenAI, which accepted the deal — Amodei called this “safety theatre” in an internal memo
Trump ordered federal agencies to stop using Claude immediately
Anthropic has said it will challenge the supply chain designation in court; negotiations are reportedly ongoing
The episode is being read, beyond the contract dispute, as an early test of who actually governs powerful AI systems
Why It Matters: This is not primarily a story about a contract. It is about who holds authority over technology within classified military networks that could soon guide autonomous systems in conflict.
Amodei’s red lines — no mass surveillance of Americans, no fully autonomous weapons — are not fringe concerns. They are positions that AI safety researchers, ethicists, and parts of the military itself have held for years. The fact that OpenAI accepted a contract that Anthropic described as substantively identical, while publicly claiming the same protections, raises legitimate questions about the value of those assurances.
The more uncomfortable read is that Anthropic’s ethical posture has been selectively applied. The company scrapped its ban on selling to government spy agencies in 2024, partnered with Palantir and Amazon for military customers, and its technology was used in Pentagon operations that resulted in deaths. The line it drew with Hegseth was real, but drawn close to the edge of what it had already agreed to.
What this week makes undeniable is that AI companies are political actors, whether they intend to be or not. The legal, democratic, and international institutions that might ordinarily arbitrate these decisions don’t yet have the frameworks to do so. That gap is where the real power struggle lives.
Claude tops the App Store. The timing is everything.
What’s happening: Anthropic’s Claude overtook ChatGPT as the number one free app on the Apple U.S. App Store — two days after reports emerged that the Trump administration had moved to restrict federal use of Anthropic’s models, following the company’s refusal to grant unrestricted government access.
TLDR:
Claude is now the most downloaded free app in the U.S., overtaking ChatGPT
The surge followed, not preceded, reports of a government-led attempt to restrict federal use of Anthropic’s models
Users seem to be supporting Anthropic’s stance rather than punishing it
Why it matters:
The sequence here is worth reading carefully. A government attempts to limit a company’s reach. Within 48 hours, consumers push that company to the top of the charts. That’s not a coincidence. It’s a signal about where trust sits in the AI market right now.

For three years, the generative AI conversation has been dominated by capability benchmarks: which model writes better, codes faster, reasons more accurately. That race still matters. But the App Store moment suggests something else is also in play. Users are beginning to choose AI the way they choose a bank or a news source. Institutional character counts. Where a company draws its lines, and why, is becoming part of the product.

Anthropic has cultivated that reputation deliberately. It may now be its most defensible asset.
OpenAI and the CFDA’s two-year bet on American fashion
What’s happening: OpenAI has partnered with the Council of Fashion Designers of America to launch a two-year AI innovation hub, pairing American fashion brands with AI builders to co-develop tools across design, production, and creative workflow.
TLDR:
The partnership will embed AI tooling directly into American fashion brand workflows
Focus areas: design ideation, production planning, operational efficiency
The CFDA’s institutional endorsement signals industry-level acceptance, not just experimentation
Why it matters:
The CFDA is not a startup accelerator. It is where the American fashion industry forms consensus. When it commits two years and its institutional credibility to an AI partnership, it is not experimenting; it is signalling direction.
The framing of the initiative is also notable. AI here is positioned not as a cost-reduction tool but as a creative collaborator — something that expands what designers can explore, rather than simply automating what already exists. That distinction matters because fashion’s relationship with technology has historically been defensive: brands adopting digital tools reluctantly, after the moment has passed.
The harder question the CFDA partnership doesn’t yet answer is authorship. As AI becomes embedded in the design process, the industry will need frameworks for what it means to be the creative originator of a collection. That conversation is coming, and the brands building fluency with these tools now will be better positioned to shape it.
Zara’s AI Try-On and the return problem it’s actually solving
What’s happening: Zara has rolled out an AI-powered virtual try-on feature allowing shoppers to upload a photo, generate a digital version of themselves, and visualise garments before purchase.
TLDR:
Shoppers can now see clothes on a simulated version of their own body before buying
Primary commercial logic: reducing the fashion industry’s chronic returns problem
Secondary question: what does it mean to mediate our relationship with clothing through an AI render of ourselves?
Why it matters:
Online fashion returns run at 20–30%. In fast fashion, where margins are already compressed, that’s an existential inefficiency. Zara isn’t building this feature because it’s interesting. It’s building it because returns are expensive, wasteful, and structurally difficult to solve through any other means.
The technology itself is imperfect — no AI render yet captures how fabric moves or how light hits a material. But it doesn’t need to be perfect to be effective. It needs to reduce uncertainty enough to change purchasing behaviour at the margin. Even a modest reduction in return rates at Zara’s volume is a significant commercial outcome.
The more durable question is what happens when this becomes standard. Once consumers can visualise garments on their own bodies across every major retailer, the brands that win won’t just be the ones with the best try-on technology. They’ll be the ones whose clothes still look compelling when the fantasy of the campaign image is removed, and a realistic render of your own body is in the frame instead.
Oura acquires Doublepoint. The body is now the interface.
What’s happening: Oura has acquired Doublepoint, a Helsinki-based AI gesture recognition company founded in 2020. Doublepoint’s technology enables users to control devices through small, natural hand and finger movements — no touchscreen required.
TLDR:
Oura moves from passive health tracker to active control device
Doublepoint’s AI reads biometric signals to detect intentional gestures
The acquisition positions Oura in a direct conversation with Apple and Meta’s spatial computing ambitions
Why it matters:
Oura has spent five years building one of the most trusted biometric devices in the market. The ring’s reputation rests on accuracy and restraint — it reads the body without demanding attention. The Doublepoint acquisition asks a different question of the same hardware: what if it could also take instructions from the body?
Gesture-based computing — controlling devices through subtle finger movements rather than touchscreens — has been a credible concept for years. What’s changed is the maturity of the AI required to reliably distinguish intentional gestures from ambient movement, the threshold at which the technology becomes useful. Doublepoint appears to have crossed it.
The strategic context is significant. Apple’s Vision Pro, Meta’s Ray-Ban smart glasses, and Samsung’s Galaxy Ring are all, in different ways, exploring the same territory: a world where the screen is no longer the primary interface between the body and the digital environment. Oura’s acquisition places it in that conversation from a position of genuine biometric credibility — a foundation neither Apple nor Meta can easily replicate.
The wearables market has been described as the next computing platform for a decade. Oura just made its most convincing move yet toward actually building one.
What we’re building
Codes of Culture is the editorial and intelligence expression of Future+, a global network and strategic advisory spanning Europe, the Middle East, India, and the United States.
After a six-month pause, this newsletter returns in a sharpened form. The brief remains the same: to decode the systems shaping modern influence across capital, culture, luxury, and technology. Not trends. Not hype. The forces that are quietly restructuring markets before they become obvious.
Future+ operates as a curator of strategic access, advising leaders, founders and organisations navigating the intersection of luxury, culture, capital, and technology. Codes of Culture is the public-facing layer of that thinking: a weekly brief for the executives, founders, and capital allocators who want the analytical frame, not just the headlines.
If you’re reading this and it’s useful, forward it to someone who should be reading it too.
→ Codes of Culture is published by Future+ and founded by Ashumi Sanghvi. To explore Future+ advisory and strategic partnerships: futureplus.xyz. To subscribe, access the full archive, or get in touch: codesofculture.futureplus.xyz.
Disclaimer: Information in this newsletter is only for the purpose of sharing knowledge, news and insights.