The Hyder Ground

The Three Lies We Tell Ourselves About AI (And Why They're All Cope)

How Apple's "failed" product launch is actually a trillion-dollar training program, and why that paper you're sharing about AI's limits is your new comfort blanket

Shama Hyder
Jun 17, 2025
Here's a $3.4 trillion company releasing a product nobody asked for, that reviewers are calling "underwhelming," and that costs $3,499 for the privilege of having text float in front of your face.

Apple just played you. And you don't even know it yet.

The Great Glass Training Program

Let me paint you a picture of corporate genius disguised as incremental updates. Apple's Liquid Glass design language, announced at WWDC 2025, has been met with yawns and mockery. "It's just translucent UI elements," tech reviewers shrug. Microsoft is even roasting them for copying Windows' aesthetic. The design brings semi-transparent interfaces, glossy icons, and rounded controls across iOS 26, iPadOS 26, macOS Tahoe, and beyond.

They're all missing the point spectacularly.

Apple isn't just redesigning your iPhone's interface. They're running the world's most sophisticated behavioral conditioning experiment. Every user updating to iOS 26 is a lab rat in a trillion-dollar experiment. The question isn't "Do people like translucent buttons?" The question is "How do we train a billion people to expect digital layers floating over physical reality?"

Think about it: Liquid Glass makes every interface element behave like actual glass - refracting light, creating specular highlights, dynamically adapting to content beneath it. It's training your brain to see digital information as a translucent layer that can exist on top of the physical world. Sound familiar? That's exactly what AR glasses need you to believe.

The pattern is always the same:

Release an "underwhelming update" → Condition user behavior → Launch the hardware that needs that behavior → Own the next decade.

Remember when everyone mocked the iPhone X's notch? Apple trained us to ignore it, and now every phone has one.

From Liquid Glass to Floating Reality

Here's where it gets brilliant. The Vision Pro wasn't much of a commercial success: it cost $3,500, and unlike a computer, it never proved essential to daily life. But that's not the point. The Vision Pro was Phase One: getting early adopters comfortable with spatial computing at any price.

Liquid Glass is Phase Two. Users are already connecting Liquid Glass to future AR glasses, because the new design draws heavy inspiration from the Vision Pro's interface. Every time you interact with a translucent notification, every time a control morphs and adapts to the content beneath it, every time you see digital glass refracting light on your iPhone screen, you're being trained.

By 2027, when Apple releases AR glasses that look like regular Ray-Bans and cost $599, your brain will already be wired to accept digital overlays on physical reality. The behavioral groundwork will be complete. A billion people will have spent two years interacting with interfaces that blur the line between digital and physical, making the leap to AR feel inevitable rather than jarring.

This is Apple's mastery: they don't convince you to want new technology. They slowly reshape your expectations until their next product feels like the obvious next step.

The Comfort Papers: Why We're Desperate for AI to Fail

Now let's talk about that Apple paper on AI reasoning that's been making rounds in your LinkedIn feed like it's the Zapruder film of machine learning. You know the one - "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models." Every executive I've talked to this week has brought it up, usually with barely concealed relief in their voice.

"See? AI can't really reason! It's all just pattern matching!"

Let me tell you what's actually happening here. We're watching the same psychological pattern that has emerged with every transformative technology: the Comfort Papers, academic studies that "prove" the new thing isn't really that revolutionary. They spread like wildfire not because they're right, but because they're desperately needed.

The numbers tell the real story. There are thousands of papers published about AI capabilities every month. Papers showing AI breaking benchmarks, solving previously impossible problems, exhibiting emergent behaviors we don't fully understand. You haven't heard of any of them. But papers suggesting AI has limits? Those are on every executive's reading list by Tuesday.

Here's the pattern, documented and timestamped:

Keep reading with a 7-day free trial

Subscribe to The Hyder Ground to keep reading this post and get 7 days of free access to the full post archives.

© 2025 Shama Hyder