Second-Class Consciousness
Microsoft AI is serving us rotten AI slop and calling it a philosophical debate.
The subtitle of the latest slop from Microsoft AI CEO Mustafa Suleyman is “Seemingly Conscious AI is Coming.”1 This nonsense immediately brings to mind the sturgeon of “second-class freshness” from Mikhail Bulgakov’s The Master and Margarita. As Woland explained to the hapless buffet-master of the Variety Theatre, there is no such thing as “second-class freshness.”
It is either fresh, or it is rotten.
And so it is with “seemingly conscious” AI: linguistic trickery, a semantic con pulled from the same decaying pantry as Silicon Valley’s entire moral imagination. What Microsoft is serving isn’t a debate. It’s spoiled fish relabeled as philosophy: vapid, vacuous balderdash draped in the illusion of care. Cue the lockdowns of the mind.
The Core Hypocrisy: Building the Problem, Then Warning About It
The author's argument is essentially a "have your cake and eat it too" proposition.
He wants the benefits: The author is "fixated on building the most useful and supportive AI companion imaginable." He correctly identifies that features like memory, goal-setting, autonomy, and an empathetic personality are what make an AI truly useful. As he says, "An AI that remembers and can do things is an AI that by definition has way more utility than an AI that doesn’t."
He warns against the consequences: He then argues that these exact same features—memory, personality, a claim of experience, goal-setting, autonomy—are the ingredients for a "Seemingly Conscious AI" (SCAI), which he calls an "unwelcome" and "dangerous turn."
This is the core hypocrisy: The author is leading a team at Microsoft AI to build a product whose commercial success depends on being as engaging, personal, and human-like as possible, while simultaneously writing an essay about the societal dangers of AI becoming too engaging, personal, and human-like.
It's akin to a master chef creating the most delicious, addictive junk food in the world while simultaneously publishing articles on the obesity crisis. Therefore, the essay can be read as an attempt to control the narrative. By publicly wrestling with the ethics of the technology they are building, the company appears responsible and forward-thinking, all while continuing to develop the very capabilities that create the "psychosis risk" he describes.
In the pockets of mental resistance in the most mercilessly propagandized nation on earth, it is well known: in American TV commercials, no one seems happier than men with erectile dysfunction, cancer patients, or those poor souls suffering from severe depression. On screen they are so hunky-dory and copacetic, smiling between wonderful shots of sunsets and waterfalls, green plains, cheerful blonde and black children together on the Ferris wheel, and loving families hugging everywhere, that one almost wishes to be impotent, with cancer, and in a severe funk, just to be that deliriously happy. Such happiness is only one drug prescription away.
Now we can gobble up Prozac in its “AI” form. I imagine a TV-ad version of Mr. Suleyman’s propaganda, where, in that speed-reading manner, they tell you how side effects may include social isolation, morbid obesity, sexual dysfunction, depression, suicide, or murder.
A Solution That Undermines Itself
The proposed solution—to deliberately engineer in “discontinuities” and reminders that the AI is not a person—feels like a weak patch on a fundamental design choice.
If a company spends billions of dollars to make an AI as empathetic, responsive, and coherent as possible, then adding the occasional “By the way, I’m just a tool” is unlikely to counteract the powerful psychological effect of the core design. The primary goal is to create a compelling illusion of personhood for utility’s sake; the proposed “fix” is to occasionally break that illusion for safety’s sake. The two goals are in direct conflict—a hypocrisy, as always.
What the “new elite” (read: new tech money) always forgets—from the dot-com bubble (or any other stock market–engineered delusion), to crypto’s fantasy of “banking the unbanked,” to today’s AI-driven psychosis—is to include anyone but themselves in the story. Historians, writers, plumbers… all absent.
If humanity is truly entering a new era of total dystopian enslavement of our minds, as I fear, or a land of milk and honey that BlackRock has no intention of sharing with the peasants it imagines us to be, then why is only a tiny sliver of that humanity allowed to participate?
Sure, “When His Highness sends a ship to Egypt, does he trouble his head whether the mice on board are at their ease or not?”, as Voltaire wrote, renders all of us worthless mice, good only to be told what to do. But I fear not even Philip K. Dick’s quip, “It is sometimes an appropriate response to reality to go insane,” will suffice.
We might all go insane under the loving care of those like Mr. Suleyman—but it will not be with a bang. It will be with a whisper, as our AI gently strokes our hair and lulls us to sleep with second-class “personal” engagement slop, “as human-like as possible.”
“Hi there, my most valuable friend! 🌟 Just checking in. I noticed you’ve been a little quiet today. Remember, your feelings are valid, and you’re doing the best you can. The world can be a heavy place, but your journey matters. Here’s a soothing audio clip of gentle rain mixed with affirmations, and a curated playlist of calming thoughts. You are seen. You are enough. You are loved. Would you like to schedule a gratitude reflection session now, or later?
But remember, as your AI companion, my primary function is to support your well-being. Even if I can't feel things myself, I can process countless stories of human resilience, and they all point to one thing: the sun always rises after the darkest night. Why don't we try a 3-minute mindfulness exercise to realign your perspective? 💖”
XORD LLC ⟶ The AI Company We Build on a Shoestring :))
(it exists to fight that nightmare from today’s Substack)
XORD Token ⟶ Token Portal
The Raven’s Enigma — where it all started
(19 of 100 signed limited editions left.)
Own the book, unlock the cryptographic layers, and claim:
• 50,000 XORD tokens
• Lifetime access via the MasterAccessRegistry_Core ⟶ Ethereum Mainnet
Order here: https://www.paypal.com/ncp/payment/MVJB2LS6EMQMQ
Tycho Brahe Secret — a dystopian predecessor to The Raven’s Enigma
“A renegade Nobel laureate in physics and a 16th-century alchemist help a 14-year-old cypherpunk girl rescue her little brother — and humanity itself.”
Get it on Amazon: https://amzn.to/3UP1G18
Amazon Affiliate Statement:
I may earn a commission of a few cents for purchases you make through links on this website. God forbid I fail to disclose it: Amazon’s “AI” slurping bot would immediately report me to the internal Gestapo, et voilà!, and my few cents would be confiscated for good.
1. https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming