The future of AI is the revenge of Clippy
The drama re OpenAI's Sam Altman is really a battle over who controls the most powerful (and scariest) forms of AI. The answer might be Microsoft.
Source: Walmart.
[UPDATE: Never mind.
But if this story really fascinates you, and you want a good excuse to avoid certain members of your family over the long holiday weekend, you will definitely want to consume Max Read’s deeply comprehensive and wryly amusing, “The interested normie’s guide to OpenAI drama.”]
The planet is burning up. There's war in Eastern Europe and the Middle East, a former leader of the world's oldest democracy is vowing to come back and rain hellfire down on his political enemies (if he's not already in prison by then), and there's a clown car driven by vaping troglodytes doing donuts underneath the Capitol Rotunda. And, oh yeah, we are still in the middle of a global pandemic.
But the tech world, in which I am a sometimes reluctant resident, is abuzz about one thing: OpenAI just canned its CEO, Sam Altman.
Happy Monday! [1]
The always droll Matt Levine, author of Bloomberg's Money Stuff column, summarizes it thusly:
On Friday, OpenAI’s nonprofit board, its ultimate decision maker, fired Sam Altman, its co-founder and chief executive officer, saying that “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Apparently the board felt that Altman was moving too aggressively to commercialize OpenAI’s products like ChatGPT, and worried that this speed of commercialization raised the risk of creating a rogue artificial intelligence that would, you know, murder or enslave humanity. [2]
And then all hell broke loose. Other high-ranking OpenAI executives resigned, then it looked like Altman would be reinstated as CEO over the weekend, and then he wasn't, and then 600+ of OpenAI's nearly 800 employees vowed to quit (but haven't yet), and then Altman was immediately hired by Microsoft, which, having committed $10 billion+ to OpenAI, has a lot of skin in this game but apparently no power over the board's decisions.
Per The Atlantic's Charlie Warzel (on Threads):
Here lies Sam
"Not consistently candid" is a load-bearing sentence to be sure. What, exactly, was Sam Altman (allegedly) lying about? That's the $10 billion question. [3]
OpenAI was set up to be an altruistic organization "with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity," per the company website. Altman himself has been pretty candid about the potentially negative impacts of AI.
Was he lying about that? Are things even worse than we imagined? Or was he lying about the intent to turn the nominally nonprofit organization into a money-printing machine? [4] Can you name any other nonprofits that have been valued at $80 billion+?
The lesson here? Never go full Sean Penn. Source: New Line Productions.
Per Melissa Heikkila at The Algorithm:
This tension between trying to launch products quickly and slowing down development to ensure they are safe has vexed OpenAI from the very beginning. It was the reason key players in the company decided to leave OpenAI and start the competing AI safety startup Anthropic.... [5]
...The bigger takeaway is that OpenAI’s crisis points to a wider rift emerging in the industry as a whole, between “AI safety” folk who believe that unchecked progress could one day prove catastrophic for humans and those who find such “doomer” talk a ridiculous distraction from the real-world risks of any technological revolution, such as economic upheaval, harmful biases, and misuse.
Here's Nina Schick's take on this:
Perhaps the biggest takeaway we can glean from OpenAI's meltdown is that commercializing and containing these systems will become *politically* (even if not hypothetically) incompatible, as the differences between the pioneers of AI development become even more pronounced.
The revenge of Clippy
This is not a fight over who gets to control today's AI tools. This is a fight over what's known as Artificial General Intelligence. AGI is the Skynet/HAL 9000/pick your favorite dystopian robot scenario. It's where these things start to really act like humans, only 1000x smarter and much better looking.
GPT-5, the next iteration of OpenAI's technology, could be the first real AGI. It's currently scheduled to arrive by the end of 2025. And that might be the reason that Altman got shown the door: He was determined to go full speed ahead on GPT-5, and the board got nervous. Or not. We'll probably never know. [6]
And now Microsoft could be the company that gets to 'own' AGI. Oh, goodie.
Sam I Am
On a lighter note, I asked ChatGPT to come up with a poem about its Dad's departure in the style of Dr. Seuss:
In the land of tech, with algorithms bright,
A tale unfolded, a surprising sight.
Sam Altman, CEO of OpenAI,
Fired one day, oh my, oh my!

In a world of bits and lines of code,
Sam's departure, a winding road.
The news spread fast, like wildfire grew,
Whispers and rumors, a curious crew.

"Sam Altman out," the headlines blare,
A change in leadership, a shift in the air.
OpenAI's ship, sailing a new course,
No longer guided by Altman's force.

But wait, let's not jump to conclusions too quick,
For the story's not over, it's just a small trick.
Perhaps Sam's departure, a strategic play,
To pave the path for a brighter day.

In the realm of AI, uncertainty reigns,
As OpenAI charts unexplored domains.
New leaders emerge, ideas take flight,
In the world of algorithms, day turns to night.

So here's to Sam Altman, a captain of yore,
Navigating OpenAI's ship to distant shores.
Though he's moved on, the journey's not done,
The adventure continues, under a different sun.
We all may be hurtling toward the abyss, but at least we can still eat. What’s on your T-day menu? Share your recipes (and thanks) in the comments below.
[1] Welcome to the special holiday week edition of COMYAI. I'm taking the rest of the week off. If you need me, I'll be in a tryptophan coma.
[2] Last month OpenAI created a new Preparedness team to "assess, evaluate and probe AI models to protect against what it describes as 'catastrophic risks'.” The team is offering a $25K prize and a job to people who submit "the most unique... probable, potentially catastrophic misuse" of AI. Honestly, OpenAI: Is this something you really want to encourage?
[3] Clearly they should have used an AI lie detector.
[4] Turns out the Vulcans were really Ferengis all along.
[5] Anthropic is a proponent of what it calls Constitutional AI, its version of Google's famous (possibly now infamous) "Don't be evil" policy. The idea is that organizations that subscribe to CAI must first consider the harms the technology can do before developing it.
[6] Meta (Facebook/Instagram) picked a good week to disband its Responsible AI team. Excellent timing, guys!