How OpenAI went from an altruistic nonprofit to a typical Big Tech startup
Internal tensions over the company prioritizing money over safety might be symptoms of a bigger issue
The story of OpenAI's meteoric rise in the artificial intelligence space took an unexpected turn over a tumultuous weekend that began with co-founder Sam Altman's sudden ouster. His departure, made possible by the company's unique governance structure, illuminated an internal struggle between the company's nonprofit roots and the push for greater commercialization.
The board has been relatively vague about its decision to fire him, stating in Friday's announcement that Altman was "not consistently candid in his communications with the board." Though employees and investors rallied to bring Altman back, the board instead hired former Twitch CEO Emmett Shear as interim CEO. Greg Brockman left his position as OpenAI's president in solidarity, and hundreds of employees threatened to leave the company unless the board reinstated Altman and Brockman and then resigned. Hours after the board confirmed Altman would not return, Microsoft, a major investor in OpenAI, announced that it would hire Altman and Brockman to head its new advanced AI research lab.
While it's unclear exactly why Altman was fired, some say the chaotic turn of events is a microcosm of a larger debate over whether to prioritize safety over commercialization in artificial intelligence.
Proof that OpenAI is no different from other Big Tech companies
At first, the founders set OpenAI up as a "true not-for-profit with the goal of advancing the introduction of a safe AGI," James Ball explained in his newsletter TechTris. While it originally had "no intention of focusing on the profit motive or on hefty returns from venture capitalists," that model did not last long. Still, at its core, OpenAI was an "attempt to build a big tech startup in which the founder/CEO didn’t wield unassailable power," Ball noted. The company is attempting to change the world by "building safe and revolutionary artificial intelligence models and in showing big tech companies can work differently to how they have so far," he added. "The jury is still out on the former, but the latter experiment is now looking very much like a failure."
Altman's departure showed an "organization that was meant to align superintelligent AI with humanity failing to align the values of even its own board members and leadership," Steven Levy wrote for Wired. Under Altman's leadership, fostering the "profit-seeking component to the nonprofit project turned it into an AI powerhouse." The idea was that launching more products would "provide not only profits but also opportunities to learn how to better control and develop beneficial AI." With the board moving to fire the driving force behind that commercialization, "it's unclear whether the current leadership thinks that can be done without breaching the project's original promise to create AGI safely."
OpenAI's altruistic roots are 'unaligned' with its corporate interests
Altman's exit over the weekend was the "culmination of a power struggle between the company's two ideological extremes," Karen Hao and Charlie Warzel wrote in The Atlantic. One side was "born from Silicon Valley techno-optimism, energized by rapid commercialization," while the other was "steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution." While the two sides were able to coexist for years, that "tenuous equilibrium" broke with the release of ChatGPT and the increased pressure for commercialization that followed. This pulled the company in opposite directions, "widening and worsening the already present ideological rifts," the pair added.
In the end, the tumultuous events of the weekend "showed just how few people have a say in the progression of what might be the most consequential technology of our age," Hao and Warzel noted. "AI’s future is being determined by an ideological fight between wealthy techno-optimists, zealous doomers and multibillion-dollar companies."
Despite setting out to resist giving the power of AI to big corporations, OpenAI's board members may have played right into that outcome. With Altman getting scooped up by Microsoft and many employees threatening to join him, "you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit," Ben Thompson wrote for Stratechery. Microsoft already owns a perpetual license to all OpenAI intellectual property "short of artificial general intelligence," Thompson explained. OpenAI, an "entity committed by charter to the safe development of AI," essentially "handed off all of its work" to "one of the largest for-profit entities on earth," Thompson mused. "Or in an AI-relevant framing, the structure of OpenAI was ultimately misaligned with fulfilling its stated mission."