Dan Milmo Global technology editor 

Why is OpenAI planning to become a for-profit business and does it matter?

Company plans announced as it reckons with departure of senior executives and seeks huge investment
  
  

OpenAI, the developer of ChatGPT, was initially founded as a non-profit in 2015. Photograph: Dado Ruvić/Reuters

OpenAI, the developer of the groundbreaking ChatGPT chatbot, is preparing to overhaul its corporate structure and become a for-profit business.

The startup’s chief executive, Sam Altman, acknowledged on Thursday that it was “not a normal company” after another surprising development this week, when its chief technology officer, Mira Murati, resigned. Her departure was quickly followed by the announcement that two other executives had quit.

The company is synonymous with an artificial intelligence boom triggered by the emergence, in 2022, of OpenAI’s signature product, a chatbot that stunned users with its ability to craft convincing, human-like responses to an array of prompts.

Altman, in turn, has become the poster child for a technology that is advancing rapidly and is being developed by the world’s largest tech companies, including Microsoft – OpenAI’s biggest backer – Google, the Facebook owner Meta and Amazon.

Here, we look at some of the issues arising from the changes at OpenAI.

What are the changes OpenAI is considering?

The startup is considering becoming a for-profit benefit corporation – an entity that makes profits but is committed to the social and public good – that will no longer be controlled by its non-profit board, according to Reuters.

OpenAI was founded as a non-profit in 2015 and, in 2019, it added a profit-making subsidiary supported by Microsoft with a multibillion-dollar investment. The San Francisco-based company described itself as a “partnership between our original non-profit and a new capped profit arm”.

As a capped profit business, OpenAI’s for-profit unit limits the returns given to investors and employees, with the excess handed back to the non-profit “for the benefit of humanity”. There is no cap on returns for a public benefit corporation. Anthropic, a rival to OpenAI, is run as a public benefit corporation.

OpenAI has declined to comment on the specifics of the reported restructuring but has said the non-profit entity will “continue to exist”. Reuters also reported that the non-profit, and Altman, would own stakes in the profit-making business.

Why is OpenAI considering restructuring?

Developing powerful AI systems is expensive and OpenAI could be heading for a loss of as much as $5bn this year. It therefore needs more investment. The startup is in talks to raise $6.5bn, and removing any cap on profits would be an extra incentive for would-be backers. Potential investors include the tech company Apple and the chipmaker Nvidia.

Why does it matter?

OpenAI was founded with the aim of building “artificial general intelligence” (AGI), which it describes as “AI systems that are generally smarter than humans”. The AI industry has not developed AGI yet – and there is much debate over whether, and when, it will arrive – but it is one of the potential breakthroughs in the technology that alarms experts, including the Tesla chief executive, Elon Musk. The worry is that reckless development of AGI will unleash highly powerful systems that evade human control.

As a consequence, concerns about safety have swirled around OpenAI and have been voiced by former employees. William Saunders, a former safety researcher at the startup, said in written testimony to the US Senate that there is “a real risk they will miss important dangerous capabilities in future AI systems”. He added that he had “lost faith” in OpenAI’s ability to make responsible decisions about AGI.

More generally, many experts are concerned that the tech sector – led by the likes of OpenAI, Google, Microsoft, Meta and Anthropic – is racing towards developing powerful AI tools and that safety could be taking a back seat. Max Tegmark, a US academic and key figure in the AI safety debate, has said he is concerned that tech firms are engaged in a “race to the bottom that must be stopped”.

OpenAI says its approach is “safety at every step” and recently announced that its safety and security committee will become an independent entity.

Why have so many senior executives left the company?

Murati is the latest senior figure to leave OpenAI since a tumultuous few days in November last year when Altman was fired, and then reinstated, by the non-profit board. Murati said she wanted “space to do my own exploration”, while Altman expressed “tremendous gratitude” for her work. Murati temporarily replaced Altman as chief executive last year and maintained a high-profile role at the startup, acting as the figurehead for the launch of its latest model, GPT-4o.

Altman said on Thursday that the departure of Murati, and of two other senior employees this week, was not linked to the restructuring.

Other key executives have left since November. Ilya Sutskever, the company’s co-founder and chief scientist, departed this year after playing a role in the firing and rehiring of Altman. He was on the board that made the decision to fire Altman, but then signed a letter soon afterwards calling for his reinstatement.

Other departures this year include John Schulman, a co-founder of OpenAI, who moved to Anthropic, and the product manager Peter Deng. Greg Brockman, OpenAI’s president and co-founder, also announced he was taking a sabbatical until the end of the year.
