OpenAI CEO Sam Altman walks to his seat to participate in a discussion entitled "Charting the Path Forward: The Future of Artificial Intelligence" during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. (AP Photo/Eric Risberg)


Sam Altman was reinstated on Tuesday as CEO of OpenAI after five days of chaos and turmoil at the artificial intelligence company. The board that ousted him last week was itself restructured under the agreement that paved the way for his return.

The shake-up at OpenAI sparked debate over how quickly and how cautiously AI should be deployed. It may also signal a shift in how the company operates: heightened influence for investors like Microsoft, a greater focus on rapid commercialization of the technology, and concerns about AI's potential risks taking a back seat.

What happened?

On November 17, the board of OpenAI dismissed Altman during a video call, citing "a lack of transparency in his communication with the board." The company's president and co-founder, Greg Brockman, was stripped of his board seat and subsequently resigned in solidarity with Altman.

About three days later, Microsoft announced that Altman and Brockman would lead its "new advanced AI research team." The news caused a stir among OpenAI employees. On Monday, nearly all of them, around 800, signed an open letter threatening to follow Altman out the door unless he was reinstated and the board members who ousted him resigned.

The next day, OpenAI announced Altman's return. Board members Helen Toner (an AI security researcher at Georgetown University), Ilya Sutskever (an AI researcher and the company's co-founder), and Tasha McCauley (a technology entrepreneur) were removed; the only holdover was Adam D'Angelo, CEO of the question-and-answer site Quora. Bret Taylor (former co-CEO of the software company Salesforce) and Larry Summers (an economist who served as Treasury Secretary under Bill Clinton) joined the board. OpenAI also hinted at expanding the board to nine members; Microsoft was expected to get a seat, and Altman could rejoin the board himself.

Why did this happen?

Beyond its official statement, OpenAI did not explain why Altman was removed. Media reports, however, suggested his dismissal was the result of a growing rift between him and factions on the board.

After the launch of the chatbot ChatGPT in November last year, OpenAI became a major player in mainstream AI. The product turned OpenAI into one of the world's leading technology companies, and Altman became the face of the generative AI revolution. Success, however, brought a new set of challenges.

Some board members, including Sutskever, were concerned about the potential risks the company's technology posed to society, and felt Altman was paying too little attention to those risks while focusing on OpenAI's business development. These concerns reportedly escalated after the company achieved a research milestone that improved its AI models' ability to solve problems without additional data.

According to Reuters, the breakthrough alarmed some OpenAI researchers, who wrote to the company's board before Altman's dismissal warning that it could pose a danger to humanity.

In addition, a few weeks before his dismissal, Altman clashed with Toner over a research paper she had co-authored comparing the safety approaches of OpenAI and its rival Anthropic.

According to a report by The New York Times, Altman complained that the paper praised Anthropic's safety approach while criticizing OpenAI's efforts to keep its AI technologies secure. Toner defended the paper as an academic analysis of the challenges the public faces in trying to understand the intentions of the countries and companies developing AI.

Altman reportedly tried to push Toner off the board; instead, Sutskever, McCauley, D'Angelo, and Toner collectively ousted him.

Another source of conflict was the board's inability to agree on candidates to fill its vacant seats.

What happens now?

The changes to OpenAI's board are likely to help the company maximize the commercial potential of its AI. According to The Economist, the views of new board members Taylor and Summers are not publicly known, but they are expected to be more sympathetic to Altman's "empire-building ambitions" than their predecessors. The company also aims to double down on developing its most powerful model, GPT-5, progress on which had recently slowed.

The rapid development of OpenAI's technology will intensify competition in the generative AI market. Companies such as Google, Amazon, and Meta, along with the start-up Anthropic, have already released their own AI models and plan to launch more products. Experts worry, however, that this race could lead to the release of products whose behavior, usage, and potential for misuse are not fully understood, with harmful consequences for society.

Safety concerns may push policymakers toward stricter regulation of AI tools. Last month, for instance, U.S. President Joe Biden signed an executive order requiring developers of the most powerful AI models to share the results of safety testing and other information with the government.

