Microsoft and Amazon have cut their AI ethics teams as Elon Musk and Steve Wozniak call for a moratorium on tech companies’ ‘out-of-control race’

Technologists are calling for a more cautious approach to AI, but some companies are still pushing forward at all costs.

Elon Musk is one of the many technologists critical of A.I.'s current direction. Maja Hitij/Getty Images

Tech leaders and public figures who have stepped forward to urge companies to take a more cautious approach to artificial intelligence may have been disappointed Wednesday after reports emerged that the technology giants locked in the battle for A.I. domination are laser-focused on moving fast and winning the race, even if it means chipping away at their own A.I. ethics teams.

Google and Microsoft are racing ahead of the pack in the A.I. arms race, but their fellow Silicon Valley giants are no slouches either. Amazon is tapping machine learning to improve its cloud computing Web Services branch, while social media behemoths Meta and Twitter have doubled down on their own A.I. research too. Meta CEO Mark Zuckerberg signaled A.I. would be a cornerstone of Meta’s new efficiency push during the company’s earnings call earlier this year, while Elon Musk is reportedly attracting talent to develop Twitter’s own version of ChatGPT, OpenAI’s wildly successful A.I.-powered chatbot released late last year.

But as the A.I. race heats up in a murky economic environment that has forced tech companies to lay off more than 300,000 employees since last year, developers are reportedly cutting back on the ethics teams charged with ensuring A.I. is developed safely and responsibly. Amid larger waves of layoffs, Meta, Microsoft, Amazon, Google, and others have downsized their "responsible A.I. teams" in recent months, the Financial Times reported Wednesday, a development unlikely to please critics already unhappy with the direction tech companies were taking. Many of the layoffs the newspaper included in its roundup had first been reported by other publications.

Also on Wednesday, several technologists and independent A.I. researchers signed an open letter calling for a six-month pause on advanced A.I. research beyond currently available systems, saying more attention should be paid to the potential effects and consequences of A.I. before companies roll out products. Among the letter’s signatories were Apple cofounder Steve Wozniak and Elon Musk, whose widespread layoffs at Twitter since taking over last year included the social media company’s ethical A.I. team, per the FT report.

The open letter cited an A.I. governance framework established in 2017, known as the Asilomar A.I. Principles, which states that given the potentially monumental impact A.I. could have on humanity, the technology should be "planned for and managed with commensurate care and resources." But the letter accused the tech companies leading the A.I. race of failing to abide by these principles.

“This level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” the letter states.

Amazon did not immediately reply to Fortune’s request for comment. Twitter has not had an active press relations team since November. Microsoft, Meta, and Google representatives told Fortune that ethics continue to be a cornerstone of their A.I. research.

A Google spokesperson disputed the FT report in a statement to Fortune, emphasizing that ethics research remains a part of the company’s A.I. strategy.

“These claims are inaccurate. Responsible AI remains a top priority at the company, and we are continuing to invest in these teams,” the spokesperson said.

Downsized A.I. ethics teams

As the tech sector has pivoted to focus on efficiency and strong fundamentals over the past year, projects deemed superfluous, such as A.I. ethics research teams, have been among the first on the chopping block.

Twitter’s “ethical A.I. team” was cut days before the first round of Musk’s layoffs affecting thousands of employees at the company in November, less than a week after he became CEO. Former Twitter employees told Wired at the time that the team had been working on “important new research on political bias” that could have helped social media platforms avoid unfairly penalizing individual viewpoints. The team was aware Musk intended to eliminate it once he took charge of the company, and hurriedly published months of research into A.I. ethics and disinformation in the weeks before Musk became CEO, Wired reported in February.

Other tech companies have also slashed their A.I. ethics teams in recent layoffs. Microsoft terminated around 10,000 employees in January, including the company's entire A.I. ethics and society team, Platformer reported earlier this month. The company still has an Office of Responsible AI that sets high-level principles and policies for the development and deployment of A.I. But it no longer has a central team of ethicists dedicated to researching the potential harms of A.I. systems and working on broad mitigation strategies. That team also acted as a consulting body for product teams with questions about how to implement various responsible A.I. principles. Members of the original A.I. ethics team were either reassigned to product teams or laid off.

According to an audio recording leaked to Platformer, Microsoft executives told members of the A.I. ethics team that top executives, including CEO Satya Nadella and CTO Kevin Scott, were pressuring the entire company to integrate A.I. technology from OpenAI into numerous Microsoft products as quickly as possible, and that calls to slow the pace of deployment in order to ensure such systems were developed ethically were not appreciated.

A Microsoft spokesperson told Fortune that the ethics and society team had played a “key role” in the company’s responsible A.I. research, but over the past few years Microsoft has sought to integrate its responsible A.I. team directly with its product and design teams.

“Since 2017, we have worked hard to institutionalize this work and adopt organizational structures and governance processes that we know to be effective in integrating responsible AI considerations into our engineering systems and processes,” the spokesperson said, adding that Microsoft currently has “hundreds” of employees working on A.I. ethics across the company, including in the Office of Responsible AI.

Google, Microsoft’s main competitor in the A.I. space, has also terminated an unspecified number of responsible A.I. jobs, according to the FT. The company previously fired a top A.I. ethics researcher in 2020 after she criticized Google’s diversity stance within its A.I. unit, a claim the company disputed. Meta disbanded its Responsible Innovation team, which included around two dozen engineers and ethics researchers, in September. Amazon, meanwhile, laid off the ethical A.I. unit at the company’s live streaming service Twitch last week, according to the FT.

Meta told Fortune that most members of the Responsible Innovation team were still at the company but had been reassigned to work directly with product teams. Meta moved to decentralize its Responsible A.I. unit last year, which was distinct from its Responsible Innovation team. Responsible A.I. workers are now more closely integrated with Meta’s product design groups.

“Responsible A.I. continues to be a priority at Meta,” Esteban Arcaute, Meta’s director of Responsible A.I., told Fortune. “We hope to proactively promote and advance the responsible design and operation of AI systems.” 

Stepping back from a ‘dangerous race’

Since OpenAI launched ChatGPT last year, tech giants have piled into the A.I. space in an effort to outdo one another and stake a claim in the rapidly growing market. Proponents of A.I.'s current direction have praised the technology's disruptive nature and defended its accelerated timeline. But critics of the hotly competitive atmosphere have accused companies of prioritizing profits over safety and risking the release of potentially dangerous technologies before they are fully tested.

“A.I. research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the open letter advocating for a six-month moratorium on A.I. research stated.

The letter said that A.I. developers should use the pause to develop a shared set of safety protocols and guidelines for future A.I. research which would ensure A.I. systems are safe “beyond a reasonable doubt.” These protocols could be overseen by external and independent experts.

“This does not mean a pause on A.I. development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter said.

Several of the letter’s signatories have been critical of A.I.’s rapid advancement in recent months and the technology’s propensity for mistakes. “The trouble is it does good things for us, but it can make horrible mistakes by not knowing what humanness is,” Steve Wozniak told CNBC in February.

Musk, who was a cofounder and important early investor in OpenAI before leaving its board in 2018, has criticized the San Francisco-based startup for its pivot to profit-seeking in recent years, sparking a war of words with Sam Altman, OpenAI’s CEO. Altman has expressed his own reservations about how A.I. could be misappropriated as more companies try their hand at imitating technology like ChatGPT, warning in an interview with ABC this month that “there will be other people who don’t put some of the safety limits that we put on.”

Other critics have been even more outspoken about the potential risks of A.I., and how important it is to get the technology right. Yuval Harari, a historian and author who has written extensively on the concepts of intelligence and human evolution, was one of the letter’s signatories after co-authoring a critical New York Times guest essay on A.I.’s current direction published last week.

“A race to dominate the market should not set the speed of deploying humanity’s most consequential technology. We should move at whatever speed enables us to get this right,” Harari and his co-authors wrote.

Update: This story was updated with the correct spelling of Esteban Arcaute’s name.

Update: This story was updated with a statement from a Microsoft spokesperson.