• OpenAI is building a new “Preparedness” team to further AI safety.
  • The ChatGPT-maker’s newest team aims to address potential risks linked to advanced AI, including nuclear threats. 
  • The Preparedness team is hiring for a national security threat researcher and a research engineer.

OpenAI, the company behind ChatGPT, is doubling down on its efforts to prevent an AI-driven catastrophe.

This week, OpenAI announced the new “Preparedness” team, which will study and protect against potential threats arising from advanced AI capabilities, which OpenAI calls “frontier risks.”

“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI wrote in its announcement. “But they also pose increasingly severe risks.”

The Preparedness team will help “track, evaluate, forecast, and protect against catastrophic risks,” including chemical, biological, nuclear, and cybersecurity threats.

It will also develop a “Risk-Informed Development Policy” that will include protective actions and a governance structure to hold AI systems accountable.

Aleksander Madry, currently on leave from MIT, will lead the team as OpenAI’s head of preparedness.

As part of the team, OpenAI is hiring for a national security threat researcher and a research engineer. Each could earn an annual salary between $200,000 and $370,000, according to the job listings.

OpenAI and Aleksander Madry didn’t respond to Insider’s request for comment before publication.

For months, tech leaders at top AI companies have raised alarms around AI safety.

Elon Musk, who helped cofound OpenAI before leaving the company, said in February that AI is “one of the biggest risks to the future of civilization.”

In March, OpenAI CEO Sam Altman said on an episode of Lex Fridman’s podcast that he empathizes with people who are afraid of AI, noting that advancements in the technology come with risks related to “disinformation problems,” “economic shocks” like job replacement, and threats “far beyond anything we’re prepared for.”

Earlier this month, Anthropic, the OpenAI rival behind the AI chatbot Claude, revamped its constitution with input from users to strengthen its guardrails and prevent toxic and racist responses.

Others, though, are less fearful.

Earlier this month, Yann LeCun, Meta’s chief AI scientist, said that claims around superintelligent AI wiping out humanity are “preposterous” and are based more on science fiction than reality.

“Intelligence has nothing to do with a desire to dominate,” he said.
