- Meta’s chief AI scientist Yann LeCun said that superintelligent AI is unlikely to wipe out humanity.
- He told the Financial Times that current AI models are less intelligent than a cat.
- AI CEOs signed a public statement in May warning that AI could pose an “extinction” risk.
Fears that AI could wipe out the human race are “preposterous” and based more on science fiction than reality, Meta’s chief AI scientist has said.
Yann LeCun told the Financial Times that science fiction films like “The Terminator” had conditioned people to believe superintelligent AI poses a threat to humanity, when in reality there is no reason intelligent machines would even try to compete with humans.
“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said.
“If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither,” he added.
The rapid rise of generative AI tools such as ChatGPT over the past year has fueled fears about the potential risks of superintelligent artificial intelligence.
In May, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei all signed a public statement warning that AI could one day pose an “extinction” risk comparable to nuclear war.
There is fierce debate over how close current models are to hypothesized “artificial general intelligence” (AGI). A Microsoft study earlier this year found that OpenAI’s GPT-4 model showed “sparks of AGI” in how it approached reasoning problems in a human-like way.
However, LeCun told the Financial Times that many AI companies had been “consistently over-optimistic” about how close current generative models were to AGI, and that fears of AI-driven extinction were overblown as a result.
“They [the models] just do not understand how the world works. They’re not capable of planning. They’re not capable of real reasoning,” he said.
“The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment,” he added.
LeCun said that reaching human-level intelligence required “several conceptual breakthroughs” — and suggested that AI systems would likely pose no threat even when they hit that level, as they could be encoded with “moral character” that would prevent them from going rogue.
Meta did not immediately respond to a request for comment from Insider, made outside normal working hours.