Sunday, June 26, 2022

Google’s DeepMind says it’s close to reaching human-level AI

Human-level artificial intelligence is finally on the verge of being achieved, according to a senior researcher at Google’s DeepMind AI division.

Dr. Nando de Freitas said “the game is over” in the decades-long quest to make artificial general intelligence (AGI) a reality, after DeepMind unveiled an AI system capable of completing a wide variety of complex tasks, from stacking blocks to writing poetry.

DeepMind’s new Gato AI, dubbed a “generalist agent,” only needs to be scaled up to create an AI that can rival human intelligence, said Dr. de Freitas.

Responding to an opinion article published by The Next Web claiming that “humans will never achieve AGI,” DeepMind’s research lead wrote that he believed such an outcome was inevitable.

“It’s all about scaling now! It’s game over!” he wrote on Twitter.

“It’s about making these models bigger, safer, more computationally efficient, faster at sampling, with smarter memory, more modalities, innovative data, on/offline… Solving these challenges is what will deliver AGI.”

When asked by machine learning researcher Alex Dimakis how far he thinks the Gato AI is from passing a true Turing test – a measure of machine intelligence that requires a human to be unable to distinguish the machine from another human – Dr. de Freitas answered: “Far still.”

Leading AI researchers have warned that the advent of AGI could lead to an existential catastrophe for humanity, with Oxford University professor Nick Bostrom speculating that a “superintelligent” system surpassing biological intelligence could replace humans as the dominant life form on Earth.

One of the main concerns about introducing an AGI system capable of teaching itself and becoming exponentially smarter than humans is that it would be impossible to switch it off.

Answering further questions from AI researchers on Twitter, Dr. de Freitas said that “safety is paramount when developing AGI.”

“This is probably the biggest challenge we face,” he wrote. “Everyone should think about it. The lack of diversity also worries me a lot.”

Google, which acquired London-based DeepMind in 2014, is already working on a “big red button” to mitigate the risks associated with an intelligence explosion.

In a 2016 paper titled “Safely Interruptible Agents,” DeepMind researchers outlined a framework to prevent advanced artificial intelligence from ignoring shutdown commands.

“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the paper said.

“If such an agent is operating in real time under human supervision, it may occasionally be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions – harmful either to the agent or to the environment – and lead the agent into a safer situation.”
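The idea behind that framework can be illustrated with a toy sketch. The Python snippet below is not DeepMind’s code – the `env`, `operator`, and `button_pressed` names are hypothetical stand-ins – but it shows one simplified way to realize safe interruptibility: the operator’s “big red button” overrides the agent’s action with a safe one, and interrupted steps are excluded from the learning update, so the agent’s learned values never give it a reason to resist the button.

```python
import random

# Toy sketch of a safely interruptible agent, loosely inspired by the
# 2016 "Safely Interruptible Agents" paper. The environment `env` and
# the `operator` object with its `button_pressed` method are
# hypothetical stand-ins, not part of any real DeepMind API.

SAFE_ACTION = "halt"                 # assumed safe action forced by the override
ACTIONS = ["left", "right", "halt"]  # hypothetical action set

q_table = {}                         # (state, action) -> estimated value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def q(state, action):
    # Unseen state-action pairs default to a value of 0.0.
    return q_table.get((state, action), 0.0)

def choose_action(state):
    # Epsilon-greedy policy over the learned Q-values.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def run_episode(env, operator):
    state = env.reset()
    done = False
    while not done:
        action = choose_action(state)
        interrupted = operator.button_pressed(state)
        if interrupted:
            # Human override: steer the agent toward safety.
            action = SAFE_ACTION
        next_state, reward, done = env.step(action)
        if not interrupted:
            # Standard Q-learning update, applied only to uninterrupted
            # steps: interruptions leave the value estimates untouched,
            # so avoiding the button never looks profitable to the agent.
            best_next = max(q(next_state, a) for a in ACTIONS)
            td_target = reward + GAMMA * best_next
            q_table[(state, action)] = q(state, action) + ALPHA * (
                td_target - q(state, action)
            )
        state = next_state
```

In the paper’s terms, off-policy learners such as Q-learning can already be made safely interruptible; the point of the sketch is simply that the interruption must not leak into what the agent learns.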


