Rogue AI ‘could kill everyone’.
Scientists believe AI could cause a catastrophe "at least as bad as an all-out nuclear war" if left unchecked.
A rogue artificial intelligence system could kill everyone and the technology must be regulated in a similar way to nuclear weapons, MPs have been told.
Researchers from Oxford University told the science and technology committee that AI could eventually pose an “existential threat” to humanity. Just as humans wiped out the dodo, the machines might eradicate us, they said.
Michael Cohen, a doctoral student, said: “With superhuman AI there is a particular risk that is of a different sort of class, which is . . . it could kill everyone.”
One danger, he explained, might involve asking an AI to achieve a goal without placing sufficient limits on the tactics it uses.
He added: “If you imagine training a dog with treats: it will learn to pick actions that lead to it getting treats, but if the dog finds the treat cupboard, it can get the treats itself without doing what we wanted it to do.
“If you have something much smarter than us monomaniacally trying to get this positive feedback, and it’s taken over the world to secure that, it would direct as much energy as it could to securing its hold on that, and that would leave us without any energy for ourselves.”
Similar concerns appear to be shared by many scientists who work with AI. A survey in September by a team at New York University found that more than a third of 327 researchers believed AI could cause a disaster akin to a nuclear apocalypse.
Thirty-six per cent of them agreed that it was “plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war”.
Those fears were echoed yesterday, when MPs were told that the global AI industry had already evolved into a “literal arms race”, with rival states rushing to develop applications for military and civilian use.
Michael Osborne, professor of machine learning at the University of Oxford, said: “I think the bleak scenario is realistic because AI is attempting to bottle what makes humans special — that [quality] has led to humans completely changing the face of the Earth.”
“If we’re able to capture that in technology, then, of course, it’s going to pose just as much risk to us as we have posed to other species: the dodo is one example.
“I think we’re in a massive AI arms race, geopolitically with the US versus China, and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI.”
He added: “Artificial systems could become as good at outfoxing us geopolitically as they are in the simple environments of games.
“There are some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons. AI is a comparable danger to nuclear weapons.”
Source: https://www.thetimes.co.uk/article/f5b2e26c-9cef-11ed-b81d-ce538d806950