
Artificial intelligence pioneer quits Google over fears of rapid escalation

By Gurinderbir Singh

May 3, 2023

Even the developers are abandoning ship.

Geoffrey Hinton, an AI pioneer known as the “godfather of artificial intelligence”, has announced his resignation from Google, citing growing concerns about the potential dangers of artificial intelligence.

Hinton, 75, expressed regret about his work in a statement to The New York Times, warning that chatbots powered by AI are “quite scary” and could soon surpass human intelligence.

He explained that AI systems like GPT-4 already eclipse humans in terms of general knowledge and could soon surpass them in reasoning ability as well.

The arrival of ChatGPT and similar applications now available to consumers has given the regular person access to some of the most advanced language models ever seen.

In the few short months since its release, people have already used the free service to generate income, among other inventive uses.

However, the rapid acceleration of the technology is likely to cause chaos as lawmakers scramble to regulate its finer details.

Nobody knows exactly what a world populated by computers more intelligent than humans looks like, which is why experts like Hinton are resigning from their posts before things get ugly.

He described the “existential risk” AI poses to modern life, highlighting the possibility for corrupt leaders to interfere with democracy, among several other concerns.

Hinton also expressed concern about the potential for “bad actors” to misuse AI technology, such as Russian President Vladimir Putin giving robots autonomy that could lead to dangerous outcomes.

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said in a recent interview aired by the BBC.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

“This is just a kind of worst-case scenario, kind of a nightmare scenario,” he continued.

“You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”

He emphasised that the type of intelligence being developed through AI is very different from the intelligence of biological systems like humans.

“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world,” he said.

“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
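The mechanism Hinton describes can be illustrated with a toy sketch (this is an illustration of the general idea, not Hinton's own example, and the model and numbers are invented): several copies of one model each learn from different data, then merge what they learned by averaging their weights, so every copy instantly "knows" what any single copy learned.

```python
# Toy illustration of weight sharing between copies of one model.
# Each replica of a tiny 1-D model y = w * x takes a gradient step on
# its own local data; merging then averages the weights so all copies
# share the combined knowledge. All names and numbers are hypothetical.

def local_update(weights, data, lr=0.1):
    """One gradient step of least-squares fitting on this copy's data."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def share_knowledge(replicas):
    """Average the weights of all replicas: knowledge spreads instantly."""
    n = len(replicas)
    return [sum(ws[0] for ws in replicas) / n]

# Three copies start from identical weights but see different examples,
# all consistent with the same underlying rule (w = 2).
replicas = [[0.0], [0.0], [0.0]]
datasets = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]

replicas = [local_update(w, d) for w, d in zip(replicas, datasets)]
merged = share_knowledge(replicas)  # every copy now holds the merged weights
```

After the merge, each copy carries information from examples it never saw itself, which is the property Hinton contrasts with biological learners, who can only pass knowledge along slowly.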

Although Hinton’s pioneering research on deep learning and neural networks has paved the way for current AI systems, he said that he did not want to criticise Google and that the company had been “very responsible” in its approach to AI. He explained that, at 75, he had also decided it was time to retire.

In response to Hinton’s resignation, Google’s chief scientist Jeff Dean stated that the company remains committed to a responsible approach to AI and is continually learning to understand emerging risks while also innovating boldly.

Hinton’s comments came after an Australian artificial intelligence researcher warned the nation of the devastating powers the next generation of AI could possess.

Major nations around the world, such as China, the United States, and Russia, have already identified AI as a crucial component of the future military landscape and are racing to advance their capabilities.

“Australia needs to consider how it might defend itself in an AI-enabled world, where terrorists or rogue states can launch swarms of drones against us – and where it might be impossible to determine the attacker,” the artificial intelligence expert said.

“A review that ignores all of this leaves us woefully unprepared for the future.

“We also need to engage more constructively in ongoing diplomatic discussions about the use of AI in warfare. Sometimes the best defence is to be found in the political arena, and not the military one.”

Even the CEO of OpenAI, the company developing ChatGPT, has admitted that its work poses real dangers.

“We’ve got to be careful here,” Sam Altman told ABC News last month.

“I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation.

“Now that they’re getting better at writing computer code, it could be used for offensive cyber-attacks.”

Altman defended his company, now worth billions of dollars, saying its work could be “the greatest technology humanity has yet developed”.

Shrugging off the potential for AI to begin communicating and commanding itself, Altman maintained that ChatGPT is still a tool that is “very much in human control”, at least for now.

“There will be other people who don’t put some of the safety limits that we put on,” he said. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
