- Title: AI Anxiety: experts split on whether the dangers justify the public angst
- Date: 27th May 2023
- Summary: UNIDENTIFIED LOCATION, UNITED STATES (FILE) (REUTERS) (MUTE) VARIOUS OF RACKS OF SERVERS
- Embargoed: 10th June 2023 02:28
- Keywords: AI Artificial Intelligence ChatGPT Existential threat Geoffrey Hinton Ipsos Machine learning OpenAI Sam Altman
- Location: VARIOUS
- City: VARIOUS
- Country: US
- Topics: North America
- Reuters ID: LVA00A544126052023RP1
- Aspect Ratio: 16:9
- Story Text: The swift growth of artificial intelligence technology could put the future of humanity at risk, according to most Americans surveyed in a recent Reuters/Ipsos poll.
More than two-thirds of Americans are concerned about the negative effects of AI and 61% believe it could threaten civilization.
Public anxiety has spread as OpenAI's ChatGPT has become the fastest-growing app of all time.
ChatGPT has kicked off an AI arms race, with tech heavyweights like Microsoft and Google vying to outdo each other.
The integration of AI into everyday life has catapulted the technology to the forefront of public discourse, spurring congressional hearings into its potential risks.
At a Senate hearing this month, OpenAI's CEO Sam Altman warned senators, "If this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening."
AI's rapidly developing capabilities have some researchers who have observed it at close range sounding the alarm.
Widely known as one of the "godfathers of AI", computer scientist Geoffrey Hinton recently announced he had quit Google after a decade at the firm, saying he wanted to speak out on the risks of the technology without it affecting his former employer.
Hinton's work is considered essential to the development of contemporary AI systems.
In 1986, he co-authored a paper widely seen as a milestone in the development of the "neural networks" undergirding AI technology.
In 2018, he was awarded the Turing Award in recognition of his breakthroughs.
But Hinton is now among a growing number of tech leaders publicly warning about the risk that AI machines will achieve greater intelligence than humans and potentially take control of the planet.
"I suddenly realized that maybe the computer models we have now are actually better than the brain. And if that's the case, then maybe quite soon they'll be better than us. So that the idea of super intelligence, instead of being something in the distant future, might come much sooner than I expected."
Hinton and OpenAI's Altman have both voiced concerns that AI systems could learn from human examples how to manipulate people with misinformation, and could eventually pursue goals that do not align with the well-being of humanity. "I think they will quickly realize that if they got more control they could realize their goals much more easily," Hinton said. "Once they want to get control, things start looking bad for people."
Hinton compares the danger to the threat posed by the advent of nuclear weapons in the mid-20th century.
But other AI researchers view talk of an existential threat from AI as a distraction from more immediate concerns. New York University professor Julia Stoyanovich worries AI systems may have built-in biases against vulnerable communities in areas where the technology is already being applied. "Hiring and employment is one," said Stoyanovich. "Predatory lending is another. Access to housing, access to developmental opportunity like college admissions and school admissions. These are domains where we really should be paying very close attention to what we're doing with these systems."
Stoyanovich is among many AI experts, including OpenAI's Altman, who are calling for government regulation of AI, or "guardrails," to protect the public from potential harm. Polling indicates the American public also favors government oversight of AI.
Sara Hooker, who heads the non-profit research lab Cohere for AI, says there also needs to be a greater focus on preventing powerful AI tools from falling into the wrong hands. "One of the concerns is once it's in the wild, it can be used by good agents or bad agents," Hooker said. "When you think about things like misinformation or the ability to generate text that might be used in nefarious ways -- we need better traceability for models. Can you trace when a text is generated by a model instead of a human? That's really not great technology right now and it needs a lot more work."
Hooker believes the current anxiety surrounding AI may have a silver lining if it leads to "more resources for work like this that tries to make these models safer."
(Production: Jorge Garcia, Matt McKnight, Maria Alejandra Cardona, Tom Rowe) - Copyright Holder: FILE REUTERS (CAN SELL)
- Copyright Notice: (c) Copyright Thomson Reuters 2023. Open For Restrictions - http://about.reuters.com/fulllegal.asp
- Usage Terms/Restrictions: None