Much of the progress we’ve made on artificial intelligence and neural networks in recent years is thanks to foundational work done by Geoffrey Hinton. For the last decade or so, Hinton has worked at Google. He recently left the company partly so he could speak honestly about his life’s work, and what he has to say isn’t all good.
In an interview with The New York Times, Hinton expresses concern that AI could cause serious societal problems. He’s particularly alarmed by the rate at which AI chatbots like ChatGPT and Google’s own Bard are evolving. Hinton thought we were decades away from building a machine that was smarter than us, but he no longer thinks we have that long.
But what does malicious AI look like? Is it a robot that tries to exterminate humanity because of some programming glitch? Probably not, and even though we’re all guilty of making jokes about Skynet, this science fiction vision of AI can distract from the real possibility that AI will empower our fellow humans to do more damage than the Terminators. You only need to look at what people are already doing with AI to understand where we could be headed. “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told the Times.
AI systems do not, as yet, desire anything for themselves. They do the bidding of a human master, and they hold a wealth of knowledge. The true threat of unrestrained AI is how people deploy that knowledge to manipulate, misinform, and surveil.
For example, governments worldwide already use facial recognition to track dissidents. AI could make these systems more powerful, capable of following your every move and gobbling up all the digital breadcrumbs you leave in your wake. Governments and political groups will also take advantage of AI’s lack of morality to generate misinformation and propaganda on a massive scale. I’d be shocked if this isn’t already happening to some degree.
ChatGPT and other public-facing systems attempt to retrofit safety standards on top of the underlying algorithm. But threat actors will soon be able to create their own GPT clones that do whatever they’re told, even writing malicious code to automate malware and phishing scams. The potential harms are almost endless, but they’re all a result of human desires.
Hinton’s warnings don’t come out of left field—even the leadership of OpenAI, which created ChatGPT, was apprehensive about releasing its language models. Google’s decision not to release a similar product until Microsoft forced its hand could also be read as concern about the consequences of generative AI. After all, Google invented the transformer architecture that powers today’s most potent chatbots, and then it sat on that technology for six years. Hinton says Google has been responsible so far, but he does express some disquiet about the speed with which Google is diving into its AI war with Bing.
So what is the solution? Do we pause AI, as Elon Musk recently asked in a transparently self-serving open letter? Or maybe Nvidia’s AI guardrails are the answer? People smarter than us are going to have to figure that out. Alternatively, no one will, and we will soon be at the mercy of our AI overlords.