January 20 started out like most typical Friday afternoons for Scottsdale, Arizona resident Jennifer DeStefano. The mother of two had just picked up her youngest daughter from dance practice when she received a call from an unknown number. She almost let the call go to voicemail but decided to answer on the final ring. DeStefano says what happened over the next few moments will likely haunt her for the rest of her life. She didn’t know it yet, but she was about to become a key figure in the rapidly emerging trend of AI deepfake kidnapping scams.
DeStefano recounted her experience in gripping detail during a Senate Judiciary Committee hearing Tuesday on the real-world impacts of generative artificial intelligence on human rights. She recalls the crying voice on the other end of the call sounding nearly identical to that of her 15-year-old daughter Brie, who was away on a ski trip with her father.
“Mom, I messed up,” the voice said between sobs. “Mom, these bad men have me, help me, help me.”
A man’s voice then came on the line and demanded a $1 million ransom, hand-delivered, for Brie’s safe return. The man warned DeStefano against calling for help and said he would drug her teenage daughter, “have his way with her,” and murder her if she contacted law enforcement. Brie’s younger sister heard all of this over speakerphone. None of it, it turns out, was true. “Brie’s” voice was actually an AI-generated deepfake, and the kidnapper was a scammer looking to make an easy buck.
“I will never be able to shake that voice and the desperate cries for help out of my mind,” DeStefano said, fighting back tears. “It’s every parent’s worst nightmare to hear their child pleading in fear and pain, knowing that they are being harmed and are helpless.”
The mother’s story points to both troubling new avenues of AI abuse and a massive deficiency in the laws needed to hold bad actors accountable. When DeStefano contacted police about the deepfake scam, she was shocked to learn law enforcement was already well aware of the emerging issue. Despite the trauma and horror the experience caused, police said it amounted to nothing more than a “prank call” because no actual crime had been committed and no money had changed hands.
DeStefano, who says she stayed up for nights “paralyzed in fear” following the incident, quickly discovered others in her community had suffered similar scams. Her own mother, DeStefano testified, received a phone call from what sounded like her brother’s voice saying he had been in an accident and needed money for a hospital bill. DeStefano told lawmakers she traveled to D.C. this week, in part, because she fears the rise of scams like these threatens the shared idea of reality itself.
“No longer can we trust seeing is believing or ‘I heard it with my own ears,’” DeStefano said. “There is no limit to the depth of evil AI can enable.”
Experts warn AI is muddling collective truth
A panel of expert witnesses speaking before the Judiciary Committee’s subcommittee on human rights and law shared DeStefano’s concerns and pointed lawmakers toward areas they believe would benefit from new AI legislation. Aleksander Madry, a distinguished computer science professor and director of the MIT Center for Deployable Machine Learning, said the recent wave of advances in AI spearheaded by OpenAI’s ChatGPT and DALL-E is “poised to fundamentally transform our collective sensemaking.” Scammers can now create content that is realistic, convincing, personalized, and deployable at scale, even if it’s entirely fake. That opens huge opportunities for abuse by scammers, Madry said, but it also threatens general trust in shared reality itself.
Center For Democracy & Technology CEO Alexandra Reeve Givens shared those concerns and told lawmakers deepfakes like the kind used against DeStefano already present clear and present dangers to upcoming US elections. Twitter users experienced a brief microcosm of that possibility earlier this month when an AI-generated image of a supposed bomb detonating outside of the Pentagon gained traction. Author and Foundation for American Innovation Senior Fellow Geoffrey Cain said his work covering China’s use of advanced AI systems to surveil its Uyghurs Muslim minority offered a glimpse into the totalitarian dangers posed by these systems on the extreme end. The witnesses collectively agreed said the clock was ticking to enact “robust safety standards” to prevent the US from following a similar path.
“Is this our new normal?” DeStefano asked the committee.
Lawmakers can bolster existing laws and incentivize deepfake detection
Speaking during the hearing, Tennessee Senator Marsha Blackburn said DeStefano’s story proved the need to expand existing laws governing stalking and harassment to apply to online digital spaces as well. Reeve Givens similarly advised Congress to investigate ways it can bolster existing laws on issues like discrimination and fraud to account for AI algorithms. The Federal Trade Commission, which leads consumer safety enforcement actions against tech companies, recently said it is also looking at ways to hold AI fraudsters accountable using laws already on the books.
Outside of legal reforms, Reeve Givens and Madry said Congress could and should take steps to incentivize private companies to develop better deepfake detection capabilities. While there’s no shortage of companies already offering services that claim to detect AI-generated content, Madry described this as a game of “cat and mouse” where attackers are always a few steps ahead. AI developers, he said, could help mitigate risk by building watermarking systems that disclose whenever content is generated by their AI models. Law enforcement agencies, Reeve Givens noted, should be well equipped with AI detection capabilities so they can respond to cases like DeStefano’s.
Even after experiencing “terrorizing and lasting trauma” at the hands of AI tools, DeStefano expressed optimism about the potential upside of well-governed generative AI models.
“What happened to me and my daughter was the tragic side of AI, but there’s also hopeful advancements in the way AI can improve life as well,” DeStefano said.