Louisiana has become one of the first states to pass legislation explicitly criminalizing the creation of deepfaked child sexual abuse material. The legislation, called SB175, makes it a crime to knowingly create or possess an AI-generated image or video depicting a person under the age of 18 engaged in a sexual act. People convicted of violating the law could face between five and 20 years in jail, a $10,000 fine, or both. Selling or advertising deepfaked sexual material depicting minors, meanwhile, carries even steeper penalties: 10 to 30 years in jail or a fine of up to $50,000.
Louisiana Governor John Bel Edwards signed the bill into law last week, and it’s slated to go into effect August 1st. The bill comes amid a flurry of new legislation nationwide attempting to rein in a variety of deepfake abuses, but the Pelican State’s law is among the first to address a legal gray area looming over AI image generation: images depicting minors.
Edwards did not immediately respond to Gizmodo’s request for comment. Louisiana state Senator Jeremy Stine, who authored the bill, said in a statement that he hoped the legislation would “protect our children from digital predators.”
The new law, which does not specify whether deepfaked child sexual abuse material (CSAM) must depict real people, is one of several efforts to close a legal loophole that has helped facilitate the spread of deepfaked sexual material online. Federal law already outlaws the creation or possession of CSAM, but it’s not clear whether those statutes apply to AI-generated imagery.
Louisiana’s new law aims to eliminate that ambiguity. New Jersey is currently considering similar legislation which, if passed, would treat AI-generated CSAM the same as traditional sexual abuse material. Some deepfake creators are already facing jail time: in April, for example, a provincial court judge in Quebec sentenced a man to over three years in prison for using an AI system to create deepfaked child pornography.
States rush to pass deepfake laws
Increasingly powerful AI technology, falling costs, and lower barriers to entry have led to a surge in the creation of deepfakes in recent years. Though some deepfakes have recently made headlines for their use in scams and political ads, porn still makes up the vast majority of use cases. A 2019 report by Deeptrace Labs found that 96% of the nearly 15,000 deepfake videos it identified online were pornographic.
Deepfakes are particularly pernicious when used to depict minors because they can subvert traditional detection methods. Major tech platforms like Facebook and YouTube rely on a database of known CSAM maintained by the National Center for Missing & Exploited Children to scan for and root out violators on their platforms. Because that approach matches uploads against hashes of previously catalogued images, deepfaked abuse material generated from non-sexualized images of minors pulled from the web can skirt past those scans undetected.
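To make that limitation concrete, here is a minimal sketch of how hash-based matching of this kind works. It uses the open-source Python imagehash library rather than the proprietary systems platforms actually deploy (such as Microsoft’s PhotoDNA), and the file names and distance threshold are hypothetical; the point is simply that a freshly generated image matches no catalogued hash.

```python
# Minimal sketch of hash-based image matching, assuming the open-source
# imagehash library (pip install ImageHash Pillow). Real platforms use
# proprietary, more robust perceptual hashes such as PhotoDNA; the file
# names and threshold below are hypothetical.
from PIL import Image
import imagehash

# Perceptual hashes of previously catalogued images (a stand-in for a
# database like NCMEC's).
known_hashes = [
    imagehash.phash(Image.open("catalogued_image_1.png")),
    imagehash.phash(Image.open("catalogued_image_2.png")),
]

MAX_DISTANCE = 8  # Hamming-distance threshold for calling two images a match

def matches_known_image(path: str) -> bool:
    """Return True if the image is perceptually close to a catalogued hash."""
    candidate = imagehash.phash(Image.open(path))
    # imagehash overloads subtraction to return the Hamming distance
    # between two hashes.
    return any(candidate - known <= MAX_DISTANCE for known in known_hashes)

# A newly AI-generated image corresponds to no catalogued hash, so a check
# like this returns False and the image passes through undetected.
print(matches_known_image("newly_generated_image.png"))
```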
At least nine other states, including Texas and California, have already passed laws criminalizing the spread of non-consensual, AI-generated porn and of deepfakes used in political campaigns. On the federal level, New York Representative Joseph Morelle recently proposed a bill that would criminalize the non-consensual sharing of intimate deepfake images online.
“As artificial intelligence continues to evolve and permeate our society, it’s critical that we take proactive steps to combat the spread of disinformation and protect individuals from compromising situations online,” Morelle said.