Using the chatbot is more direct and maybe more engaging, says Donald Findlater, the director of the Stop It Now help line run by the Lucy Faithfull Foundation. After the chatbot appeared more than 170,000 times in March, 158 people clicked through to the help line’s website. While the number is “modest,” Findlater says, those people have made an important step. “They’ve overcome quite a lot of hurdles to do that,” Findlater says. “Anything that stops people just starting the journey is a measure of success,” the IWF’s Hargreaves adds. “We know that people are using it. We know they are making referrals, we know they’re accessing services.”
Pornhub has a checkered reputation for the moderation of videos on its website, and reports have detailed how women and girls had videos of themselves uploaded without their consent. In December 2020, Pornhub removed more than 10 million videos from its website and started requiring people uploading content to verify their identity. Last year, 9,000 pieces of CSAM were removed from Pornhub.
“The IWF chatbot is yet another layer of protection to ensure users are educated that they will not find such illegal material on our platform, and referring them to Stop It Now to help change their behavior,” a spokesperson for Pornhub says, adding that the company has “zero tolerance” for illegal material and clear policies around CSAM. Those involved in the chatbot project say that Pornhub volunteered to take part, that it isn’t being paid to do so, and that the system will run on Pornhub’s UK website for the next year before being evaluated by external academics.
John Perrino, a policy analyst at the Stanford Internet Observatory who is not connected to the project, says there has been a push in recent years to build new tools that use “safety by design” to combat harms online. “It’s an interesting collaboration, in a line of policy and public perception, to help users and point them toward healthy resources and healthy habits,” Perrino says. He adds that he has not seen a tool exactly like this being developed for a pornography website before.
There is already some evidence that this kind of technical intervention can make a difference in diverting people away from potential child sexual abuse material and in reducing the number of searches for CSAM online. For instance, as far back as 2013, Google worked with the Lucy Faithfull Foundation to introduce warning messages when people search for terms that could be linked to CSAM. There was a “thirteen-fold reduction” in the number of searches for child sexual abuse material as a result of the warnings, Google said in 2018.
A separate study in 2015 found that search engines that put blocking measures in place against terms linked to child sexual abuse saw the number of searches drastically decrease, compared to those that didn’t. One set of advertisements designed to direct people looking for CSAM to help lines in Germany saw 240,000 website clicks and more than 20 million impressions over a three-year period. A 2021 study that looked at warning pop-up messages on gambling websites found the nudges had a “limited impact.”
Those involved with the chatbot stress that they don’t see it as the only way to stop people from finding child sexual abuse material online. “The solution is not a magic bullet that is going to stop the demand for child sexual abuse on the internet. It is deployed in a particular environment,” Sexton says. However, if the system proves successful, he adds, it could then be rolled out to other websites or online services.
“There are other places that they will also be looking, whether it’s on various social media sites, whether it’s on various gaming platforms,” Findlater says. However, if this were to happen, the triggers that cause the chatbot to pop up would have to be evaluated and the system rebuilt for the specific website it is on. The search terms used on Pornhub, for instance, wouldn’t work on a Google search. “We can’t transfer one set of warnings to another context,” Findlater says.