There once was a virtual assistant named Ms. Dewey, a comely librarian played by Janina Gavankar who assisted you with your inquiries on Microsoft’s first attempt at a search engine. Ms. Dewey was launched in 2006, complete with over 600 lines of recorded dialog. She was ahead of her time in a few ways, but one particularly overlooked example was captured by information scholar Miriam Sweeney in her 2013 doctoral dissertation, where she detailed the gendered and racialized implications of Dewey’s replies. That included lines like, “Hey, if you can get inside of your computer, you can do whatever you want to me.” Or how searching for “blow jobs” caused a clip of her eating a banana to play, or how inputting terms like “ghetto” made her perform a rap with lyrics including such gems as, “No, goldtooth, ghetto-fabulous mutha-fucker BEEP steps to this piece of [ass] BEEP.” Sweeney analyzes the obvious: that Dewey was designed to cater to a white, straight male user. Blogs at the time praised Dewey’s flirtatiousness, after all.
Ms. Dewey was switched off by Microsoft in 2009, but later critics—myself included—would identify a similar pattern of prejudice in how some users engaged with virtual assistants like Siri or Cortana. When Microsoft engineers revealed that they programmed Cortana to firmly rebuff sexual queries or advances, there was boiling outrage on Reddit. One highly upvoted post read: “Are these fucking people serious?! ‘Her’ entire purpose is to do what people tell her to! Hey, bitch, add this to my calendar … The day Cortana becomes an ‘independent woman’ is the day that software becomes fucking useless.” Criticism of such behavior flourished, including from your humble correspondent.
Now, amid the pushback against ChatGPT and its ilk, the pendulum has swung back hard, and we’re warned against empathizing with these things. It’s a point I made in the wake of the LaMDA AI fiasco last year: A bot doesn’t need to be sapient for us to anthropomorphize it, and that fact will be exploited by profiteers. I stand by that warning. But some have gone further, suggesting that earlier criticisms of people who abused their virtual assistants look, in retrospect, like naive enablement. Perhaps the men who repeatedly called Cortana a “bitch” were onto something!
It may shock you to learn this isn’t the case. Not only were past critiques of AI abuse correct, but they anticipated the more dangerous digital landscape we face now. The real reason the critique has shifted from “people are too mean to bots” to “people are too nice to them” is that the political economy of AI has suddenly and dramatically changed, and along with it, tech companies’ sales pitches. Where once bots were sold to us as the perfect servant, now they’re going to be sold to us as our best friend. But in each case, the pathological response to each generation of bots has implicitly required us to humanize them. The bots’ owners have always weaponized our worst and best impulses.
One counterintuitive truth about violence is that, while dehumanizing, it actually requires the perpetrator to see you as human. It’s a grim reality, but everyone from war criminals to creeps at the pub is, to some degree, getting off on the idea that their victims are feeling pain. Dehumanization is not the failure to see someone as human, but the desire to see someone as less than human and act accordingly. Thus, on a certain level, it was precisely the degree to which people mistook their virtual assistants for real human beings that encouraged them to abuse those assistants. It wouldn’t be fun otherwise. That leads us to the present moment.
The previous generation of AI was sold to us as the perfect servant—a sophisticated PA or perhaps Majel Barrett’s Starship Enterprise computer. Yielding, all-knowing, ever ready to serve. The new chatbot search engines also carry some of the same associations, but as they evolve, they will also be sold to us as our new confidants, even our new therapists.