Saturday, June 24, 2023

Most of the People, Most of the Time

In a recent episode of Freakonomics Radio, Slate legal reporter Dahlia Lithwick makes what I think is an often-overlooked observation about the business of news:

One of the things that worries me about journalism as we are practicing it now is the monetization of scaring people, right? If you scare people’s face off, they will click. And we know this. But also, I think the commodification of: “I will find out what your anxieties are, and I will feed you a thousand things that will convince you that they’re coming for that.” That is part of the business model here.

As generative "artificial intelligence" has rapidly become an increasingly commonplace tool for doing all sorts of things, journalists and media outlets have quickly realized that stories noting how the Four Horsemen of the Information Apocalypse (terrorists, drug dealers, kidnappers, and child pornographers, depending on who you ask) could use "AI" to wreak havoc generate precious clicks. They're aided and abetted in this (or maybe they're simply being used) by law enforcement, who have come to realize that if they scare people's face off, they just might pay a modicum of attention for a moment or two.

Which brings us to an article in Axios on "How AI is helping scammers target victims in 'sextortion' schemes." (Why "sextortion" is in quotes, but "AI" is not, is something of a mystery.) Not that the headline is accurate; internet thieves aren't using the tools of technology to target people, but rather to make it more likely that the people they target will pay up. And since the Horseman being referenced here is child-pornographer-adjacent, there is, of course, a "won't somebody think of the children" angle.

"The question becomes, 'How do we develop a law now that's going to protect children 10 years from now?'" [Amanda] Manyana said.

If that's the question, we're in trouble, because it completely misses the point.

Sextortion schemes have typically worked in a few different modes. One popular one was to convince some horny young adult male to send pictures to what they thought was an amazingly attractive woman who 1) they just happened to stumble across on the Internet and 2) was immediately into them, but needed photographic evidence of how hot they were. The amazingly attractive woman would then suddenly (and pretty much inevitably) turn out to be an extortionist, who threatened to release the photos to all of the mark's online contacts if not paid. The mark, who apparently had been telling everyone they knew that they'd taken an ironclad vow of chastity and moved into a monastery, would pay up to avoid being outed... as a horny young adult.

The addition of "AI" to the scheme ups the ante somewhat, because now it's supposedly easy as pie for an extortionist to scrape the internet for pictures of random people and use them to create believable videos of said random people in sexual acts. So laws. Must. Be. Passed.

But the underlying problem, which never seems to come up in any of these articles, is somewhat different. If some unknown party sent you a sexually explicit video out of the blue, claiming it was of someone you knew, why would you a) believe them, and b) seek to confront, or even punish, the person in the video based on that alone? As the Axios article points out: "So-called 'deepfakes' and the threats they pose have been around for years." So why do people think that they should act on anything they see? Especially without making at least some effort to corroborate it?

While much has been made of people's willingness to use motivated reasoning to disbelieve likely accurate information because it conflicts with what they want to be true, there is also the problem of people's skepticism being behind the times.

What frightens people into paying extortionists who claim to have explicit photos or videos is the sense that their denials will fall on deaf ears among people who supposedly love, or at least like, them, because some shady character on the Internet has shown up peddling evidence that undermines what turns out to be a shaky belief in that person's worth and/or worthiness. There isn't a law that will protect people from that for 10 minutes, let alone 10 years, in a society where the constant perception of scarcity often means that supposed evidence disqualifying people from aid or opportunities is seen as a helpful tool for making hard choices significantly easier.
