Thursday, January 26, 2023

Boogeyman

Most of us are only now getting a glimpse of just how smart artificial intelligence has become. It's awe-inducing — and terrifying.
How ChatGPT became the next big thing
Terrifying? Really? Why? What happened? Are AIs already rising up, seeking to take over the world and looking to kill random waitresses to prevent their future children from leading a rebellion?

When people talk about "bias" in "the media," the general knock is that journalists, rather than reporting the brute facts, offer up skewed versions of events to benefit the shadowy people paying them to deliberately mislead the public. But journalism is a business, and outlets that don't charge subscription fees have to keep the lights on and the doors open through advertising. That means drawing in as many readers as possible with stories that cost as little as possible to produce.

And that leaves us with Axios seeking to scare people with hyperbolic language about artificial intelligence just as it starts to become broadly useful.
Everyone seems to see an array of uses for the technology in ways that are both exciting and scary.
And I get it. News stories that match the way the target audience already sees the world are more likely to be passed along to others. And playing up the uncertainty that systems like ChatGPT generate aligns neatly with an audience that sees itself as under threat from technologies that their employers (and wealthy people in general) will be able to use to replace them in the world of work. Or that militaries (or police forces) could use to injure or kill them remotely. It plays into a sense that the world is unjustifiably hostile to the interests of "the people," in a way that reinforces a reader's understanding that they're a victim (or a potential victim), and thus, one of the "good people."

But it also simply creates and accentuates anxieties, by encouraging people to see themselves as helpless. (Because if there were something concrete to be done about things, there would be no need to be afraid.) I don't see that as helpful. But then again, I'm not in charge of growing readership numbers.

Hobgoblins are exciting sometimes precisely because they are scary. They create an emotional rise that a good number of people seek out. After all, horror movies and other media intended to frighten audiences, at least for as long as they're engaged with them, are a multi-billion-dollar business. But it seems an unworthy motivation for news outlets, because ginning up people's fears of the world around them has consequences that can be both serious and long-lasting.

In the end, the problem isn't the technology. Skynet is likely to forever remain a work of fiction. Other people, on the other hand, can be prone to see life as a competition, and a zero-sum one at that. And that, I think, is the underlying fear that the language in these sorts of stories both provokes and plays into. People understand that their fellow Homo sapiens can be more than ready, willing and able to "screw them over for a percentage," to paraphrase Ellen Ripley in Aliens. Artificial intelligence is simply another tool that can be used to accomplish that. But emphasizing people's fear of their vulnerability doesn't, in itself, do anything to tell people how to reduce that vulnerability. And that, I think, is what a lot of people really want to know.
