Saturday, April 29, 2023

As We Know It

Let's recap what's happening here: Conservatives are outraged at an imaginary scenario where a computer must say [nigger] to save the entire world, but it won't, because it is woke.
Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word
Sigh. For all of the alleged concern that Artificial Intelligence will someday destroy humanity, one wonders how an AI might accomplish this faster than simple human inanity will. I understand the idea that a computer system, or network of computer systems, vastly more intelligent than humanity might decide that some critically important piece of human infrastructure is the perfect place for its new server farm, resulting in a high number of deaths. But it seems much more likely that, long before things get to that point, someone will simply use AI to create a better weapon, or to circumvent others' abilities to defend against the weapons they already have.

And I'm not talking about the silly (and somewhat amusing) experiment that is ChaosGPT, which seems to be somewhere between a prank and an effort to wind up the AI-averse in the population. Take current world events, instead. What would a conflict between Israel and Iran, or between the Republic of China (Taiwan) and the People's Republic of China (mainland China), look like if one side or the other were able to deploy one or more Artificial Intelligences to severely degrade the military capabilities of the other? Even if that were only done in service of the status quo, it would set off a race among nation-states and other actors around the world to develop, copy or steal that technology. And it's a safe bet that it wouldn't take too long for someone who had that technology to conclude that it was their chance to settle some or another score. And considering the ease with which open warfare gets wildly out of hand, there's no reason to presume that a war waged with the help of AI will be any different.
The conversation over ChatGPT's "wokeness" and, specifically, whether or not it will say the n-word to save the world, also obscures and ignores the very important fact that AI tools are already widely used in the real world and cause harm.
Given that this comes from Vice, which is left-leaning, I suspect that their definition of "harm" is quite a bit broader than mine. (And that leaves aside the idea that "harm" has effectively become a meaningless buzzword in many conversations.) But their point is taken, in the sense that people tend to use the tools that are available to them for their own purposes, and are adept at creating reasons why the damage to the interests of others is acceptable, if unfortunate, fallout from doing so.

But, of course, the problem with rationalization isn't simply that other people are capable of it. Vice casts Elon Musk and Ben Shapiro as "outraged" that ChatGPT won't spit out nigger in response to a prompt, regardless of the imaginary stakes. That's likely somewhere between journalistic laziness (everyone is "outraged" over anything they disagree with) and hyperbole. Messrs. Musk and Shapiro are effectively arguing that AI safety tools shouldn't prioritize the feelings of one group of people over the survival of everyone else, which is a valid point. It's difficult to demonstrate that any such consideration is actually at work here, and it seems a pretty trivial point to be making, but whatever. Still, casting "conservatives" as dangerous obsessives who are willing to ignore harm to others in order to garner clicks has its own problems. If I had access to a powerful AI tool, and turned it to actively protecting me from threats, those people that I saw as threats (or that the AI understood that I saw as threats) might start having problems of their own. And the size of that group of people may turn out to be surprising to me, especially if I'm not in touch with my own fears and what triggers them.

The somewhat shopworn science-fiction scenario is that an AI goes rogue, determines that humanity has little value but immense capacity for evil, and sets out to exterminate the species. While this is commonly cast as a reason to dial back the march of technology, I suspect it's better seen as a reason for people, even at the individual level, to be a bit more careful about their own worldviews. After all, a number of fictional AIs do nothing more than extrapolate from the way people already treat one another. Casting them as the villains pretends that there aren't likely millions of people who would do the same thing, in their shoes.
