Saturday, May 6, 2023

Wrapped

I'm not really concerned about the advance of Artificial Intelligence. As for why not, stop me if you've heard this one. And you likely have; I've seen it multiple times.

So there's this girl in school, and she and her classmates are being taught about the Salem Witch Trials, and the teacher comes up with this supposedly clever in-class activity that hints at the idea that maybe people were simply reacting to hearsay. The story ends with this aphorism: "Do not allow the negative and hateful efforts of some to divide and destroy us. We must remain united against those who would do so..!!"

The children are described as "teens" in the most recent version that I read. But we learn nothing else about them. What grade level, what school, where they are, what year this happened, who the clever teacher was: none of that detail is included. When asked about the story, the person who posted it to LinkedIn said "I found the story on Facebook a few months ago. It was anonymous."

Like a lot of the "life lessons taught to children/teens" stories that pop up on LinkedIn, the story has enough holes in it that it seems reasonable to doubt its veracity. The children act in ways that don't make much sense outside of the fact that they're the ones driving the story; the teacher comes up with a "clever" way to teach a life lesson that appears to fundamentally misunderstand the nature of the events or concepts being used as a backdrop; and in each case valuable classroom time is spent teaching a philosophical point that is completely unrelated to any sort of academic pursuit.

But they speak to people, and so they are passed along and repeated. This is not to say that everyone treats them as fact, but the stories persist, even though the bland aphorisms at the heart of them make enough sense on their own.

Sure, with the help of AI, people can create a million dubious stories as wrappers for this or that concept that they want people to believe. But people are already doing that. So to a degree, concerns about AI are concerns about the loss of external willpower, the same thing that telephone marketers complained about when the Do Not Call list was being created.

And there is nothing wrong with external willpower. Preventing salespeople from cold-calling the general public, or would-be social influencers from flooding social media with dubious messages, is fine. But people are always going to be looking for ways to reach what they believe to be a receptive audience, and laws and regulations won't be able to block them consistently, or forever.

If the only way that a person will listen to a message that speaks out against "Shunning, scapegoating, stereotyping, and dividing" is if it shows up in a social media feed wrapped in a narrative of some sort, and is then accepted uncritically, the problem isn't going to be a million random AI-generated narratives; it's the uncritical acceptance of narratives that feed a person what they want to hear.
