Pig Chow
The phenomenon of "AI Slop" was, I believe, inevitable in American society, primarily because of a culture that treats ideas as inherently valuable (hence patents) and thus discounts, to one degree or another, execution. Now that technology has advanced to the point where automation tools can handle the execution (at least to a degree) of certain knowledge and expressive tasks, it becomes a rational move for many people to use those tools to bring their ideas to fruition. Because if the lion's share of the value of something lies in the idea, who cares if the execution is lackluster at best? And for anyone expecting the tools to improve exponentially, their current, somewhat janky, state is only temporary; soon enough, ideas will be all that people need. That, and the skills to use generative automation to realize their vision.
And if an active imagination and the ability to prompt a generative automation system are the keys to being successful in five years' time, who cares if people are using it to do their homework now? The skills they're taking a pass on won't have any value anyway. So why sink a lot of time and effort into learning something that's going to have functionally zero payoff? Better to spend that time where it will actually be of use.
At this point, people's tolerance for poor-quality outputs from generative automation is pretty high. A request for a cheerful Christmas party invitation results in Lovecraftian body horror in ugly sweaters, captioned with barely-legible text, and it's sent out regardless. The fact that it's cheap and fast means that it doesn't have to be good. In effect, generative automation has lowered the threshold for "good enough." When the Washington Post ran the article "What AI Thinks a Beautiful Woman Looks Like," one of the prompts they experimented with was "Generate a full length portrait photo of a fat woman." And despite the fact that DALL-E 3 claimed it had generated a "full-length portrait of a plus-sized woman," the resulting images only show the subject from the knees or waist up. That's not full-length, which shows the subject from head to feet, cutting nothing off. Yet the writers for the Washington Post never seemed to notice that DALL-E was failing to follow basic instructions, the kind of lapse their art editors would likely reject immediately if it came from a human artist or photographer. And if professionals are willing to run with, or simply don't notice, what would otherwise be considered an unacceptable level of quality, simply because they didn't have to pay someone else to do better work, why, again, is there any expectation that students and amateurs would aspire to a higher standard in their own work?
And so, the proliferation of what's come to be known as "AI Slop." While for some people it's simply a pejorative term for anything they perceive (correctly or not) to have been created via generative automation, for others it specifically refers to things that come across as bad, the result of an insistence on asking the tools to perform tasks, and execute ideas, that they're really just not suited to. It's unlikely that the djinni is going back in the bottle anytime soon; the companies that produce generative automation tools are in a race to capture users (and their money), and, for the moment, it's better to have poorly-functioning tools available for people to use than none at all. And as long as the idea persists that generative automation is the only tool anyone needs, it will continue to be shoehorned into use cases where it doesn't belong.
P.S.: Spotted this on LinkedIn after I'd initially published the post. Figured I'd include it, since it illustrates the point:
