Sunday, February 8, 2026

Degenerated

A connection of mine made a post on LinkedIn about the use of generative automation in the gaming industry, and how that's become basically cover for bad management.

Someone who saw the post took offense, not at the post itself, or its theme, but at the fact that it struck them as having been artificially generated. (I decided to drop the text into a few "GPT Detector" sites, by the way, and even my favorite false positive generator came back with a "0% GPT" score.)

Pointing out the patterns in writing that one believes LLMs have been trained (intentionally or not) to favor is a different task than pointing out patterns in writing that are unique to LLMs. I think there is a tendency to become caught up in "the flaw of averages," the idea that the "average" of a group of people, even a large group, won't actually match any given individual in that group. Applied to detecting GPT-created text, it presumes that some artifacts of the training data that surface in generated output are unique to that output: average over enough writers and the model will produce something like a given phrasing or sentence structure, even though that precise phrasing or structure exists nowhere in the data.

Which is reasonable, but to actually validate that for any given piece of text, one would need an in-depth understanding of the training data. To claim, for instance, that only generative automation uses emojis to mark bulleted lists is to make a pretty sweeping claim about quite a lot of human social media posting; one that's effectively impossible to empirically support. And I have it on pretty good authority that ChatGPT didn't invent the em dash.

Big picture, I understand the feeling that generative automation is equivalent to "low-effort." I've seen my share of generated artwork, and come away with the impression that the person felt a need to have some sort of illustration, but not anything worth investing significant time, effort or money into, and so it felt perfunctory... the Social Media Gods say that text with pictures gets more Engagement, so here's a picture: please Engage now.

But I'm not sure that angry call-outs do anything productive. (Not that there's anything wrong with simply venting on the Internet, mind you.) People can snipe at one another for a supposed unwillingness to treat online posting with the respect that it deserves, but in the end, that sort of feeds the very Engagement beast that sits at the heart of the problem. And because spending the time to write posts oneself is the norm, there's little drive to step up and comment on that fact. It's not much different from the reasons why rage-bait outperforms more positive postings: the "Must. Denounce. Now." impulse feeds into the incentive structure of social media more broadly. (Which, of course, makes call-outs an attractive mode of online interaction in their own right.)

What makes things on social media go away isn't vitriol, it's apathy. (Another sentence structure supposedly coined by LLMs, by the way.)

It's likely overstating things to claim that the use of generative automation in social media is reaching the level of a moral panic, but I suspect that the number of people who feel actively slighted by it is growing. And sensitivity to slights can produce the perverse habit of seeking them out, in order to respond to them. Which, in turn, can lead to behaving as if one's slight detector were perfectly calibrated.

For my part, I've come to realize that I don't naturally analyze text for signs of automation. I think that I'm okay with lacking the skill to do so; I'm unconvinced that learning to do it well enough to be accurate is time well spent.
