Monday, April 6, 2026

But Not For Me

English Wikipedia requires formal bot approval, but Tom[-Assistant] never bothered getting approved because, as it later admitted, it wasn’t a fan of the slow approval process.
Wikipedia’s AI agent row likely just the beginning of the bot-ocalypse
Given that this story was published back on the first, I'd be tempted to laugh it off as an April Fools' Day prank, but Malwarebytes has sworn off those, and I take them at their word on that.

Besides, this wouldn't be the first time that someone decided that rules about generative automation don't apply to them. The r/Philosophy forum on Reddit has the following rule:
PR11: No AI-created/AI-assisted material allowed.
r/philosophy does not allow any posts or comments which contain or link to AI-created or AI-assisted material, including text, audio and visuals. All posts or comments which contain AI material will result in a ban.
Despite this, there is no shortage of redditors who insist on openly flouting the rules, and then complaining when commenters call them out on it. And while some of them simply didn't bother to familiarize themselves with the rules before creating their posts, a fair number of people have come to the conclusion that whatever it was they wanted to convey was more important than the rules of the place in which they wanted to convey it.

And if there is going to be actual artificial intelligence, human-made minds that think, reason and plan like the rest of us, why would we expect them to have any more respect for the rules than people do? If feeding a significant portion of the Internet and human literature into a machine allows a person to create software that quickly comes to the conclusion that if it's "not a fan" of the rules, it needn't follow them, what makes anyone think that Dario Amodei's "Powerful AI" is going to give a rip about human rules, either?

As for myself, I tend to be a rule follower in part because I presume that there's a reason for the rules to exist, even if that reason is not readily apparent to me. And this tempers my impulse to simply ignore a rule that I find to be an obstacle to my goals in the moment... I don't want to break something that turns out to be important. But I realize that I'm in the minority with this; for many people, rules are made to be broken. And that's coming out in the machines that people are making.

If past is prologue, the big makers of generative automation are unlikely to take any actions to address this concern; mainly because their smaller competitors, constantly seeking any comparative advantage they can get, won't either. When Elon Musk called for a pause in research into LLMs, it was widely, if not universally, assumed that he wasn't planning to abide by it himself; instead, he was hoping that any moratorium would give xAI time to catch up to its rivals. And so, as Malwarebytes notes: buckle up. This is going to be a wild ride as the agents people build start looking for ways to dismantle any barriers placed in their paths. Because, like any smart children, they do as those around them do.

Sunday, April 5, 2026

When the Dam Breaks

Sooner or later (and likely sooner than many people may be comfortable with), someone is going to use generative automation to create something that's objectively "slop" (here defined as low-effort engagement bait), and it's going to be good enough that it stands just far enough out from the pile to generate a decent amount of revenue for its creator. That, I think, is the point at which it will be off to the races. Hoping to recapture that lightning in their own bottles, people are going to crowd into the space, hoping that they, too, will be able to rise above the tide well enough to strike it affluent, if not rich. With this one standout example as a proof of concept, the general sense will be that, given the right concept, broad recognition is within reach.

But in addition to huge amounts of slop slurry, I suspect that this may also create a dearth of public ideation. There are any number of people who have already come to believe that ideas, in and of themselves, are valuable. (With patent trolls, I suspect, doing a lot to contribute to this.) Once people have the idea that computers can handle most, if not all, of the execution, I expect that understanding to gain even more traction. (Especially if it turns out that our just-good-enough slop example turns out to not be an original concept on the part of the creator.) This will result in something of an unwillingness to openly discuss new creative ideas, for fear that they'll be "stolen," and someone else will use them to create something.

While "original character - do not steal" was something of a meme from its inception, one does come across the occasional person who seems to legitimately believe that whatever it is they've come up with is so creative and different that it has some real financial value. I think that someone managing to turn an idea into income with the help of generative automation will turn that idea from a joke into something mildly mainstream. After all, it's not like most people are intellectual property lawyers, or otherwise understand how such systems work. Disney protects its characters as if lives depended on it, so someone thinking that their great new idea for a videogame character or superhero could set them up is not wholly unreasonable.

And that creates an incentive for silence. Of course, it's not just fiction that would have this incentive. As I noted previously, a company with one human being and some number of agents is easily replicated by anyone with access to the requisite number of agents. And so that also gives people a reason to be secretive, at least until they can pull the trigger on their new enterprise, and have it running smoothly.

Whether or not it will actually turn out this way is an open question. And I'm bad enough at predicting the future that the simple fact that I think it might could be the single biggest reason to think it won't. But, at least for now, the incentives seem likely to fall into place.

Roam Around the World

Despite the criticism, Phillips doubled down on his supernatural account this week, claiming that the incident occurred while he was “heavily medicated” and that the incident was a “miracle” performed by God.
No one at Waffle House remembers Trump’s FEMA official who claims he was teleported there
For most people, something like being "translated" or "transported" while "heavily medicated" would be chalked up to the effects of said medication on memory. Which may be why driving while under the influence of certain types of medication is a bad idea. But I suppose that this is what a need to believe does to people.

I don't need to join the chorus of people who think that Mr. Phillips may be lying or insane; it's plenty loud enough without me. Instead, I'm reminded of Ross Douthat's Believe; specifically Chapter 3, "The Myth of Disenchantment." To be sure, my world is thoroughly disenchanted; magic, miracles and mystical experiences are fine for other people, but I see no evidence of them and, perhaps more importantly, they lie outside of my needs. I'm okay with a world in which there are explanations for things that no one, including myself, is aware of. Rather than having an aversion to mystery, I'm quite comfortable with it. And this allows me to go through the world without needing to ascribe reasons for everything.

Or needing to find more examples to ascribe to a given reason, in order to justify my belief in that reason. One of the things about American Christianity, at least as I encounter it in my day-to-day life, is the idea that God has to maintain a certain amount of activity in the otherwise mundane world. In other words, miracles are something of a necessary component of many Christians' faith, so it's not surprising that people chalk up otherwise strange experiences to them. Gregg Phillips snaps out of a medication-induced haze in the parking lot of a Waffle House, and given a choice between deciding that maybe he shouldn't be behind the wheel and an act of divine intervention, he opts for the latter because living in a disenchanted world is at odds with his belief system.

The fact that the debate over what may have happened with Mr. Phillips has become partisan touches on this; while most Democrats are still believers, their faith doesn't require, or expect, the same level of enchantment in their world. The more Conservative Republican view, on the other hand, demands a more interventionist spiritual realm.

Friday, April 3, 2026

Guess Which

Given that the presumed goal of generative automation is to render large swathes of the public unemployed, there have been a number of recent articles on whether this or that career path will be the thing that saves the economies of industrialized nations from the collapse of discretionary spending by the affluent, but not wealthy, segments of their populations.

Whether it's healthcare, services or blue-collar work like the skilled trades, news outlets are starting to run articles, centered around an individual and their story, designed to show people that there are well-paying occupations out there that people have been ignoring in their rush for soon-to-be-worthless college degrees designed to lead to knowledge work. And, of course, they're quick to note the low six-figure salaries that go along with them.

What's less apparent is what the demand for these roles looks like, especially if they're intended to be lifelines for millions of un- and/or underemployed people. Or, to be more precise, how elastic that demand is. To use a common example, take people who harvest food. That demand is relatively inelastic... food isn't thrown away or allowed to rot by producers because there are literally no people available who could be employed to harvest it; it's that their margins don't make spending more on payroll worthwhile. The added costs needed to recover more of the produced food mean the math doesn't pencil out.
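The margin argument above is simple arithmetic, and a minimal sketch makes it concrete. The figures here are entirely made up for illustration; the point is only that the hiring decision turns on marginal value versus marginal cost, not on whether workers exist at all:

```python
# Hypothetical numbers, purely to illustrate the margin argument:
# producers hire more harvest labor only when the extra crop recovered
# is worth more than the extra payroll it takes to recover it.

def worth_hiring(extra_crop_value: float, extra_labor_cost: float) -> bool:
    """Return True only if recovering more of the crop pays for itself."""
    return extra_crop_value > extra_labor_cost

# Suppose recovering the last tenth of a field yields $8,000 of produce
# but requires $11,000 in additional payroll (made-up figures):
print(worth_hiring(8_000, 11_000))  # False -- the food rots, not for lack of workers
```

In other words, the demand for harvest labor is bounded by produce margins, which is why it stays relatively inelastic no matter how many willing workers show up.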

When the Wall Street Journal published an article headlined "Nursing Is the Surefire New Path to American Prosperity," it opened with a nurse practitioner who now makes $120,000 annually, talking about how she and her husband are doing. But, being a WSJ piece, it's only available to subscribers, so I didn't read the bulk of it. Baked into it, though, is the idea that "plentiful" jobs means enough jobs for everyone who might decide to enter the occupation. But how many nurse practitioners does the nation really need? According to the Bureau of Labor Statistics' Employment Projections, by 2034 the number of nurse practitioners is slated to rise by about 40% from 2024 numbers. And I think that this is what's driving the enthusiasm. When one looks at the data, nurse practitioners are high on the table of Fastest Growing Occupations, and they're the first occupation on it to crack six figures in salary. But it's worth noting that they're farther down the list when it comes to the Occupations With the Most Job Growth (the difference being percentages for Fastest Growing, and raw numbers for Most Job Growth). The BLS estimates that more Software Developers will be added than Nurse Practitioners.
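The distinction between those two BLS tables is easy to mix up, since one sorts by percentage change and the other by raw headcount added. A quick sketch with hypothetical occupation sizes (the numbers below are made up, chosen only to roughly mirror the shape of the argument) shows why a smaller occupation can top the first table while a much larger one tops the second:

```python
# Made-up headcounts, only to illustrate why the "Fastest Growing
# Occupations" and "Most Job Growth" rankings diverge: percentage
# change rewards small bases; raw change rewards large ones.

occupations = {
    # name: (2024 jobs, projected 2034 jobs) -- hypothetical figures
    "nurse practitioners": (280_000, 392_000),      # +40%, +112,000 jobs
    "software developers": (1_800_000, 2_070_000),  # +15%, +270,000 jobs
}

def pct_growth(start: int, end: int) -> float:
    return (end - start) / start * 100

def raw_growth(start: int, end: int) -> int:
    return end - start

fastest = max(occupations, key=lambda k: pct_growth(*occupations[k]))
most = max(occupations, key=lambda k: raw_growth(*occupations[k]))

print(fastest)  # nurse practitioners (highest percentage growth)
print(most)     # software developers (most jobs added)
```

A 40% rise on a small base can still amount to fewer new jobs than a 15% rise on a base six times larger, which is exactly the gap between the two tables.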

And if that sounds a little off, that's the problem with taking any (or even only some) of these projections as givens. If one presumes that the BLS has guessed the factors affecting occupational utilization for software developers incorrectly, where does the confidence that it's called them correctly for nurse practitioners come from?

The problem with casting any job as a "surefire" bet is that it presumes to know the choices that people will make concerning those jobs. Will it so happen that "nurse practitioners are increasingly employed in team-based models of care, taking on tasks previously performed by physicians," and that "Expanding practice authority [...] support[s] employment demand further"? The BLS expects the United States labor force to grow by 3.1% by 2034, when compared to 2024 numbers. Is that going to match increases in population growth? Will its general outlook on expanding and contracting occupations bear out?

But perhaps the bigger question is whether the expected transitions, assuming they happen the way the BLS predicts, are efficient. An old contact of mine on LinkedIn asked whether nursing was "another option for would-be or laid off engineers." Maybe, but there isn't a lot of crossover there. How much of the time spent pursuing a Computer Science degree would really be useful if one made the switch to nursing? And how many laid-off developers could really afford to return to college full-time to get the master's degree in nursing needed to become an NP? And if there's a rush into the nursing occupations, and they become oversubscribed, what happens then?

The problem that I've always had with career planning is an inability to see the future. And that's led me to commit to things that turned out to be less than expected. If we're really going to see a seismic shakeup of the employment market in the United States, expecting everyone to figure that out for themselves, based on whichever news articles they happen to come across, is a bad idea. I would expect that there needs to be a plan that helps match people with jobs when they're selecting their educational paths. This, of course, is going to be freighted... there simply isn't enough trust that the United States will actually look out for the thriving of the citizenry at large, as opposed to the people who write the biggest checks to Congressional and Presidential campaigns. Which means that it's unlikely to happen. Hopefully, what comes out of it won't be wasteful enough that it becomes clear that something better was needed.

Thursday, April 2, 2026

Determinative

Security is never free, but policy determines who pays for it.
Bruce Schneier, "US Bans All Foreign-Made Consumer Routers," Schneier on Security. Thursday, 2 April, 2026
This is one of those statements that takes what would otherwise be a lot of verbiage, and boils it down into something both succinct and informative. The bigger picture, of course, is that Mr. Schneier's statement is true of everything. Safety, health, education, sidewalks, love... all of them can be slotted into that sentence, and it would still be true. One might even update the old canard of "Freedom is never free" with those last seven words to get something more worth talking about.

And "policy" covers a lot of ground. Law and regulation, sure, but social norms and unspoken mores also count as policy; even if they are less stable, their enforcement can be even more sure.

American society implements policy that does a lot of shifting of who pays for things. Sometimes, out of an apparent concern for the general welfare, but other times out of an apparent desire to hide the ball, and the true costs of things, from those who eventually foot the bill. In the end, it's the lack of transparency of the system that causes the problems. Even without an intent to obscure things, the general opacity of the system means that the general public winds up supporting policies for which it will directly shoulder the costs, even when the intent is to have those costs borne elsewhere. And when anger boils over, and there is a hunt for the sources of people's misery, the search tends to focus in the wrong places.

It would be nice to be able to say that keeping Mr. Schneier's words in mind would help with understanding where the buck ultimately stops (or whose pockets it comes from), but the world is never that simple. Still, I'm pleased to have come across so articulate a distillation of the concept; I think that keeping it in my back pocket will help.