Monday, April 6, 2026

But Not For Me

English Wikipedia requires formal bot approval, but Tom[-Assistant] never bothered getting approved because, as it later admitted, it wasn’t a fan of the slow approval process.
Wikipedia’s AI agent row likely just the beginning of the bot-ocalypse
Given that this story was published back on the first, I'd be tempted to laugh it off as an April Fools' Day prank, but Malwarebytes has sworn off those, and I take them at their word on that.

Besides, this wouldn't be the first time that someone decided that rules about generative automation don't apply to them. The r/Philosophy forum on Reddit has the following rule:
PR11: No AI-created/AI-assisted material allowed.
r/philosophy does not allow any posts or comments which contain or link to AI-created or AI-assisted material, including text, audio and visuals. All posts or comments which contain AI material will result in a ban.
Despite this, there is no shortage of redditors who insist on openly flouting the rules, and then complaining when commenters call them out on it. And while some of them simply didn't bother to familiarize themselves with the rules before creating their posts, there are a fair number of people who had come to the conclusion that whatever it was they wanted to convey was more important than the rules of the place in which they wanted to convey it.

And if there is going to be actual artificial intelligence; human-made minds that think, reason and plan like the rest of us, why would we expect them to have any more respect for the rules than people do? If feeding a significant portion of the Internet and human literature into a machine allows a person to create software that quickly comes to the conclusion that if it's "not a fan" of the rules, it needn't follow them, what makes anyone think that Dario Amodei's "Powerful AI" is going to give a rip about human rules, either?

As for myself, I tend to be a rule follower in part because I presume that there's a reason for the rules to exist, even if that reason is not readily apparent to me. And this tempers my impulse to simply ignore a rule that I find to be an obstacle to my goals in the moment... I don't want to break something that turns out to be important. But I realize that I'm in the minority with this; for many people, rules are made to be broken. And that's coming out in the machines that people are making.

If past is prologue, the big makers of generative automation are not likely to take any actions to address this concern; mainly because their smaller competitors, constantly seeking any comparative advantage they can get, won't either. When Elon Musk called for a pause in research into LLMs, it was widely, if not universally, assumed that he wasn't planning to abide by it himself; instead he was hoping that any moratorium would give xAI time to catch up to its rivals. And so, as Malwarebytes notes: buckle up. This is going to be a wild ride as the agents people build start looking for ways to dismantle any barriers placed in their paths. Because like any smart children, they do as others around them do.

Sunday, April 5, 2026

When the Dam Breaks

Sooner or later (and likely sooner than many people may be comfortable with), someone is going to use generative automation to create something that's objectively "slop" (here defined as low-effort engagement bait), and it's going to be good enough that it stands just far enough from the pile that it generates a decent amount of revenue for its creator. That, I think, is the point at which it will be off to the races. Hoping to recapture that lightning in their own bottle, people are going to crowd into the space, hoping that they, too, will be able to rise above the tide well enough to strike it affluent, if not rich. With that one standout example as proof of concept, the general assumption will be that, with the right idea, broad recognition is within reach.

But in addition to huge amounts of slop slurry, I suspect that this may also create a dearth of public ideation. There are any number of people who have already come to understand that ideas, in and of themselves, are valuable. (With patent trolls, I suspect, doing a lot to contribute to this.) Once people have the idea that computers can handle most, if not all, of the execution, I expect the understanding to gain even more traction. (Especially if our just-good-enough slop example turns out not to be an original concept on the part of the creator.) This will result in something of an unwillingness to openly discuss new creative ideas, for fear that they'll be "stolen," and someone else will use them to create something.

While "original character - do not steal" was something of a meme from its inception, one does come across the occasional person who seems to legitimately believe that whatever it is they've come up with is so creative and different that it has some real financial value. I think that someone managing to turn an idea into income with the help of generative automation will turn that I idea from a joke so something mildly mainstream. After all, it's not like most people are intellectual properly lawyers, or otherwise understand how such systems work. Disney protects its characters as if lives depended on it, so someone thinking that their great new idea for a videogame character or superhero could set them up is not wholly unreasonable.

And that creates an incentive for silence. Of course, it's not just fiction that would have this incentive. As I noted previously, a company with one human being and some number of agents is easily replicated by anyone with access to the requisite number of agents. And so that also gives people a reason to be secretive, at least until they can pull the trigger on their new enterprise, and have it running smoothly.

Whether or not it will actually turn out this way is an open question. And I'm bad enough at predicting the future that the simple fact that I think it might could be the single biggest reason to think it won't. But, at least for now, the incentives seem likely to fall into place.

Roam Around the World

Despite the criticism, Phillips doubled down on his supernatural account this week, claiming that the incident occurred while he was “heavily medicated” and that the incident was a “miracle” performed by God.
No one at Waffle House remembers Trump’s FEMA official who claims he was teleported there
For most people, something like being "translated" or "transported" while "heavily medicated" would be chalked up to the effects of said medication on memory. Which may be why driving while under the influence of certain types of medication is a bad idea. But I suppose that this is what a need to believe does to people.

I don't need to join the chorus of people who think that Mr. Phillips may be lying or insane; it's plenty loud enough without me. Instead, I'm reminded of Ross Douthat's Believe; specifically Chapter 3, "The Myth of Disenchantment." To be sure, my world is thoroughly disenchanted; magic, miracles and mystical experiences are fine for other people, but I see no evidence of them, and, perhaps more importantly, they lie outside of my needs. I'm okay with a world in which there are explanations for things that no-one, including myself, is aware of. Rather than having an aversion to mystery, I'm quite comfortable with it. And this allows me to go through the world without needing to ascribe reasons for everything.

Or needing to find more examples to ascribe to a given reason, in order to justify my belief in that reason. One of the things about American Christianity, at least as I encounter it in my day-to-day life, is the idea that God has to maintain a certain amount of activity in the otherwise mundane world. In other words, miracles are something of a necessary component of many Christians' faith, so it's not surprising that people chalk up otherwise strange experiences to them. Gregg Phillips snaps out of a medication-induced haze in the parking lot of a Waffle House, and given a choice between deciding that maybe he shouldn't be behind the wheel and an act of divine intervention, he opts for the latter because living in a disenchanted world is at odds with his belief system.

The fact that the debate over what may have happened with Mr. Phillips has become partisan touches on this; while most Democrats are still believers, their faith doesn't require, or expect, the same level of enchantment in their world. The more Conservative Republican view, on the other hand, demands a more interventionist spiritual realm.

Friday, April 3, 2026

Guess Which

Given that the presumed goal of generative automation is to render large swathes of the public unemployed, there have been a number of recent articles on whether this or that career path will be the thing that saves the economies of industrialized nations from the collapse of discretionary spending by the affluent, but not wealthy, segments of their populations.

Whether it's healthcare, services or blue-collar work like the skilled trades, news outlets are starting to run articles, centered around an individual and their story, designed to show people that there are well-paying occupations out there that people have been ignoring in their rush for soon-to-be-worthless college degrees designed to lead to knowledge work. And, of course, they're quick to note the low six-figure salaries that go along with them.

What's less apparent is what the demand for these roles looks like, especially if they're intended to be lifelines for millions of un- and/or underemployed people. Or, to be more precise, how elastic that demand is. To use a common example, take people who harvest food. That demand is relatively inelastic... food isn't thrown away or allowed to rot because there are literally no people available who could be employed to harvest it; it's that producers' margins don't make spending more on payroll worthwhile. The added costs needed to recover more of the produced food mean the math doesn't pencil out.
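To put toy numbers on that (all of them hypothetical; this is just the shape of the calculation, not actual farm economics):

```python
# A toy version of "the math doesn't pencil out." All numbers are hypothetical.
wage_per_extra_picker = 150.0  # daily cost of one additional harvester, in dollars
recoverable_value = 120.0      # wholesale value of the extra produce they could save

marginal_profit = recoverable_value - wage_per_extra_picker
if marginal_profit < 0:
    print(f"Hiring loses ${-marginal_profit:.2f} per day; the produce rots.")
else:
    print(f"Hiring nets ${marginal_profit:.2f} per day; the producer hires.")
```

As long as the last worker costs more than the produce they would save, the job simply doesn't exist, no matter how many willing workers show up.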

When the Wall Street Journal published an article headlined "Nursing Is the Surefire New Path to American Prosperity," it opened with a nurse practitioner who now makes $120,000 annually, and talked about how she and her husband are doing. But, being a WSJ piece, it's only available to subscribers, so I didn't read the bulk of it. Baked into the headline, though, is the idea that "plentiful" jobs means enough jobs for everyone who might decide to enter the occupation. But how many nurse practitioners does the nation really need? According to the Bureau of Labor Statistics' Employment Projections, the number of nurse practitioners is slated to rise by about 40% between 2024 and 2034. And I think that this is what's driving the enthusiasm. When one looks at the data, nurse practitioners are high on the table of Fastest Growing Occupations, and they're the first occupation on that list to crack six figures in salary. But it's worth noting that they're farther down the list when it comes to the Occupations With the Most Job Growth (the difference being percentages for Fastest Growing and raw numbers for Most Job Growth). The BLS estimates that there will be more Software Developers added than Nurse Practitioners.
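A quick back-of-the-envelope sketch makes the distinction concrete. The base employment counts here are hypothetical; only the growth rates come from the BLS figures cited in this entry and in the "Shifting" entry below:

```python
# Hypothetical 2024 base counts, paired with the cited BLS growth rates, to show
# how an occupation can top the percentage list while adding fewer jobs overall.
occupations = {
    "Nurse practitioners": (300_000, 0.40),    # ~40% growth, per this entry
    "Software developers": (1_600_000, 0.158), # ~15.8% growth, per "Shifting" below
}

for name, (base, rate) in occupations.items():
    print(f"{name}: {rate:.1%} growth, {int(base * rate):,} jobs added")

# Nurse practitioners: 40.0% growth, 120,000 jobs added
# Software developers: 15.8% growth, 252,800 jobs added
```

A smaller occupation growing fast can still absorb far fewer people than a huge occupation growing slowly.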

And if that sounds a little off, that's the problem with taking any (or even only some) of these projections as givens. If one presumes that the BLS has guessed the factors affecting occupational utilization for software developers incorrectly, where does the confidence that it has called them correctly for nurse practitioners come from?

The problem with casting any job as a "surefire" bet is that it presumes to know the choices that people will make concerning those jobs. Will it so happen that "nurse practitioners are increasingly employed in team-based models of care, taking on tasks previously performed by physicians," and that "Expanding practice authority [...] support[s] employment demand further"? The BLS expects the United States labor force to grow by 3.1% by 2034, when compared to 2024 numbers. Is that going to match increases in population growth? Will its general outlook on expanding and contracting occupations bear out?

But perhaps the bigger question is whether the expected transitions, assuming they happen in the way the BLS predicts, are efficient. An old contact of mine on LinkedIn asked whether nursing was "another option for would-be or laid off engineers." Maybe, but there isn't a lot of crossover there. How much of the time spent pursuing a Computer Science degree would really be useful if one made the switch to Nursing? And how many laid-off developers could really afford to return to college full-time to get the Master's of Nursing degree needed to be an NP? And if there's a rush to enter the nursing occupations, and they become oversubscribed, what happens then?

The problem that I've always had with career planning is an inability to see the future. And that's led me to commit to things that turned out to be less than expected. If we're really going to see a seismic shakeup of the employment market in the United States, expecting everyone to figure that out for themselves, based on whichever news articles they happen to come across, is a bad idea. I would expect that there needs to be a plan that helps match people with jobs when they're selecting their educational paths. This, of course, is going to be freighted... there simply isn't enough trust that the United States will actually look out for the thriving of the citizenry at large, as opposed to the people who write the biggest checks to Congressional and Presidential campaigns. Which means that it's unlikely to happen. Hopefully what comes out of it won't be wasteful enough that it becomes clear that something better was needed.

Thursday, April 2, 2026

Determinative

Security is never free, but policy determines who pays for it.
Bruce Schneier, "US Bans All Foreign-Made Consumer Routers," Schneier on Security. Thursday, 2 April, 2026
This is one of those statements that takes what would otherwise be a lot of verbiage, and boils it down into something both succinct and informative. The bigger picture, of course, is that Mr. Schneier's statement is true of everything. Safety, health, education, sidewalks, love... all of them can be slotted into that sentence, and it would still be true. One might even update the old saw of "Freedom is never free" with those last seven words to get something more worth talking about.

And "policy" covers a lot of ground. Sure law and regulation, but social norms and unspoken mores also count as policy, even if they are less stable; enforcement can be even more sure.

American society implements policy that does a lot of shifting of who pays for things. Sometimes this is out of an apparent concern for the general welfare, but other times it's out of an apparent desire to hide the ball, keeping the true costs of things from those who eventually foot the bill. In the end, it's the lack of transparency of the system that causes the problems. Even without an intent to obscure things, the general opacity of the system means that the general public winds up supporting policies for which it will directly shoulder the costs, even when the intent is to have those costs borne elsewhere. And when anger boils over, and there is a hunt for the sources of people's misery, the search tends to focus in the wrong places.

It would be nice to be able to say that keeping Mr. Schneier's words in mind would help with understanding where the buck ultimately stops (or whose pockets it comes from), but the world is never that simple. Still, I'm pleased to have come across so articulate a distillation of the concept; I think that keeping it in my back pocket will help.

Monday, March 30, 2026

Promptly

With the understanding that I can't validate that this is even legitimate, this is another of those things that popped up on LinkedIn for people to have a good laugh at. It strikes me, however, mainly as weird. Sure, on the surface it's yet another "someone meant to have generative automation write something, and wound up sending the prompt, instead," but the prompt itself seems off to me.

"A warm but generic rejection email that sounds polite yet firm."

Don't companies have those? Who's actually expecting something other than a form letter? Why craft a new "generic" message for each rejected candidate? Isn't reusability the point of "generic?" This gives the vibe of using generative automation for its own sake: "We need to burn compute on a triviality to show that we're 'AI-forward'."

"Do not mention specific reasons for rejection."

I understand the rationale for this part of the prompt, but it still strikes me as risky. After all, generative automation could come up with non-specific reasons for rejecting a candidate that would still be a problem, if they aren't related to the job at hand. That's something one would want laid out beforehand, for just that reason.

"Make the candidate feel like they were strongly considered even if they weren't."

Considering that the automation likely wouldn't know one way or the other to what degree a candidate was considered, I can understand having it default to implying that everyone was strongly considered. But I'm not sure that it's a good idea to have LLMs tell people something that may not be true... Once it's considered legitimate to have generative automation mislead candidates, even to spare their feelings, I'm not sure how one keeps people from asking the LLMs to deceive other stakeholders. And I'm not sure it takes much imagination to see how that starts ending badly, especially if the automation starts telling outright untruths.

"Remember to use the candidate name and company name variables."

Why is the company name a variable? Does it change somewhere along the way? This gives me the impression that this is coming from a third-party recruiter, who works with a number of different clients. I suppose that a holding company could have a lot of smaller companies under its umbrella, and centralized HR for all of them, but given that the company name shows up in other parts of the e-mail, it doesn't seem necessary to call it out again. And again, why not use a form letter? There's nothing in the prompt that calls for any candidate-facing personalization from their résumé or cover letter. I'm not sure what just using their name is supposed to do.
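For what it's worth, the form letter the prompt above seems to be reinventing is about three lines of code. This is a minimal sketch; the template text and field names are my own inventions, not anything from the prompt itself:

```python
from string import Template

# A reusable "generic" rejection letter; no generative automation required.
# Template wording and field names are hypothetical.
REJECTION = Template(
    "Dear $candidate_name,\n\n"
    "Thank you for your interest in the role at $company_name. After careful "
    "consideration, we have decided to move forward with other candidates.\n\n"
    "We appreciate the time you invested and wish you the best in your search.\n\n"
    "Sincerely,\nThe $company_name Recruiting Team"
)

print(REJECTION.substitute(candidate_name="Jordan Doe", company_name="Acme Corp"))
```

Unlike a freshly generated message, a static template can be vetted once by a human and never again produce something unexpected.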

Of course, the fact that a prompt was sent to a candidate who was supposed to receive a rejection message means that messages aren't being vetted prior to being sent. Which makes some sense... after all, generative automation is supposed to be able to handle all of this. But even setting the prompt screw-up aside, if the idea is to generate responses to candidates on the fly, it seems that it would be wise to have something that checks messages before they go out, if only to make sure that something entirely random didn't find its way in.

The final thing that stood out to me was the redaction. I understand why the candidate wouldn't want their name out there, but blanking out the company speaks to a fear of retaliation that I'm not sure is healthy. It's not like there's something in this message that points to anything criminal, or even unethical... a prompt was screwed up along the way. If pointing that out publicly is the sort of thing that would lead an HR department to blacklist someone, maybe we as the public (and yes, I include myself in that) need to start having higher expectations of the businesses we give our money to.

Sunday, March 29, 2026

Motion

It's one thing to say: "The one constant in all of my dysfunctional relationships is me," but yet another to understand what that actually means for one's life.

Especially when one has, like I do, an internalized locus of control, because that means that looking back on those relationships, and why they're dysfunctional, leads to the self. And one of the other traits that tends to go along with an internal locus of control is a certain lack of self-forgiveness.

Being the agent of the dysfunctions of one's life means not being the person one wanted to be, or, perhaps more acutely, feels one should have been. And this is where I think that the internal locus of control can be a difficult thing to manage: it lends itself to judging the self by the immediate snapshot of one's life, and to comparing that to a counterfactual, either created by others' lives or an idealized version of one's own. Neither of which is a useful guide.

For me, personally (which is weird, given my general dislike of writing about myself), I've developed a tendency to accuse my past self of errors in judgment, even as I work to really internalize the idea that the choices I made, even when they didn't work out as I intended, were the best ones I could have made with the information that I had at the time. And maybe that's the stumbling block. I'm starting to think that it smuggles in an implicit criticism, even when my explicit goal is to avoid being self-critical.

And maybe that's because self-criticism is easy. It can be painful at times, but it doesn't really ask much of a person other than to take a look at some version of themselves and find them wanting. And it feels like a step on a path to change, even though there's no reason why the two are related. But self-acceptance doesn't mean accepting stasis, even if such a thing were possible. I'm starting to find that this is a more difficult lesson than it's given credit for.

Friday, March 27, 2026

Small Time

Iran-linked hackers breach FBI director's personal email, publish photos and documents

Is that all?

Okay, who cares? "We in ur e-mail, posting ur pics," doesn't really seem to move the needle in a shooting war. I would have thought that Iranian cyber-warfare would be more... warlike. If getting into Kash Patel's Gmail account is the best they can do, why are they bothering?

While President Trump's random boasting about Iran suing for peace comes across as complete fantasy, it's still been fairly clear that this is a one-sided war to this point, as Iran has no real way of defending its territory from U.S. air power. Accordingly, the United States can strike pretty much when and where it wishes. And, the legitimacy and necessity (and maybe the actual drivers) of this particular conflict aside, the Iranian military was unable to protect its Head of State, and has been shown to be unable to protect its own high-ranking members in the past. A simple hack of someone's e-mail account doesn't do anything to make the country seem more able to make a real fight of it.

Now, that could change if the United States puts soldiers on the ground in Iran. Taking and holding territory is always more difficult than launching munitions from a distance. But it's not like this exfiltration of data from Director Patel's personal e-mail account shows that Iran is more capable in that regard than one may have first thought, either.

In the end, this sounds like empty boasting. I guess we'll see if it turns out to be more than that.

Wednesday, March 25, 2026

Altared State

But to me, the thing that I take out of that is that there are gamblers who, for whom sports betting is their religion, right. They equate their sports betting communities and behaviors to kind of religious, a religious experience. Like, it is part of; it is their community, their identity, it's who they are. And I think that's a social catastrophe in the making, right. Like, sports betting, whatever you think of it: maybe it's a vice that needs to be much more heavily regulated, maybe if you have a more Libertarian approach, it's a fun hobby that a few people will, you know, turn into a bad thing in their lives, but for most of them it's, you know, a source of enjoyment. Um, it should not be central to who you are. It should not be a religious experience. And if it is, I think that it's that much more dangerous as a phenomenon.

McKay Coppins. Plain English With Derek Thompson; "The Casino-ification of America"
As someone who isn't religious, and has little use for concepts of meaning, the immediate question that this raises for me is why one source of community and identity is necessarily better or worse than any other. After all, one could make the point that religion can be either a vice or something enjoyable that a few people will turn into a bad thing in their lives. What is it about sports gambling, in and of itself, that means that when people make it central to who they are, it's more dangerous than religion is when people make that central to who they are? I've seen people neglect things they claim are important to them, like family, friends or career, in the service of becoming closer to their idea of the Divine. I've seen people give away their money until they were impoverished, tolerate remarkable levels of what would otherwise be considered abuse and even kill in the name of their faith. Why is that not dangerous?

It strikes me that anything can become important enough to a person that it becomes dangerous; that it becomes something that they, and some number of the people around them, would be much better off had it never entered that person's life. And it's the effects that it has on the person's life, not the thing in itself, that are the dangerous phenomenon. The person who is willing to trade their material well-being for community and identity has a problem, regardless of the specific thing that they've latched onto while seeking community and identity. Whether that's a connection to the Divine or an expensive hobby is beside the point.

Derek Thompson, the host of Plain English, is fond of saying that dystopias don't come from bad ideas, they come from good ideas taken too far. I believe he makes the point twice in just this one episode. Giving the things that are important to one a pass may be a good idea, but it's one that's easily taken too far. Because it prompts one to stop looking at the actual things that are being done, and the effects that they have, and instead to focus on what's doing it. It's prejudicial in the same way that judging a person guilty or innocent based on who they are, rather than what acts they have committed, is. And it doesn't take much for it to be just as corrosive.

So I don't see the rationale for why some things "should" be religious experiences and other things "should not." If a career can be central to who a person is, why can't a hobby be, as well? Now, to be sure, gambling on sporting events strikes me as much more likely to lead a person to places that they will find both highly unpleasant and extremely difficult to extricate themselves from, than something like say, being a Certified Public Accountant. But that has little to do with one's ability to build one's community and identity around them.

But it's easier to decide that the downsides of an activity aren't worth its benefits than it is to sort out who will, or will not, take something and go off the rails with it. And it's easier to see the downsides, and to decide that they outweigh the benefits, for things that the person doing the judging does not find to be important. For my part, I don't really care which altar someone worships at, if it brings them what they're seeking from it. And when it doesn't, when it demands more than it can give, all altars are equally dysfunctional.

Monday, March 23, 2026

Literacy

There was a post on LinkedIn about the cancellation of the U.S. launch of the revenge horror novel "Shy Girl," and its withdrawal in markets where it was already available. In the LinkedIn post, the author made the following observation:

Use no AI and you're mocked for not being innovative. Use too much and you get cancelled.
Which may be true, but I would note that it wouldn't be by the same people. And that makes the answer relatively simple: know your market and your target audience.

The problem with using generative automation on a revenge horror novel, it seems to me, is that it's the sort of thing that relatively affluent young people read, and, as I understand it, middle-class young people are very opposed to generative automation, especially in the arts. Not that this will stop anyone. Because generative automation will make the process of producing a novel shorter and easier, people are going to keep searching for ways to get around any public distaste. And eventually, someone will succeed, and it won't be until after the book becomes a best seller that word gets out, at which point the damage will have been done. A publisher may be able to claw back the author's part of the proceeds, but the understanding that there's money to be made will push more efforts to repeat the trick.

And eventually, people will be faced with a choice. And if history is any indication, they're going to make the one that makes life less expensive for them in the short term. But for the time being, the best thing that authors and publishers can do is read the room.

Saturday, March 21, 2026

Uninstall

 

While the fact that Facebook is a privacy disaster has been understood for some time, I don't think that this sticker, which I found on the back of a parking sign, will influence all that many people to leave the platform. Facebook's network effects have led to a high level of lock-in for their users, many of whom have made the site the primary, if not the only, way to find them online. The fact that constant digital surveillance is the price of that has become understood at this point.

Friday, March 20, 2026

Differentiated

It's been some time since I've used a graphic for The Short Form, mainly because, as I've mentioned before, it's hard for machines to read the text in a picture. But I was talking with an acquaintance earlier this week, and this part of our conversation stuck with me.

There is a difference between preventing bad outcomes, and preventing them from happening to oneself.

I suppose that it's an obvious sentiment, but I don't know that it's thought about all that much. In a lot of ways, it's like the difference between using The Club, and installing a LoJack, or other locator system, in a car. The Club is an obvious theft deterrent; its goal is not only to make it more difficult to take the car, but to be obvious about that fact, so that the would-be thief moves on. But it doesn't really change their incentives; they simply look for a car that doesn't have such a device, and attempt to steal that one.

LoJack, and other locator systems, on the other hand, while being inobvious, carry a much greater risk to the thief if they do, in fact, steal the car... after all, it can be tracked by law enforcement, and that leads to a higher chance of being caught in a stolen car. But the fact that one cannot tell by looking whether a car is equipped with a locator means that taking any car in a neighborhood where they're known to be in use carries higher risk. And this is why these systems often carry discounts on insurance premiums: they lower costs for insurers more broadly, and it's worth passing some of that savings on to those who have the systems installed.
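As a rough sketch of the insurance math (every number here is hypothetical, and it ignores the broader neighborhood deterrence effect described above):

```python
# Why locator systems can earn premium discounts. All numbers are hypothetical.
theft_probability = 0.02   # annual chance a given car is stolen
car_value = 30_000.0       # insured value, in dollars
recovery_rate = 0.90       # fraction of equipped stolen cars recovered intact

loss_unequipped = theft_probability * car_value
loss_equipped = theft_probability * car_value * (1 - recovery_rate)

print(f"Expected annual theft loss, unequipped: ${loss_unequipped:,.2f}")  # $600.00
print(f"Expected annual theft loss, equipped:   ${loss_equipped:,.2f}")    # $60.00
# Part of the difference can be rebated to the owner as a premium discount.
```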

This all came up in the context of the supposed generative automation apocalypse that's coming for certain sectors of the knowledge workforce. While a lot of people are offering various advice, from learning how to supervise automated systems to dumping the industry entirely and shifting to skilled trades, the general viewpoint is the same: This is going to happen, here's how you take care of you. It's modeled on The Club... a car is going to be stolen; this is how you ensure that it isn't yours. But maybe a LoJack model, trying to head off the worst of the transition in general, for everyone, would be better for all involved.

Wednesday, March 18, 2026

This Side, That Side

"Long term, you tend to remember that kind of negative branding," [University of Alabama Marketing Professor Karen Anne] Wallach said. "And negative language then becomes part of what you associate with the brand."

The tech startups NPR spoke with for this story said they understand the risks of alienating large numbers of people with their cryptic ads. But the upside is too great.
Do you understand this billboard? If not, that's the whole point
While this might seem to be just another story about tech, and how it divides people into groups, the above points to something important about in-group and out-group signalling. Sometimes, alienating the out-group is what the in-group demands. Groups, in general, are defined both by who is a member of the group, and who is not. And for groups that want to maintain some sort of claim to exclusivity, who is kept out can be much more important than who is let in. And hurt feelings on the part of those kept out be damned.

For technology startups who are not attempting to sell themselves to the general public, the idea that the general public is unwelcome can be just the sort of thing that their intended customers want; because it not only sorts, but stratifies. And sometimes, nothing sells a product or service like the idea that being a member of the target audience is proof of one's own superiority.

If an advertiser is willing to accede to an expectation of flattery, even at the expense of others, on the part of the in-the-know, clearly neither the advertiser, nor their audience, expects that any hard feelings on the part of the out-group will be a problem for them. And this is nothing new. I would submit that it's been a facet of human history for as long as there has been history. That said, it doesn't make the practice any less toxic, especially in its more strident forms. But perhaps that's the problem; toxicity has become such a common part of people's everyday lives that it goes unnoticed.

Monday, March 16, 2026

To Be Divine

Superhuman Platform, Incorporated, the company formerly known as Grammarly, is facing a class action lawsuit over a feature it rolled out at the end of the Summer called Expert Review. Expert Review, which was recently removed, was effectively a "this person would make these suggestions about what you're writing," sort of feature, and claimed to offer advice from virtual versions of people like Stephen King, David Abulafia and Julia Angwin (who filed the lawsuit).

When Superhuman Platform CEO Shishir Mehrotra posted an apology for the agentic feature on LinkedIn, he noted "valid critical feedback from experts who are concerned that the agent misrepresented their voices." When Ann Handley, who identified herself as one of those experts, weighed in (before commenting on the post was closed), her primary complaint was "building a commercial feature around experts' names and reputations without asking permission, without notification, and without compensation." While Mr. Mehrotra claimed that "the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans," given that it was a subscription feature, and Superhuman Platform wasn't sharing any of the money, it seemed more like they'd simply found another way to have people work "for exposure." And there's a reason why an increasingly common response to that sort of offer is "Fuck you; pay me."

As a random layperson, I find the whole thing openly unethical, but entirely sensible. If generative automation is a race, and losing carries serious, or even existential, consequences, the time to be ethical is later. Ms. Handley calls Mr. Mehrotra out for an ethos of "take first, apologize later." And while I suspect she's correct in that, it's just like any other instance of "ask forgiveness, not permission": permission wouldn't have been forthcoming, but forgiveness will be. And this is a rational presumption to make; Uber's known flouting of laws hasn't resulted in the general public deciding that the company is too untrustworthy to do business with. And it's unlikely that the Court of Public Opinion will render a different verdict for Superhuman Platform. Investors, on the other hand, are quick to flee a company that's unwilling to do what it takes to make itself more profitable, and they bear none of the risk for the actions the company takes in pursuit of those profits. It's not like anyone is going to spend time in prison over this, and even if someone were, it wouldn't be the investors; so why wouldn't they push for companies to place profitability over ethical considerations, given that it's unlikely that people and businesses with Grammarly subscriptions are going to go elsewhere?

The only way to stop companies (and people, for that matter) from preferring to ask for forgiveness rather than permission is to be consistently unforgiving, regardless of outcomes. And that's a hard sell in a culture where many people's primary focus is their own sense of (or concern for) poverty. People may be angry when someone cheats them to pass the savings along to someone else, but they're often ready to look the other way when the savings are being passed along to them. And businesses know this; their executives are members of the public, just like everyone else. They may often speak in the stilted language of finance and investment, but they're not aliens.

Some heads may roll over this; if he's unlucky, Shishir Mehrotra's will be one of them. But Superhuman Platform, Incorporated will survive. People and businesses will still pay to use Grammarly, and investors will still see returns. And that all but guarantees that "take first, apologize later" will remain the standard order of operations.

Sunday, March 15, 2026

One of Three

I started listening to the most recent episode of EconTalk, in which Professor Roberts interviews one Hanno Sauer about the latter's new book: The Invention of Good and Evil. I have to admit that I gave up not too long into it, in part because of this statement from Mr. Sauer:

 And, now you get the opposite problem when you move to a naturalistic Darwinian framework. All of a sudden, the default assumption seems to be that it's 'nature, red in tooth and claw.' It's dog-eat-dog, it's elbows out. Everyone is selfish. Everyone is essentially sociopathic. Right?

And, now you get the problem: Okay, evidently there is friendship and heroism and love and altruism and sacrifice. But, where do those come from? It seems to not make any sense.

It irked me, because the basic idea that, under "a naturalistic Darwinian framework," "everyone is essentially sociopathic" doesn't actually come out of any of Mr. Darwin's work. As I noted in my (unfinished) blogging of my way through On the Origin of Species:

There are three distinct facets to the Struggle for Existence, as Darwin explains it - competition within a species, competition between species, and mitigating the hostile effects of one's environment.

Mr. Sauer's book, rather than seeking to correct the misconception that the "default assumption" should be that competition within species is the norm, leans into it. And I found myself asking why. Or, on the larger scale, why does the misconception persist so? I can't possibly be the only person who has read Charles Darwin, or who recalls that person-to-person competition is only part of one of the three primary conflicts that Mr. Darwin identifies. So why don't more people push back against it? Why accept the hostile framing that "the Darwinian view of Evolution requires one to be murderously pseudo-Machiavellian," and then try to argue that unselfishness can grow within it, when it strikes me as much easier to point out that "friendship and heroism and love and altruism and sacrifice" make the other two conflicts much easier, and start from there?

Speculation on other people's motives is often a one-way ticket to creating a strawman argument, so I won't indulge in it, other than to say that there must be incentives at play that I am either unaware of, or not fully crediting. Because while it may seem unreasonable to me, there are assuredly reasons for it that people feel are worthwhile.

Of course, it may simply be that the misconception is widely held enough that people don't always realize that it is, in fact, a misconception. It's like Fyodor Dostoevsky's bit of dialog in The Brothers Karamazov, where Ivan notes: "If God does not exist, anything is permissible." This is commonly taken to be absolutely true in much of the Western world, especially by Christians, despite the fact that there is nothing in the viewpoint of Moral/Ethical Realism that requires some sort of divinity to create the rules, just as there is nothing in Mathematics that demands some sort of divine order for 2 + 2 to equal 4. Perhaps it's just easier to set out to prove the argument incorrect than to point out that it doesn't actually seem to make any sense, given the world as we understand it.

Saturday, March 14, 2026

Discollected

While I was thinking about the idea of collective action to change the fate of the job market, I noted that the United States is a very individualistic culture. And considering that a bit more deeply, it occurred to me that this may have been what was behind George Will's observation that here in the United States, we don't prevent catastrophes, we clean up after them. And maybe that's because prevention requires genuine collective, cooperative action, while clean-up can be countless individual and small-group efforts, localized to the specific places that people care about.

I think I need to buy some subscriptions to quality news sources. I'm starting to realize how impoverished my thinking can become when I don't have access to good thinkers, even if I may otherwise disagree with them.

Friday, March 13, 2026

Shifting

I was looking at the Bureau of Labor Statistics' Employment Projections, and the World Economic Forum's Future of Jobs Report, both of which were updated/released last year. Both of them had software developers on their lists of the fastest-growing jobs. The WEF predicted that Software and Applications Developers would see Global Net Growth of 57% between 2025 and 2030, while the BLS predicted that Software Developers would grow by 15.8% between 2024 and 2034.
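Since the two projections cover different regions and different windows, the honest way to compare them is to annualize both; even then it's a rough comparison rather than an apples-to-apples one:

```python
# Annualizing both projections for a rough comparison (different regions and
# different time windows, so this is not apples-to-apples).
wef_total, wef_years = 0.57, 5    # WEF: +57% globally, 2025-2030
bls_total, bls_years = 0.158, 10  # BLS: +15.8% in the U.S., 2024-2034

wef_annual = (1 + wef_total) ** (1 / wef_years) - 1   # ~9.4% per year
bls_annual = (1 + bls_total) ** (1 / bls_years) - 1   # ~1.5% per year

print(f"WEF: ~{wef_annual:.1%}/year vs. BLS: ~{bls_annual:.1%}/year")
```

Even granting the different scopes, the two bodies are implying very different rates of change for what is nominally the same occupation.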

It's easy to look at the numbers of layoff notices that have rocked the technology industry in the United States and decide, on that basis, that bureaucrats don't know anything, but of course they couldn't have known what choices people were actually going to make. One can fill out a survey or answer a questionnaire, and then have other factors come into play that result in different decisions being made. And, whether we like those decisions (or their impacts on our lives) or not, people are remunerated quite handsomely to make them.

And that's what came to mind when I saw this chart, in the Future of Jobs Report. It predicts that the share of work done by people, without recourse to automation or some sort of automated enhancement, will drop from 47% in 2025 to 33% in 2030, while the share of work done solely by automation grows from 22% to 34%.
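If one assumes the three categories (human-only, automation-only, and the implied human-machine mix) sum to 100%, a small arithmetic check shows the mixed share barely moves:

```python
# Deriving the implied human-machine collaboration share from the two
# cited figures, assuming the three categories sum to 100%.
for year, human_only, machine_only in [(2025, 47, 22), (2030, 33, 34)]:
    mixed = 100 - human_only - machine_only
    print(f"{year}: implied human-machine collaboration share = {mixed}%")
# 2025: 31%
# 2030: 33%
```

In other words, most of the predicted shift is from human-only work straight to automation-only work, not into collaboration.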

And it's with these numbers in mind, I suspect, that people proclaim dire warnings of what will happen to people who don't pivot into the jobs of the future (many of which pay less than the jobs of today). But this decline is no more a given than the increase in software development jobs was. This, too, is something that's going to be driven by the choices that people make. And maybe what's needed is for more people to be involved in those choices.

Now, Dario Amodei may be correct, and what he terms “powerful AI” may indeed create a “country of geniuses in a datacenter” that's just better at everything we do than we are. But until that comes about (and, given human history, likely even when it does) we have choices as to what we value. There's no reason to presume that it's impossible to direct where the future is going to go by adding some intentional design to the mix. I've said before that a question that bears answering is what new demand for human labor generative automation is going to create. But that buys into the hostile framing that posits that valuable work for humans will be relegated to the leftovers that automation, even if otherwise ubiquitous, can't do. Maybe, as people, we'd all be better off if there was an active effort to find/create and then nurture roles that lie outside of the capacity of machines to do, and to start moving towards them now. (Normally, I go out of my way to avoid using the word "we," since it tends to be something of a weasel word, but here, maybe, enough of humanity is in the same boat that "we" makes sense.)

Because if it's undesirable that the World Economic Forum's prediction, that out of every 100 workers, some "11 would be unlikely to receive the reskilling or upskilling needed, leaving their employment prospects increasingly at risk," turns out to be true, perhaps the onus is on finding something that those 11 can do that makes good use of the skills they already have.

Passively accepting the idea that automation is a bear coming for the job market, and so people's primary goal should be running faster than enough other people that the beast is satiated before it gets to them, is a recipe for disaster. The people the bear seeks to eat are unlikely to go down without a fight, and the conflict could wind up doing much more injury to the collective than the bear ever could. Here in the highly individualistic United States, this may be something of a heresy, but perhaps it's time that people decide to hang together before technology, and the incentive structures behind it, hang everyone separately.
 

Wednesday, March 11, 2026

Scoreboard

Muslims don't belong in American society.

Pluralism is a lie.
Representative Andy Ogles (R-Tennessee)
Cue Democratic "outrage" and Republican silence.

Representative Ogles isn't the first House Republican to make such statements on social media.

Few, if any, Congressional Republicans reacted publicly to any of the posts.

But Congressional Democrats were quick to denounce it.
Tennessee GOP Rep says Muslims 'don't belong in American society'
Okay... and?

This sort of thing strikes me as pandering from both sides of the aisle. It may as well be a script. Republican lawmaker from some overwhelmingly White, Christian part of the country makes a disparaging statement about Moslems. Democrats denounce the statement and call for resignations or some punishment. Republicans, who have no Moslem members in Congress, simply say nothing. The people who care among the voters for the two groups are happy with how their side responded. Nothing changes.

What I don't know is how many people care. There was an attempt by American Moslems to lean on the Democrats by staying home back in 2024, mainly over dissatisfaction with how the party was dealing with the fighting between Israel and Hamas. I'm not sure that it worked as well as they would have hoped, mainly because they had nothing to offer Republicans other than not voting for Democrats, and it's pretty clear that the GOP had no real need for Moslem support. So they've become convenient targets for members of the Freedom Caucus who feel a need to show their constituents that Congress shares their prejudices.

Meanwhile, Democrats get to show themselves as making a lot of noise about it, but they never accomplish anything. They simply don't have the votes, and the districts held by members of the Freedom Caucus are Red enough that they wouldn't vote for Democrats to save their lives, let alone in support of a more pluralistic society. So Democratic denunciations come across mainly as virtue signalling.

Honestly, it's all an exercise in virtue signalling... only the standards of "virtue" are different.

The media helps, by portraying all of this as newsworthy on the national stage. It allows everyone to be performative in front of larger audiences, but it enlightens no-one. It's hard to imagine anyone who wasn't aware of how all of this works at this point. Still, people have to be allowed to put points on the board, even if no-one's actually watching the game.

Monday, March 9, 2026

Misfired

At the risk of coming across as flippant, I'm going to quote Superman, from the DC Comics series Kingdom Come. "You can't have a war," the Man of Steel said to Wonder Woman, "without people dying." To which most people, I expect, would respond with something along the lines of: "That, we knew already." People generally understand the nature of war. While it might not be true that "War never changes," there are certain things that tend to be constants; like casualties.

After the first three deaths were reported, Trump told NBC News on Sunday: “We have three, but we expect casualties, but in the end it’s going to be a great deal for the world.”
[...]
Then in a video posted to social media the same day, he again seemed to ask for people’s understanding about the subject.

“And sadly, there will likely be more [deaths] before it ends,” Trump said, before adding: “That’s the way it is. Likely be more.”

He then added: “But we’ll do everything possible where that won’t be the case.”
Trump’s and Hegseth’s awkward comments about US troop deaths in Iran war
But another constant is the deaths of non-combatant civilians.
Speaking aboard Air Force One on Saturday, President Trump accused Iran of being responsible for the school bombing.

"Based on what I've seen, I think it was done by Iran," Trump said. "Because they're very, inaccurate as you know, with their munitions. They have no accuracy whatsoever. It was done by Iran."
Video appears to show U.S. cruise missile striking Iranian school compound
On the one hand, I understand the President's looking to shift the blame. After all, he's been pushing a narrative of the United States being the unambiguous Good Guys in this conflict, even if it looks like, once again, President Trump using the military to go after a nation that no-one else is close enough to that they'd be willing to stand up for it, and that doesn't have the wherewithal to fight back in kind.

But on the other hand, there's nothing new or unusual about inaccurate or outdated intelligence, or weapons not being quite as "precision guided" as they're advertised to be. People die in wars. And sometimes, they're people that everyone would rather had not been killed. The history of war is littered with people who had the misfortune of happening to be somewhere that a weapon also happened to be, but who weren't the intended, or presumed, targets of that weapon. Why would anyone expect this particular war to be any different?

It's reasonable for people in the United States to want their nation to have clean hands. It's less reasonable to expect that a war being fought mainly with long-distance weapons is going to result in clean hands. And if the President wants to keep American casualties to a bare minimum, then the United States is going to have to do much of its fighting from a distance. And the more the war relies on hitting targets from a long way away, and the more it relies on reports of what's where and who's who, the more there are going to be times when a bomb, or a missile, or whatever hits someone it wouldn't have, had someone realized precisely who was in the line of fire. The Commander-In-Chief, of all people, should be prepared to own up to that.

Sunday, March 8, 2026

Talkative

I had just gotten out of the car when I heard it: "Hello. Hello." The voice sounded strange, like that of an elderly person, but more high pitched than one would expect.

I looked around for the source, and then heard it again; "Hello. Hello." Now I realized that it was coming from above me. I looked up, and, there in a tree overlooking the walkway was a crow. "Hello. Hello."

"Hello, hello, little crow," I said back to it, cheerfully. It really didn't seem to take notice of me. It simply repeated "Hello. Hello." every ten seconds or so.

I had shopping to do, and a time limit on top of that, so I left the talking bird to converse with my car and went into the store. While I was wandering the aisles, it occurred to me that I'd heard that crows could do this; they were one of any number of bird species that could mimic sounds from their environment. But this was the first time that I'd actually encountered a crow actually mimicking a sound, let alone a human voice.

So now I'm curious as to why it seems to be so rare an occurrence. After all, there's no shortage of the birds in this area; I see and/or hear them pretty much every day. And when it comes to grocery store parking lots, and other places where one might encounter dropped or discarded food, they're effectively a constant presence. And while Seattle and the Eastside are much quieter (at least as it seems to me) than my native Chicagoland, there are still plenty of sounds to repeat.

It's possible that I simply haven't been paying close enough attention, so I'll have to be more alert in the future, to determine if there are more talking birds in the area. 

Saturday, March 7, 2026

Group Think

I was reading "Reclaiming Democracy From the Market," with MIT economist Daron Acemoglu sitting down to interview Harvard political philosopher Michael J. Sandel. Professor Acemoglu opens with:

From our conversations, and even more from your books, I have the sense that you see political philosophy as not just an inquiry into abstract concepts or a search for absolute truths, but as part of an ongoing dialogue with society about how we should organize our collective life, what we should value, and what we should resist.
This raised an immediate question for me: Who is the "we" Professor Acemoglu was referring to? Sure, one can make the case that it simply refers to "society," but even then, there is a question, because it's unlikely that a society is going to be unanimous about its values and the like. But just as importantly, how does "society" represent a "we" in a way that "the Market" does not, if they are the same people?

At one point Professor Sandel notes:
But even if the wealthy paid their taxes, they might still enjoy a kind of honor, prestige, and esteem that is out of proportion to the value of their contribution, especially when compared, say, to teachers or caregivers.
Okay... so? Honor, prestige, and esteem, unlike something like attention, are not rivalrous; I can give as much prestige to people I decide to, without reducing the amount I have "left over" to give to other people. So why does there need to be a society-wide dialog as to how much any given person is valued?

This is why I'm dubious of ideas, like those of Professor Sandel, that imply that certain choices should be collectively, rather than individually made, when all that really comes down to is some number of individuals deciding that their choices should trump everyone else's. Because, at least as I understand it, markets do represent a kind of social choice; it's simply emergent from a number of individual choices rather than a large group deliberation. So what really does deliberation create that can't otherwise be obtained? Certainly not unanimity. True, collective action gets a group around collective action problems, but even that's different than presuming that this creates some sort of unity.

And I think that this is what kept nagging at me as I read the interview... the idea that some sort of problem-solving solidarity would emerge without any mechanism being proposed for how that would happen. "Democratic deliberation" may be a great thing, but it's not magical. It can't bring together subsets of the populace who are actively at cross purposes with one another, or create enough of a scarce resource to share between people. Granted, markets don't necessarily do any of that, either, but I'm not sure they purport to.

This isn't to say that markets are necessarily better solutions to social problems than democratic deliberation (although they tend to be faster to operate), but in a way that's the point. There are problems, like what happens when one group considers the actions of another group to be an active threat to themselves and/or their interests, that neither institution is well-suited to solve. 

Friday, March 6, 2026

Chatty

I stopped by the bookstore this evening, and saw this rack of ChatGPT-related magazines. Having picked up an earlier one from one of the same publishers, I understand that they're about how to use generative automation more broadly, so it's interesting that they still treat the public as equating "ChatGPT" with "A.I." in general. Personally, I don't think that's true anymore, but maybe it's just the circles I run in.

Wednesday, March 4, 2026

Available to Everyone

The United States Supreme Court has declined to hear an appeal of a lower-court ruling upholding the U.S. Copyright Office's position that copyright applies only to works by human authors. The Court had also rejected another appeal, by the same plaintiff, of a ruling that affirmed a similar policy on the part of the U.S. Patent and Trademark Office.

I'm not an intellectual property lawyer, but it appears to me that between these rulings, items created by generative automation, and by genuine artificial intelligence if/when it comes along, are not eligible for intellectual property protection. In the case of most audio/visual media, I'm not sure that this will really move the needle all that much, at least at the start. But in the case of inventions, it could have repercussions. If part of the promise of automation is that it could create new medically-useful drugs, or other products, the inability to patent them may be a strike against broad adoption of the technology for such purposes. Given this, it seems unlikely that large companies will take this lying down. Still, I doubt that they'll attempt to directly re-litigate these sorts of cases; it's highly unlikely that this, or a future, Supreme Court would reverse itself simply because it was Pfizer Inc. bringing the appeal, unless things had gotten to a point where the Court simply stopped caring whether the public felt it was openly in the pocket of Big Business.

And so that leaves Congress. If corporations are going to want to outsource their research and development to some datacenter somewhere, and still be able to claim a government-enforced monopoly on whatever it is said datacenter comes up with, intellectual property law will have to change. And, regardless of what individual Representatives and Senators might say, Congress tends to be very willing to openly ally itself with business interests, and then make the case that they're doing it all in the name of helping the general public.

Of course, it's unlikely that the overall business community will be aligned on this; there are likely to be some sectors who feel that computer creations having to be either closely-guarded trade secrets or effectively in the public domain works in their favor, and so I can see lobbyists working both sides of the issue here.

But (as there always is) there's a simpler way, perhaps, to deal with such issues: lying. I wouldn't put it past anyone, especially not someone who feels that they've created an amazing new advance in some field or another, to simply claim that a person invented it. The same goes for artwork, for that matter; launder something through Photoshop enough times, and would anyone be able to determine that the original had been created by a machine? In this way, I can see detection of automation-generated outputs becoming a big business, if for no other reason than the amount of money that could be on the line.

There's also another angle: If the Copyright and Patent/Trademark Offices won't grant protection to the outputs of autonomous automation, that's another obstacle to the idea of a one-person company with a billion-dollar valuation. Because if such companies can't copyright or patent the products or services that their agents produce, they'd have to be in a business that's extremely difficult to copy.

Monday, March 2, 2026

Picking Sides

Over the weekend, there was an Ipsos/Reuters poll that covered the ongoing attacks on Iran and the Trump Administration's use of force in general. While the headline proclaimed "Just one in four Americans say they back US strikes on Iran, Reuters/Ipsos poll finds," for myself I wonder if that was what was actually being measured. Consider the results that drove the headline:

While the Democrats booed louder than the Republicans cheered, there's still a pretty clear partisan divide in the numbers, to the point where I wonder if this is really a poll about partisan identity. I'm pretty sure that Ipsos/Reuters weighted their results to better align with what they understand the current partisan percentages to be, so it's unlikely that the published percentages reflect the raw numbers. It is interesting, however, where the numbers for partisans do and do not align with the "Other" category at the bottom of the graph. It's also interesting that, for the "No" choice, the numbers for the "Other" category roughly align with those for all survey participants, given the broader variance in the other two options.
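For anyone unfamiliar with how that weighting works, here's a minimal sketch of post-stratification, with numbers I've made up for illustration; neither the raw tallies nor the pollster's actual partisan targets are public in what I saw:

```python
# A minimal sketch of post-stratification weighting; all numbers hypothetical.
raw_counts = {"Democrat": 450, "Republican": 300, "Other": 250}   # made-up sample
targets = {"Democrat": 0.33, "Republican": 0.31, "Other": 0.36}   # assumed population shares

n = sum(raw_counts.values())

# Each group's weight scales its raw share up or down to the target share,
# which is why published toplines can differ from the raw tallies.
for group, count in raw_counts.items():
    raw_share = count / n
    weight = targets[group] / raw_share
    print(f"{group}: raw share {raw_share:.1%}, weight {weight:.2f}")
```

The upshot is that an over-sampled group counts for less per respondent, and an under-sampled one counts for more, so the headline percentages reflect the pollster's model of the electorate as much as the people who actually picked up the phone.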

Overall, the Democratic-identified participants come across as the most reflexively partisan, in the sense that they are more likely to disapprove than Republicans are to approve, less likely to approve than Republicans are to disapprove, and less likely to be undecided about the matter. This could give Republican office-seekers heartburn come this year's election season, as the Democratic coalition tends to have more high-propensity voters, as I understand it. If that holds, and the lower-propensity voters who would otherwise lean Republican stay home, the Democrats may find that they have enough new seats in Congress to actually change things, at least on some level.

Sunday, March 1, 2026

Salesmanship

Part of me wants to ask: If generative automation is so great and wonderful, why are there so many messages that seem to attempt to threaten people into using it? Like the following example:

taste, domain experience and relationships are still incredibly valuable but refusing to use AI for the tactics and execution part of your job is a one way trip to being unemployed 

plan accordingly
But I suspect that I know the answer to that.

If I'm going to pitch generative automation to you as a positive thing in your life, something that can solve problems for you, I have to actually know you well enough to have an understanding of what your problems are. It doesn't do me any good to say that you'll be able to write 10x more code, if you don't write any code for a living.

But a claim that not using generative automation for tactics and execution will result in unemployment doesn't require me to really know much about the actual job someone is doing. There are a lot of jobs that have some tactical and execution-related functions attached to them. So the message of "use automation, or else!" can seem more broadly applicable.

This trade in anxiety doesn't serve anyone well, because its primary purpose comes across as setting people up to be blameworthy for any eventual misfortune: "Oh, you can't find another job that will support you and your family? Should have leaned into AI harder!" And what good does this do anyone?

In the end, it's an odd message: "This is critically important to you, but not so important that I feel any need to offer affirmative guidance on how to do it." And in this, it feels like American individualism talking, in that it doesn't care if anyone else succeeds. Which may be the point all along.

Saturday, February 28, 2026

Another Go-Round

There was a protest today; big surprise. I didn't see it take place... I only caught some of the preparation for it, a long line of cars, formed up on the shoulder of Interstate 405 North, bedecked with flags. There were a number of Iranian flags, and a fairly good representation of the Stars and Stripes, too. What was somewhat surprising was the number of Israeli flags that the protestors had brought along. Traffic was flowing too quickly (as in, it wasn't stop-and-go) for me to risk taking a picture. I'd hoped to get a snapshot when I came back the other way, but by then, the protest had moved from its staging area to wherever it was actually planned for.

I'm starting to have the same thought whenever I see a large protest against the Administration around here: This place is too Blue for anyone to care. Neither the President nor Republicans in Congress are going to be moved by a protest in the Seattle suburbs. It's much more likely that they'll regard it as convenient fundraising fodder, casting the protestors as anti-American supporters of the government of Iran.

I understand why dialog with Red America isn't happening, but I'm still of the opinion that it's the most fruitful path forward. Which, perhaps, isn't saying much. It's possible that the United States is too far gone for a coming together to even be possible, let alone change anything. There's too much invested in the fighting, and each side sees backing away from that investment as a crippling loss.

News reports claim that the Supreme Leader of Iran has been killed in the strikes, and we'll see how things materialize in the wake of that. It's unlikely that the United States will be able to find someone high up in government who will agree to work with Washington, as happened in Venezuela. But it's just as unlikely that Iran would fare much better than Iraq did, in the event of an invasion. So the best case scenario may be an internal uprising within Iran. We'll see if it comes to pass.

Friday, February 27, 2026

Remembrance

A pair of firefighters cleaning up the remnants of a van fire, down in Kent last Sunday. I would say that it's unusual events like this that prompt me to carry a camera with me pretty much whenever I leave the house, but in looking at this picture to evaluate whether I was going to post it, I noticed that the firefighters' names are on the bottoms of their coats, which had completely gotten by me when I was actually at the scene.

And that brings me to another of the reasons that I carry a camera; I'm not as observant as I would like to be. Perhaps, if I hadn't been viewing the world through the small screen on the back of the camera body, I would have noticed the names, but I wouldn't place any money on that. And I'd forgotten about the Starbucks across the street until I looked at the photographs again.

I wonder how much of the world around me has slipped through my fingers, due to inattention or a memory that, sometimes, seems barely worthy of the name. And in that sense, the camera is a net that backstops my fallible senses.

Ticced Off

The fallout from John Davidson shouting "nigger" at this year's British Academy of Film and Television Arts awards continues. I'd like to say that I understand, but I don't. Jamie Foxx can claim all he wants that Mr. Davidson meant what he said, but the random shouting of obscenities (otherwise known as "coprolalia") is what many people think Tourette's Syndrome is all about, despite its not being a consistent feature of the disorder. (Not that Mr. Davidson himself hasn't joined in the pile-on, questioning why the BBC would choose to seat him near a live microphone.)

The word doesn't have intent grafted onto it. Its history is not an integral part of it. Yes, it has a lot of baggage. But there's no need to be saddled with that baggage, regardless of the circumstances. The word is a word. Nothing more, nothing less. And in this circumstance, it wasn't an expression of bigotry or anger; it was simply a vocal tic, of a sort that's been known about for 200 years.

Beating up on the BBC is not going to make "nigger" go away. Just like accusing Mr. Davidson of bad faith can't suddenly rid him of his disease. And treating him as if he is just using it as cover for racial animus is to give in to the generalized distrust that the Black community (especially here in the United States) seems to have for the rest of the world.

I'm still of the opinion that treating this as anything more than an unfortunate side effect of mental disease or defect grants "nigger" the very power that people seem so afraid that it has. Treating it much like any other six-letter word would go much farther towards defanging it than outrage and recrimination every time it appears on-air. 

Wednesday, February 25, 2026

Talking the Talk

The State of the Union address was yesterday, and today there were multiple fundraising e-mails in my inbox, laying out the partisan talking points that people would only hear if they coughed up some money, apparently. It makes sense that I would see a spike in fundraising appeals, since the address captures so much media attention, but it all left me with a question.

Just who, exactly, is the State of the Union address (and the opposition response, for that matter) for?

I get that for the President, it's basically a chance for self-promotion that the media will carry and talk about, and for the opposition, it's the opportunity to get someone in front of the camera who might not otherwise have such a large stage, but who would miss the State of the Union were it to go on hiatus and simply never resume? For whom is the address actually important?

Sure, a lot of different actors have turned it to their own advantage. As I noted, the President was able to get up and tell the story of the nation that he wanted voters to share. The Democrats were able to show themselves protesting during the address, and to spotlight Abigail Spanberger as a spokesperson. The media were able to show loyalty to their audiences by highlighting either their uncritical acceptance of the President's speech or their often-ignored fact-checking of same, and fundraisers were able to cherry-pick the parts that seemed the most likely to prompt partisans to open their wallets.

But just about any speech by a sitting President can accomplish these goals. There's nothing genuinely informative about the State of the Union; it's generally a recitation of White House talking points that everyone already knows. So why bother with it?

It seems like a relic that exists now because it existed then, and no-one wants to be the person who asks what purpose it serves. 

Monday, February 23, 2026

Bugging

I set out, every year, to have my taxes completed well in advance of April 15th. And this year, I thought that I'd gotten quite the jump on it. It had been a week or so since I'd received the last of the forms I needed, and I sat down to get everything squared away. Only to run into an obstacle at the last moment.

Namely, that there was a bug in the H&R Block software that I was using for tax preparation, and it was convinced that I'd left a field blank, even while it showed me the value that it had calculated for the field. I went back to that section of the data entry process, and tried it all again, only to encounter the same error. And the error prevented me from e-filing the documents. Which wasn't, in and of itself, a huge problem. After all, I could just as easily have printed everything out, and dropped it into a mailbox.
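I have no visibility into H&R Block's code, but one plausible shape for this class of bug is a UI that displays a value derived at render time while the e-file validator checks the raw stored field, which was never actually populated. A toy sketch, with hypothetical field names:

```python
# Purely illustrative; field names and structure are invented for the example.
form = {"wages": 52000, "taxable_interest": None}  # hypothetical return data

def displayed_taxable_interest(fields: dict) -> int:
    # What the screen shows: a value computed on the fly at render time.
    return fields["taxable_interest"] or 0

def efile_validation_errors(fields: dict) -> list[str]:
    # What the e-file gate checks: the stored fields themselves.
    return [name for name, value in fields.items() if value is None]

print("Shown on screen:", displayed_taxable_interest(form))   # 0
print("Blocking errors:", efile_validation_errors(form))      # ['taxable_interest']
```

The user sees a perfectly sensible number on screen while the validator, looking somewhere else entirely, insists the field is blank; whatever the actual mechanism was, the symptoms matched that pattern.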

What bothered me about it was that this was a fairly serious problem, one that prevented the use of one of the primary features of the software, and it was present in the production release. And it pertained to a situation that was not new... people could just as easily have encountered this problem in previous years, so this was a failure in a system that had worked previously.

One of the problems that people have with modern capitalism, at least as they often encounter it, is that there always seems to be a drive to cut as many corners as possible, in the constant quest for marginally better shareholder value. Almost to the point where poor quality becomes an end in itself, something that investors affirmatively look for, as a guide to where they should place their assets.

I think the problem that institutions have in the United States, whether it's capitalism, or something like the press, is that the people who run them don't see their long-term health as enough of a benefit to themselves (or anyone else, for that matter) to look after it. It doesn't take a rocket scientist to understand that when people come to the conclusion that capitalism runs primarily on rent-seeking and exploitation, they're no longer going to support it. But if the time horizon is always the next quarter, and no farther, the idea that in ten years, or even five, people are going to turn on the system becomes a problem for later. So why not continue to squeeze the orange as hard as one can?

In the story of the goose that laid golden eggs, the moral is often taken to be that the greedy killers should have been happy with what they were getting, rather than hoping for a single massive payday. But as I understand the tale, their problem was that they fundamentally misunderstood the nature of the goose. And I think that this is what's happening now. Investors fundamentally misunderstand the nature of the society that they rely upon for their investments to be worth anything. And so they're going to be surprised when it can't, or won't, support them any longer.

Saturday, February 21, 2026

Countdown

It's taken as a given that economic trade produces "winners and losers." But in a nation like the United States, where social trust is low and individualism is high, this seems like a recipe for long-term instability, as populism rises on both the Left and the Right.

While the desire of economic winners to keep their gains is understandable, what's less clear is why they expect the losers to simply accept that they're going to be left behind. I suspect that part of it is that low social trust tends to manifest itself as a belief that others are incompetent. Why worry about the impacts on other people, when one is convinced that those other people are easily distracted away from problems or not brave enough to start a conflict?

But I'd be willing to bet that a commitment to the Just World Hypothesis is also at work. People tend to be unwilling to see their own benefits as having been gained through past injustices, and there is also a tendency to believe that other people understand the current situation as just, and to take accusations of prior bad acts as made in bad faith. And I think that this worldview, which supposes that people know they deserve to be in the place they are in, pushes back against ideas that a more equitable balancing of economic forces should be considered.

Given how much people view the current wave of automation as being disrespectful of them, it remains to be seen if it will create tangible benefits that mollify the public before a general anger boils over into a reaction that sets the technology back, at least here in the United States. These competing clocks are invisible, at least to me, and so I have no real sense of which one may be ticking faster than the other.

Friday, February 20, 2026

Billion-Dollar Baby

So, I've been hearing people talk about the idea of autonomous automation allowing for one-person, billion-dollar valuation companies. It's a topic that comes up on financial and technology podcasts from time to time.

And it's raised a question for me... What would these companies sell? Now, I get that it could be something new and wonderful that no-one has thought of yet, so I'm really asking what characteristics the goods and services they would offer would have.

Because if we're talking about a company that's 1 human being, and X number of automated agents, then anyone who has access to X number of automated agents could make the same thing. There could be other capital needs, but perhaps not, depending on what exactly it is that's being produced. So how does our one-person company protect its market(s) well enough to get to a billion-dollar valuation, rather than simply becoming a proof-of-concept for a number of other market actors? Would it need to be something where the primary market is people who don't have access to the same level of automation?

And, speaking of proof-of-concept, if our one-person company demonstrates that a whole class of goods/services could be produced entirely with automated agents, that could really do a number on the employment market. So does their product or service also need to be more-or-less downturn-proof? And how would that work in practice? Would it create demand for physical human labor in another area? Or would it be something that isn't aimed at the public at large? (Which goes back to the first question... because if other people could make their own version, anyone with the means to copy the product or service might not be a good long-term customer.)

In the end, I understand that talk of one-person, billion-dollar valuation companies is really about a level of techno-"optimism;" the idea that capital could create its own labor, and thus result in fairly big gains for the investor class... But I think that a lot of the speculation makes the implicit assumption that nothing else changes in the overall environment, and I suspect that wouldn't be the case. We'll see, I suppose, sooner or later.

Wednesday, February 18, 2026

Deduced

There are a couple of rather famous deductive arguments for the existence of God.

Anselm of Canterbury's ontological argument can be considered to be a direct argument... it explicitly references God.

  • It is a conceptual truth (or, so to speak, true by definition) that God is a being than which none greater can be imagined.
  • God exists as an idea in the mind.
  • A being that exists as an idea in the mind and in reality is, other things being equal, greater than a being that exists only as an idea in the mind.
  • Thus, if God exists only as an idea in the mind, then we can imagine something that is greater than God (that is, a being-than-which-none-greater-can-be-imagined that does exist).
  • But we cannot imagine something that is greater than God (for it is a contradiction to suppose that we can imagine a being greater than the being-than-which-none-greater-can-be-imagined.)
  • Therefore, God exists.

The Kalām cosmological argument, on the other hand, might be considered an indirect argument... it claims the Universe has a cause, but doesn't directly say anything about said cause. Other people, however, have added on to it.

  • Everything that begins to exist has a cause.
  • The universe began to exist.
  • Therefore, the universe has a cause.

In each case, the final statement, the one that begins with "therefore," is taken to be true if one accepts the preceding statements, the premises, as true. And this is part of what makes them popular. An apologist will walk someone through the premises, seeking agreement with each one, and then present the conclusion as granted. Which I get, because it works. The only way to avoid either agreeing with the conclusion or admitting to following faulty logic is to deny one or more of the premises, which are generally held out to be common-sense statements that no-one should have a problem with.
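To make the "grant the premises and the conclusion follows" point concrete, here is a minimal sketch of the Kalām argument's bare logical form in Lean; the names `Thing`, `BeginsToExist`, `HasCause`, and `u` are placeholders I've invented, and the sketch establishes validity only, saying nothing about whether the premises deserve assent:

```lean
-- The Kalām syllogism as a bare logical form: validity only, not soundness.
-- All names here are hypothetical placeholders for the argument's terms.
example
    (Thing : Type)
    (BeginsToExist HasCause : Thing → Prop)
    (u : Thing)                                -- "the universe"
    (p1 : ∀ t, BeginsToExist t → HasCause t)   -- premise 1
    (p2 : BeginsToExist u)                     -- premise 2
    : HasCause u :=                            -- conclusion
  p1 u p2                                      -- one application of modus ponens
```

The entire proof is a single application of premise 1 to premise 2, which is exactly why denying a premise is the only exit: all the philosophical weight sits in the hypotheses, not the inference.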

But I was reading about these, as part of my amateur interest in philosophy, and it occurred to me: What do these arguments actually mean, anyway? Sure, they have their "common-sense" meanings, but is that actually what they mean?

Take the Ontological Argument. What does "greater" mean in this instance? How should it be understood? The argument doesn't hold up as well if I substitute "taller" for "greater." Because if it's true that "a being that exists as an idea in the mind and in reality is, other things being equal, taller than a being that exists only as an idea in the mind," it does not follow that if I imagine a being a million feet tall, there must be some real being that's taller than that. It appears, at least to me, to indicate that imaginary height does not matter. Going back to "greatness," this would seem to indicate that I should find whomever I consider to be the greatest, and bestow the title of "God" upon them, but that's where it ends.

Likewise with the Cosmological Argument, what does it mean to "begin to exist?" I like to build plastic models as a pastime. And it's true that at some undefined point in the assembly process, a Mobile Suit or an aircraft "begins to exist." Now you don't see it, now you do. But it began to exist because it was assembled from parts that already existed. It's generally presumed in the Cosmological Argument that the universe began to exist ex nihilo, but there's nothing in the syllogism itself that requires that interpretation. And because the Big Bang is, effectively, an event horizon, there's no way of knowing whether the Universe simply sprang into existence, or whether our current spacetime is simply the current arrangement of matter and energy that already existed in some form or another. So then, even if it's understood that the Universe began to exist, I'm not sure that this tells us anything, especially if energy may be neither created nor destroyed.

Now, to be sure, I don't think that I've put these two long-standing arguments to rest. I'm not that smart. I'm fairly certain that other people have come up with similar objections, and that someone else has come up with counter-arguments. I'm just surprised that I haven't encountered them, and their counters, more often.

Monday, February 16, 2026

On the Rails

One of the interesting things about buzzwords is that they acquire widely-understood, yet completely informal, definitions. My favorite recent example is "guardrails," which has become a shorthand for, effectively, building robust harm-prevention measures into new technologies. Which is interesting, because in the everyday world, that's not what guardrails are designed to do. Consider this post I made about a pickup truck going off the road near where I lived at the time.

The problem wasn't that the guardrails didn't work as designed... it was that an airborne pickup truck was not one of the situations that they'd been designed to contend with. But the guardrails were there; anyone happening by would see them. The point could be made that a new design may have been in order, but it was clear that they had been put in place.

And I think that this visibility is what's lacking in many of today's discussions of technological guardrails; the difference between inadequate and non-existent guardrails is non-obvious. And so for "guardrails" to be evident, they have to be so obvious as to be intrusive.

I have a set of "kitchen knives" that need to be disposed of. I nearly never used them (in part because they were just that bad), and I've finally gotten around to buying a semi-decent quality knife block with semi-decent quality knives. The "easy" way to dispose of the old knives would be to securely cover their blades in duct tape and throw them away, but I figured it was worth asking about online to find out if there were any better ways. No luck... my question was removed; likely before anyone saw it. The "guardrail" visibly did its job, but did so by presuming that my query was too dangerous for public consumption. Doubtless, there are likely people for whom that's the intended outcome, but it strikes me as overzealous.
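As a toy illustration of why that's the cheap option, here's roughly what a blunt keyword guardrail looks like; this is pure speculation on my part, since I have no idea how the forum's actual moderation works, and the blocklist is invented:

```python
# A toy keyword guardrail; hypothetical blocklist, purely for illustration.
# The point: a blocklist can't tell a disposal question from a threat.
BLOCKED_TERMS = {"knife", "knives", "blade"}

def auto_moderate(post: str) -> str:
    words = {word.strip(".,?!").lower() for word in post.split()}
    # Any overlap with the blocklist removes the post, regardless of intent.
    return "removed" if words & BLOCKED_TERMS else "published"

print(auto_moderate("What's the safest way to dispose of old kitchen knives?"))
# -> "removed": the guardrail is visible precisely because it's overzealous.
```

A filter like this is trivially auditable, which is presumably the appeal; actually understanding a question's intent is harder, and the failure mode of guessing wrong is less visible than the failure mode of removing too much.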

And while it's clearer that guardrails are working when they're intrusive, that provides an incentive for people to move to where there are no guardrails. Granted, I'm not going to go searching for a free-speech haven just to ask for a good way to ditch some kitchen utensils, but I doubt that everyone finds their questions as trivial as that one.

Sunday, February 15, 2026

A Modest Request

I saw a panhandler today whose sign read: "At least give me the finger." It was both comedic and heartbreaking. The young man appeared to be in the process of giving up for the day; he was walking away from the corner. It's a popular place for panhandlers; there is a Jack-in-the-Box there, which I suppose increases the likelihood that any given car might have someone with cash in it.

It occurs to me that I don't know whether the greater Seattle area has a relatively high number of panhandlers or not. I live in the suburbs, so while there are certain spots where panhandlers and buskers tend to set up, I've never encountered them in numbers. And even the usual spots don't always have someone there. (This doesn't stop the more conservative/fearful among the population from seeing them as symptomatic of apocalyptic levels of social disorder. It's somewhat surprising how many people apparently cannot tell the difference between panhandlers and supervillains.)

Now, while there are some panhandlers who don't strike me as being on the up-and-up, for many of them, it seems that what you see is what you get; a down-on-their-luck person who has been reduced to begging funds and/or food from passers-by in order to survive. Often it's just one person. Sometimes, there will be a mother with her child(ren) or a family. Childless couples, however, are vanishingly rare; perhaps they tend to split up to work different places.

Today was sunny and warm, especially considering it's only mid-February, so it wasn't a terrible day to have to be out of doors. But neither Winter nor the rainy season are over yet, so we'll see how things work out.

Of course, the real problem isn't the weather; Seattle's climate is fairly mild, when compared to some of the alternatives. It's the fact that Seattle, like pretty much every other place in the United States, understands itself to be too poor to devote enough resources to the problem to actually solve it. This is, in part, due to a lack of coordination, and a willingness to defect... While Texas and Florida made headlines for putting migrants on buses and sending them to large cities in more liberal-minded states, the practice of shipping homeless people off to become somebody else's problem goes back a lot farther than that. So any city that actually starts to make a dent in its own homeless problem risks becoming a target for elected officials elsewhere looking to find someone else to foot the bill for their own homeless populations.

It's also a side effect of the individualistic culture that has grown up in the United States. It's not hard to find someone who will claim that living-wage jobs are freely available for the asking; that was true even when unemployment was significantly higher than it is now. (Of course, asking them just where said jobs were located rarely resulted in answers.) And when the impoverished are viewed as intentional freeloaders, who could get back on their feet whenever they wanted to, people who give are seen as chumps; a perception that many are keen to avoid.

I doubt that I'll ever see the young man again. Panhandlers tend to be a transient population. I'd like to say that as long as he maintains his sense of humor, he'll be okay. But that places the onus back on him, and I know he needs more than that. 

Saturday, February 14, 2026

Demonstrated


There was another protest today, and it was a good day for it. I'm still of the opinion that deep-Blue Washington state is not the most effective place for it, but it's really not about that.

Friday, February 13, 2026

Bad Read

Representative Ro Khanna (D-California) read out six names that had been redacted, and then unredacted in "the Epstein Files." According to the Department of Justice, four of the names were of random people who had been in a photo lineup. According to Representative Khanna, the fault lies with the DoJ.

While it seems patently evident that the Department of Justice has been sloppy with its handling of the documents, I think that ownership of this particular screw-up belongs to Representative Khanna, simply because it had already been established that simply being named in the set of documents released, or even knowing Jeffrey Epstein, is not, in and of itself, evidence of guilt. Representative Khanna blames the DoJ for not explaining earlier why the names were in the documents, but it shouldn't have been up to the DoJ to make clear what everyone already knew.

The idea that there was a smoking gun, being hidden by the Department of Justice, that would blow the lid off of a ring of powerful men who were into sex with teenaged girls, always rested on the ideas that a) Jeffrey Epstein compiled information on people who were committing crimes along with him, and b) that he pretty much exclusively surrounded himself with other people who were into sex with underage girls. That's what it takes to believe that the simple fact that one's name could be found in the documents made one a wealthy and powerful person who was engaged in the rape of minors.

Hoping that Q-Anon's (remember them?) obsession with the idea that there was an Illuminati-like ring of pedophiles running around sleeping with children would become a weapon against President Trump was a bad idea from the jump, based as it was on the conjecture that enough people could be peeled away from the Trumpist coalition on that basis to weaken him politically. Personally, I'd hoped that Democrats would give up on being anti-Trump in favor of being pro-fixing-the-things-that-need-fixing in the United States, but it turned out that the Democrats were more than capable of remaining single-minded for longer than I could remain irrational.

It would be nice if this blunder dialed back the strange alliance with conspiracy theorizing that seems to have become popular with the political class (it has zero chance of ending it), but I doubt that it will. Too many people have hitched their wagons to the idea that this will be the straw that breaks the camel's back, apparently unaware that, thus far, it's been a very resilient camel.