Monday, March 30, 2026

Promptly

With the understanding that I can't validate that this is even legitimate, this is another of those things that popped up on LinkedIn for people to have a good laugh at. It strikes me, however, mainly as weird. Sure, on the surface it's yet another "someone meant to have generative automation write something, and wound up sending the prompt, instead," but the prompt itself seems off to me.

"A warm but generic rejection email that sounds polite yet firm."

Don't companies have those? Who's actually expecting something other than a form letter? Why craft a new "generic" message for each rejected candidate? Isn't reusability the point of "generic?" This gives the vibe of using generative automation for its own sake: "We need to burn compute on a triviality to show that we're 'AI-forward'."

"Do not mention specific reasons for rejection."

I understand the rationale for this part of the prompt, but it still strikes me as risky. After all, there are likely non-specific reasons for rejecting a candidate that generative automation could come up with that would still be a problem, if they aren't related to the job at hand. That strikes me as something one would want laid out beforehand, for just that reason.

"Make the candidate feel like they were strongly considered even if they weren't."

Considering that the automation likely wouldn't know one way or the other to what degree a candidate was considered, I can understand having it default to implying that everyone was strongly considered. But I'm not sure that it's a good idea to have LLMs tell people something that may not be true... Once it's considered legitimate to have generative automation mislead candidates, even to spare their feelings, I'm not sure how one keeps people from asking the LLMs to deceive other stakeholders. And I'm not sure it takes much imagination to see how that starts ending badly, especially if the automation starts telling outright untruths.

"Remember to use the candidate name and company name variables."

Why is the company name a variable? Does it change somewhere along the way? This gives me the impression that this is coming from a third-party recruiter, who works with a number of different clients. I suppose that a holding company could have a lot of smaller companies under its umbrella, and centralized HR for all of them, but given that the company name shows up in other parts of the e-mail, it doesn't seem necessary to call it out again. And again, why not use a form letter? There's nothing in the prompt that calls for any candidate-facing personalization from their résumé or cover letter. I'm not sure what just using their name is supposed to do.

Of course, the fact that a prompt was sent to a candidate who was supposed to receive a rejection message means that messages aren't being vetted prior to being sent. Which makes some sense... after all, generative automation is supposed to be able to handle all of this. But even setting the prompt screw-up aside, if the idea is to generate responses to candidates on the fly, it seems that it would be wise to have something that checks things before they go out, if only to make sure that something entirely random didn't find its way into the message.

The final thing that stood out to me was the redaction. I understand why the candidate wouldn't want their name out there, but blanking out the company speaks to a fear of retaliation that I'm not sure is healthy. It's not like there's something in this message that points to anything criminal, or even unethical... a prompt was screwed up along the way. If pointing that out publicly is the sort of thing that would lead an HR department to blacklist someone, maybe we as the public (and yes, I include myself in that) need to start having higher expectations of the businesses we give our money to.

Sunday, March 29, 2026

Motion

It's one thing to say: "The one constant in all of my dysfunctional relationships is me," but quite another to understand what that actually means for one's life.

Especially when one has, as I do, an internal locus of control, because that means that looking back on those relationships, and why they're dysfunctional, leads to the self. And one of the other traits that tends to go along with an internal locus of control is a certain lack of self-forgiveness.

Being the agent of the dysfunctions of one's life means not being the person one wanted to be, or, perhaps more acutely, feels one should have been. And this is where I think that the internal locus of control can be a difficult thing to manage: it lends itself to judging the self by the immediate snapshot of one's life, and the comparison of that to a counterfactual, created either by others' lives or by an idealized version of one's own. Neither of which is a useful guide.

For me, personally (which is weird, given my general dislike of writing about myself), I've developed a tendency to accuse my past self of errors in judgment, even as I work to really internalize the idea that the choices I made, even when they didn't work out as I intended, were the best ones I could have made with the information that I had at the time. And maybe that's the stumbling block. I'm starting to think that it smuggles in an implicit criticism, even when my explicit goal is to avoid being self-critical.

And maybe that's because self-criticism is easy. It can be painful at times, but it doesn't really ask much of a person other than to take a look at some version of themselves and find them wanting. And it feels like a step on a path to change, even though there's no reason why the two are related. But self-acceptance doesn't mean accepting stasis, even if such a thing were possible. I'm starting to find that this is a more difficult lesson than it's given credit for.

Friday, March 27, 2026

Small Time

Iran-linked hackers breach FBI director's personal email, publish photos and documents

Is that all?

Okay, who cares? "We in ur e-mail, posting ur pics," doesn't really seem to move the needle in a shooting war. I would have thought that Iranian cyber-warfare would be more... warlike. If getting into Kash Patel's Gmail account is the best they can do, why are they bothering?

While President Trump's random boasting about Iran suing for peace comes across as complete fantasy, it's still been fairly clear that this is a one-sided war to this point, as Iran has no real way of defending its territory from U.S. air power. Accordingly, the United States can strike pretty much when and where it wishes. And, the legitimacy and necessity (and maybe the actual drivers) of this particular conflict aside, the Iranian military was unable to protect its Head of State, and has been shown to be unable to protect its own high-ranking members in the past. A simple hack of someone's e-mail account doesn't do anything to make the country seem more able to make a real fight of it.

Now, that could change if the United States puts soldiers on the ground in Iran. Taking and holding territory is always more difficult than launching munitions from a distance. But it's not like this exfiltration of data from Director Patel's personal e-mail account shows that Iran is more capable in that regard than one may have first thought, either.

In the end, this sounds like empty boasting. I guess we'll see if it turns out to be more than that.

Wednesday, March 25, 2026

Altared State

But to me, the thing that I take out of that is that there are gamblers who, for whom sports betting is their religion, right. They equate their sports betting communities and behaviors to kind of religious, a religious experience. Like, it is part of; it is their community, their identity, it's who they are. And I think that's a social catastrophe in the making, right. Like, sports betting, whatever you think of it: maybe it's a vice that needs to be much more heavily regulated, maybe if you have a more Libertarian approach, it's a fun hobby that a few people will, you know, turn into a bad thing in their lives, but for most of them it's, you know, a source of enjoyment. Um, it should not be central to who you are. It should not be a religious experience. And if it is, I think that it's that much more dangerous as a phenomenon.

McKay Coppins. Plain English With Derek Thompson; "The Casino-ification of America"

As someone who isn't religious, and has little use for concepts of meaning, the immediate question that this raises for me is why one source of community and identity is necessarily better or worse than any other. After all, one could make the point that religion can be either a vice or something enjoyable that a few people will turn into a bad thing in their lives. What is it about sports betting, in and of itself, that means that when people make it central to who they are, it's more dangerous than when people make religion central to who they are? I've seen people neglect things they claim are important to them, like family, friends or career, in the service of becoming closer to their idea of the Divine. I've seen people give away their money until they were impoverished, tolerate remarkable levels of what would otherwise be considered abuse and even kill in the name of their faith. Why is that not dangerous?

It strikes me that anything can become important enough to a person that it becomes dangerous; that it becomes something that they, and some number of the people around them, would be much better off had it never entered that person's life. And it's the effects that it has on the person's life, not the thing in itself, that are the dangerous phenomenon. The person who is willing to trade their material well-being for community and identity has a problem, regardless of the specific thing that they've latched onto while seeking community and identity. Whether that's a connection to the Divine or an expensive hobby is beside the point.

Derek Thompson, the host of Plain English, is fond of saying that dystopias don't come from bad ideas, they come from good ideas taken too far. I believe he makes the point twice in just this one episode. Giving the things that are important to one a pass may be a good idea, but it's one that's easily taken too far. Because it prompts one to stop looking at the actual things that are being done, and the effects that they have, and instead to focus on what's doing it. It's prejudicial in the same way that judging a person guilty or innocent based on who they are, rather than on what acts they have committed, is. And it doesn't take much for it to be just as corrosive.

So I don't see the rationale for why some things "should" be religious experiences and other things "should not." If a career can be central to who a person is, why can't a hobby be, as well? Now, to be sure, gambling on sporting events strikes me as much more likely to lead a person to places that they will find both highly unpleasant and extremely difficult to extricate themselves from than something like, say, being a Certified Public Accountant. But that has little to do with one's ability to build one's community and identity around them.

But it's easier to decide that the downsides aren't worth the benefits for activities than it is to sort out who will, or will not, take something and go off the rails with it. And it's easier to see the downsides, and to decide that they outweigh the benefits, for things that the person doing the judging does not find to be important. For my part, I don't really care which altar someone worships at, if it brings them what they're seeking from it. And when it doesn't, when it demands more than it can give, all altars are equally dysfunctional.

Monday, March 23, 2026

Literacy

There was a post on LinkedIn about the cancellation of the U.S. launch of the revenge horror novel "Shy Girl," and its withdrawal in markets where it was already available. In the LinkedIn post, the author made the following observation:

Use no AI and you're mocked for not being innovative. Use too much and you get cancelled.

Which may be true, but I would note that it wouldn't be by the same people. And that makes the answer relatively simple: know your market and your target audience.

The problem with using generative automation on a revenge horror novel, it seems to me, is that it's the sort of thing that relatively affluent young people read, and, as I understand it, middle-class young people are very opposed to generative automation, especially in the arts. Not that this will stop anyone. Because generative automation will make the process of producing a novel shorter and easier, people are going to keep searching for ways to get around any public distaste. And eventually, someone will succeed, and it won't be until after the book becomes a best seller that word gets out, at which point the damage will have been done. A publisher may be able to claw back the author's part of the proceeds, but the understanding that there's money to be made will push more efforts to repeat the trick.

And eventually, people will be faced with a choice. And if history is any indication, they're going to make the one that makes life less expensive for them in the short term. But for the time being, the best thing that authors and publishers can do is read the room.

Saturday, March 21, 2026

Uninstall

 

While the fact that Facebook is a privacy disaster has been understood for some time, I don't think that this sticker, which I found on the back of a parking sign, will influence all that many people to leave the platform. Facebook's network effects have led to a high level of lock-in for their users, many of whom have made the site the primary, if not the only, way to find them online. The fact that constant digital surveillance is the price of that has become understood at this point.

Friday, March 20, 2026

Differentiated

It's been some time since I've used a graphic for The Short Form, mainly because, as I've mentioned before, it's hard for machines to read the text in a picture. But I was talking with an acquaintance earlier this week, and this part of our conversation stuck with me.

There is a difference between preventing bad outcomes, and preventing them from happening to oneself.

I suppose that it's an obvious sentiment, but I don't know that it's thought about all that much. In a lot of ways, it's like the difference between using The Club, and installing a LoJack, or other locator system, in a car. The Club is an obvious theft deterrent; its goal is not only to make it more difficult to take the car, but to be obvious about that fact, so that the would-be thief moves on. But it doesn't really change their incentives; they simply look for a car that doesn't have such a device, and attempt to steal that one.

LoJack, and other locator systems, on the other hand, while being inconspicuous, carry a much greater risk to the thief if they do, in fact, steal the car... after all, it can be tracked by law enforcement, and that leads to a higher chance of being caught in a stolen car. But the fact that one cannot tell by looking if a car is equipped with a locator means that taking any car in a neighborhood where they're known to be in use carries higher risk. And this is why these systems often carry discounts in insurance premiums: they lower costs for insurers more broadly, and it's worth passing some of those savings on to those who have the systems installed.

This all came up in the context of the supposed generative automation apocalypse that's coming for certain sectors of the knowledge workforce. While a lot of people are offering various advice, from learning how to supervise automated systems to dumping the industry entirely and shifting to skilled trades, the general viewpoint is the same: This is going to happen, here's how you take care of you. It's modeled on The Club... a car is going to be stolen; this is how you ensure that it isn't yours. But maybe a LoJack model, trying to head off the worst of the transition in general, for everyone, would be better for all involved.

Wednesday, March 18, 2026

This Side, That Side

"Long term, you tend to remember that kind of negative branding," [University of Alabama Marketing Professor Karen Anne] Wallach said. "And negative language then becomes part of what you associate with the brand."

The tech startups NPR spoke with for this story said they understand the risks of alienating large numbers of people with their cryptic ads. But the upside is too great.
Do you understand this billboard? If not, that's the whole point

While this might seem to be just another story about tech, and how it divides people into groups, the above points to something important about in-group and out-group signalling. Sometimes, alienating the out-group is what the in-group demands. Groups, in general, are defined both by who is a member of the group, and who is not. And for groups that want to maintain some sort of claim to exclusivity, who is kept out can be much more important than who is let in. And hurt feelings on the part of those kept out be damned.

For technology startups who are not attempting to sell themselves to the general public, the idea that the general public is unwelcome can be just the sort of thing that their intended customers want; because it not only sorts, but stratifies. And sometimes, nothing sells a product or service like the idea that being a member of the target audience is proof of one's own superiority.

If an advertiser is willing to accede to an expectation of flattery, even at the expense of others, on the part of the in-the-know, clearly neither the advertiser, nor their audience, expects that any hard feelings on the part of the out-group will be a problem for them. And this is nothing new. I would submit that it's been a facet of human history for as long as there has been history. That said, it doesn't make the practice any less toxic, especially in its more strident forms. But perhaps that's the problem; toxicity has become such a common part of people's everyday lives that it goes unnoticed.

Monday, March 16, 2026

To Be Divine

Superhuman Platform, Incorporated, the company formerly known as Grammarly, is facing a class action lawsuit over a feature it rolled out at the end of the Summer called Expert Review. Expert Review, which was recently removed, was effectively a "this person would make these suggestions about what you're writing," sort of feature, and claimed to offer advice from virtual versions of people like Stephen King, David Abulafia and Julia Angwin (who filed the lawsuit).

When Superhuman Platform CEO Shishir Mehrotra posted an apology for the agentic feature on LinkedIn, he noted "valid critical feedback from experts who are concerned that the agent misrepresented their voices." When Ann Handley, who identified herself as one of those experts, weighed in (before commenting on the post was closed), her primary complaint was "building a commercial feature around experts' names and reputations without asking permission, without notification, and without compensation." While Mr. Mehrotra claimed that "the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans," given that it was a subscription feature, and Superhuman Platform wasn't sharing any of the money, it seemed more like they'd simply found another way to have people work "for exposure." And there's a reason why an increasingly common response to that sort of offer is "Fuck you; pay me."

As a random layperson, the whole thing strikes me as openly unethical, but entirely sensible. If generative automation is a race, and losing carries serious, or even existential, consequences, the time to be ethical is later. Ms. Handley calls Mr. Mehrotra out for an ethos of "take first, apologize later." And while I suspect she's correct in that, it's just like any other instance of "ask forgiveness, not permission;" permission wouldn't have been forthcoming, but forgiveness will be. And this is a rational presumption to make; Uber's known flouting of laws hasn't resulted in the general public deciding that the company is too untrustworthy to do business with. And it's unlikely that the Court of Public Opinion will render a different verdict for Superhuman Platform. Investors, on the other hand, are quick to flee a company that's unwilling to do what it takes to make itself more profitable, and they bear none of the risk for the actions the company takes in pursuit of those profits. It's not like anyone is going to spend time in prison over this, and even if someone were, it wouldn't be the investors; so why wouldn't they push for companies to place profitability over ethical considerations, given that it's unlikely that people and businesses with Grammarly subscriptions are going to go elsewhere?

The only way to stop companies (and people, for that matter) from preferring to ask for forgiveness rather than permission is to be consistently unforgiving, regardless of outcomes. And that's a hard sell in a culture where many people's primary focus is their own sense of (or concern for) poverty. People may be angry when someone cheats them to pass the savings along to someone else, but they're often ready to look the other way when the savings are being passed along to them. And businesses know this; their executives are members of the public, just like everyone else. They may often speak in the stilted language of finance and investment, but they're not aliens.

Some heads may roll over this; if he's unlucky, Shishir Mehrotra's will be one of them. But Superhuman Platform, Incorporated will survive. People and businesses will still pay to use Grammarly, and investors will still see returns. And that all but guarantees that "take first, apologize later" will remain the standard order of operations.

Sunday, March 15, 2026

One of Three

I started listening to the most recent episode of EconTalk, in which Professor Roberts interviews one Hanno Sauer about the latter's new book: The Invention of Good and Evil. I have to admit that I gave up not too long into it, in part because of this statement from Mr. Sauer:

 And, now you get the opposite problem when you move to a naturalistic Darwinian framework. All of a sudden, the default assumption seems to be that it's 'nature, red in tooth and claw.' It's dog-eat-dog, it's elbows out. Everyone is selfish. Everyone is essentially sociopathic. Right?

And, now you get the problem: Okay, evidently there is friendship and heroism and love and altruism and sacrifice. But, where do those come from? It seems to not make any sense.

It irked me, because the basic idea that, under "a naturalistic Darwinian framework" that "everyone is essentially sociopathic," doesn't actually come out of any of Mr. Darwin's work. As I noted in my (unfinished) blogging of my way through On the Origin of Species:

There are three distinct facets to the Struggle for Existence, as Darwin explains it - competition within a species, competition between species, and mitigating the hostile effects of one's environment.

Mr. Sauer's book, rather than seeking to correct the misconception that the "default assumption" should be that competition within species is the norm, leans into it. And I found myself asking why. Or, on the larger scale, why does the misconception persist so? I can't possibly be the only person who has read Charles Darwin, or recalled that person-to-person competition is only part of one of the three primary conflicts that Mr. Darwin identifies. So why don't more people push back against it? Why accept the hostile framing that "the Darwinian view of Evolution requires one to be murderously pseudo-Machiavellian," and then try to argue that unselfishness can grow within it, when it strikes me as much easier to point out that "friendship and heroism and love and altruism and sacrifice" make the other two conflicts much easier, and start from there?

Speculation on other people's motives is often a one-way ticket to creating a strawman argument, so I won't indulge in it, other than to say that there must be incentives at play that I am either unaware of, or not fully crediting. Because while it may seem unreasonable to me, there are assuredly reasons for it that people feel are worthwhile.

Of course, it may simply be that the misconception is widely held enough that people don't always realize that it is, in fact, a misconception. It's like Fyodor Dostoevsky's bit of dialog in The Brothers Karamazov, where Ivan notes: "If God does not exist, anything is permissible." This is commonly taken to be absolutely true in much of the Western world, especially by Christians, despite the fact that there is nothing in the viewpoint of Moral/Ethical Realism that requires some sort of divinity to create the rules, just as there is nothing in Mathematics that demands some sort of divine order for 2 + 2 to equal 4. Perhaps it's just easier to set out to prove the argument incorrect than to point out that it doesn't actually seem to make any sense, given the world as we understand it.

Saturday, March 14, 2026

Discollected

While I was thinking about the idea of collective action to change the fate of the job market, I noted that the United States is a very individualistic culture. And considering that a bit more deeply, it occurred to me that that may have been what was behind George Will's observation that here in the United States, we don't prevent catastrophes, we clean up after them. And maybe that's because prevention requires genuine collective, cooperative action, while clean-up can be countless individual and small-group efforts, localized to the specific places that people care about.

I think I need to buy some subscriptions to quality news sources. I'm starting to realize how impoverished my thinking can become when I don't have access to good thinkers, even if I may otherwise disagree with them.

Friday, March 13, 2026

Shifting

I was looking at the Bureau of Labor Statistics' Employment Projections, and the World Economic Forum's Future of Jobs Report, both of which were updated/released last year. Both of them had software developers on their lists of the fastest-growing jobs. The WEF predicted that Software and Applications Developers would see Global Net Growth of 57% between 2025 and 2030, while the BLS predicted that Software Developers would grow by 15.8% between 2024 and 2034.

It's easy to look at the numbers of layoff notices that have rocked the technology industry in the United States and decide, on that basis, that bureaucrats don't know anything, but of course they couldn't have known what choices people were actually going to make. One can fill out a survey or answer a questionnaire, and then have other factors come into play that result in different decisions being made. And, whether we like those decisions (or their impacts on our lives) or not, people are remunerated quite handsomely to make them.

And that's what came to mind when I saw this chart, in the Future of Jobs Report. It predicts that the share of work done by people, without recourse to automation or some sort of automated enhancement, will drop from 47% in 2025 to 33% in 2030, while the share of work done solely by automation grows from 22% to 34%.

And it's with these numbers in mind, I suspect, that people proclaim dire warnings of what will happen to people who don't pivot into the jobs of the future (many of which pay less than the jobs of today). But this decline is no more a given than the increase in software development jobs was. This, too, is something that's going to be driven by the choices that people make. And maybe what's needed is for more people to be involved in those choices.

Now, Dario Amodei may be correct, and what he terms “powerful AI” may indeed create a “country of geniuses in a datacenter” that's just better at everything we do than we are. But until that comes about (and, given human history, likely even when it does) we have choices as to what we value. There's no reason to presume that it's impossible to direct where the future is going to go by adding some intentional design to the mix. I've said before that a question that bears answering is what new demand for human labor generative automation is going to create. But that buys into the hostile framing that posits that valuable work for humans will be relegated to the leftovers that automation, even if otherwise ubiquitous, can't do. Maybe, as people, we'd all be better off if there was an active effort to find/create and then nurture roles that lie outside of the capacity of machines to do, and to start moving towards them now. (Normally, I go out of my way to avoid using the word "we," since it tends to be something of a weasel word, but here, maybe, enough of humanity is in the same boat that "we" makes sense.)

Because if it's undesirable that the World Economic Forum's prediction that out of every 100 workers, some "11 would be unlikely to receive the reskilling or upskilling needed, leaving their employment prospects increasingly at risk," turns out to be true, perhaps the onus is on coming up with something that those 11 can do that makes good use of the skills they already have.

Passively accepting the idea that automation is a bear coming for the job market, and so people's primary goal should be running faster than enough other people that the beast is satiated before it gets to them, is a recipe for disaster. The people the bear seeks to eat are unlikely to go down without a fight, and the conflict could wind up doing much more injury to the collective than the bear ever could. Here in the highly individualistic United States, this may be something of a heresy, but perhaps it's time that people decide to hang together before technology, and the incentive structures behind it, hang everyone separately.
 

Wednesday, March 11, 2026

Scoreboard

Muslims don't belong in American society.

Pluralism is a lie.
Representative Andy Ogles (R-Tennessee)

Cue Democratic "outrage" and Republican silence.

Representative Ogles isn't the first House Republican to make such statements on social media.

Few, if any, Congressional Republicans reacted publicly to any of the posts.

But Congressional Democrats were quick to denounce it.
Tennessee GOP Rep says Muslims 'don't belong in American society'

Okay... and?

This sort of thing strikes me as pandering from both sides of the aisle. It may as well be a script. Republican lawmaker from some overwhelmingly White, Christian part of the country makes a disparaging statement about Moslems. Democrats denounce the statement and call for resignations or some punishment. Republicans, who have no Moslem members in Congress, simply say nothing. The people who care among the voters for the two groups are happy with how their side responded. Nothing changes.

What I don't know is how many people care. There was an attempt by American Moslems to lean on the Democrats by staying home back in 2024, mainly over dissatisfaction with how the party was dealing with the fighting between Israel and Hamas. I'm not sure that it worked as well as they would have hoped, mainly because they had nothing to offer Republicans other than not voting for Democrats, and it's pretty clear that the GOP had no real need for Moslem support. So they've become convenient targets for members of the Freedom Caucus who feel a need to show their constituents that Congress shares their prejudices.

Meanwhile, Democrats get to show themselves as making a lot of noise about it, but they never accomplish anything. They simply don't have the votes, and the districts held by members of the Freedom Caucus are Red enough that they wouldn't vote for Democrats to save their lives, let alone in support of a more pluralistic society. So Democratic denunciations come across mainly as virtue signalling.

Honestly, it's all an exercise in virtue signalling... only the standards of "virtue" are different.

The media helps by portraying all of this as newsworthy on the national stage. It allows everyone to be performative in front of larger audiences, but it enlightens no-one. It's hard to imagine anyone who isn't aware, at this point, of how all of this works. Still, people have to be allowed to put points on the board, even if no-one's actually watching the game.

Monday, March 9, 2026

Misfired

At the risk of coming across as flippant, I'm going to quote Superman, from the DC Comics series Kingdom Come. "You can't have a war," the Man of Steel said to Wonder Woman, "without people dying." To which most people, I expect, would respond with something along the lines of: "That, we knew already." People generally understand the nature of war. While it might not be true that "War never changes," there are certain things that tend to be constants, like casualties.

After the first three deaths were reported, Trump told NBC News on Sunday: “We have three, but we expect casualties, but in the end it’s going to be a great deal for the world.”
[...]
Then in a video posted to social media the same day, he again seemed to ask for people’s understanding about the subject.

“And sadly, there will likely be more [deaths] before it ends,” Trump said, before adding: “That’s the way it is. Likely be more.”

He then added: “But we’ll do everything possible where that won’t be the case.”
Trump’s and Hegseth’s awkward comments about US troop deaths in Iran war
But another constant is the deaths of non-combatant civilians.
Speaking aboard Air Force One on Saturday, President Trump accused Iran of being responsible for the school bombing.

"Based on what I've seen, I think it was done by Iran," Trump said. "Because they're very, inaccurate as you know, with their munitions. They have no accuracy whatsoever. It was done by Iran."
Video appears to show U.S. cruise missile striking Iranian school compound
On the one hand, I understand the President's looking to shift the blame. After all, he's been pushing a narrative of the United States being the unambiguous Good Guys in this conflict, even if it looks like, once again, President Trump using the military to go after a nation that no-one else is close enough to that they'd be willing to stand up for it, and that doesn't have the wherewithal to fight back in kind.

But on the other hand, there's nothing new or unusual about inaccurate or outdated intelligence, or weapons not being quite as "precision guided" as they're advertised as being. People die in wars. And sometimes, they're people that everyone would rather had not been killed. The history of war is littered with people who had the misfortune of happening to be somewhere that a weapon also happened to be, but who weren't the intended, or presumed, targets of that weapon. Why would anyone expect this particular war to be any different?

It's reasonable for people in the United States to want their nation to have clean hands. It's less reasonable to expect that a war being fought mainly with long-distance weapons is going to result in clean hands. And if the President wants to keep American casualties to a bare minimum, then the United States is going to have to do much of its fighting from a distance. And the more that the war relies on hitting targets from a long way away, and on reports of what's where and who's who, the more there are going to be times when a bomb, or a missile, or whatever hits someone it wouldn't have, had someone realized precisely who was in the line of fire. The Commander-In-Chief, of all people, should be prepared to own up to that.

Sunday, March 8, 2026

Talkative

I had just gotten out of the car when I heard it: "Hello. Hello." The voice sounded strange, like that of an elderly person, but more high pitched than one would expect.

I looked around for the source, and then heard it again; "Hello. Hello." Now I realized that it was coming from above me. I looked up, and, there in a tree overlooking the walkway was a crow. "Hello. Hello."

"Hello, hello, little crow," I said back to it, cheerfully. It really didn't seem to take notice of me. It simply repeated "Hello. Hello." every ten seconds or so.

I had shopping to do, and a time limit on top of that, so I left the talking bird to converse with my car and went into the store. While I was wandering the aisles, it occurred to me that I'd heard that crows could do this; they were one of any number of bird species that could mimic sounds from their environment. But this was the first time that I'd actually encountered a crow mimicking a sound, let alone a human voice.

So now I'm curious as to why it seems to be so rare an occurrence. After all, there's no shortage of the birds in this area; I see and/or hear them pretty much every day. And when it comes to grocery store parking lots, and other places where one might encounter dropped or discarded food, they're effectively a constant presence. And while Seattle and the Eastside are much quieter (at least as it seems to me) than my native Chicagoland, there are still plenty of sounds to repeat.

It's possible that I simply haven't been paying close enough attention, so I'll have to be more alert in the future, to determine if there are more talking birds in the area. 

Saturday, March 7, 2026

Group Think

I was reading "Reclaiming Democracy From the Market," with MIT economist Daron Acemoglu sitting down to interview Harvard political philosopher Michael J. Sandel. Professor Acemoglu opens with:

From our conversations, and even more from your books, I have the sense that you see political philosophy as not just an inquiry into abstract concepts or a search for absolute truths, but as part of an ongoing dialogue with society about how we should organize our collective life, what we should value, and what we should resist.
This raised an immediate question for me: Who is the "we" Professor Acemoglu was referring to? Sure, one can make the case that it simply refers to "society," but even then, there is a question, because it's unlikely that a society is going to be unanimous about its values and the like. But just as importantly, how does "society" represent a "we" in a way that "the Market" does not, if they are the same people?

At one point Professor Sandel notes:
But even if the wealthy paid their taxes, they might still enjoy a kind of honor, prestige, and esteem that is out of proportion to the value of their contribution, especially when compared, say, to teachers or caregivers.
Okay... so? Honor, prestige, and esteem, unlike something like attention, are not rivalrous; I can give as much prestige as I like to whomever I choose, without reducing the amount I have "left over" to give to other people. So why does there need to be a society-wide dialog as to how much any given person is valued?

This is why I'm dubious of ideas, like those of Professor Sandel, that imply that certain choices should be collectively, rather than individually made, when all that really comes down to is some number of individuals deciding that their choices should trump everyone else's. Because, at least as I understand it, markets do represent a kind of social choice; it's simply emergent from a number of individual choices rather than a large group deliberation. So what really does deliberation create that can't otherwise be obtained? Certainly not unanimity. True, collective action gets a group around collective action problems, but even that's different than presuming that this creates some sort of unity.

And I think that this is what kept nagging at me as I read the interview... the idea that some sort of problem-solving solidarity would emerge without any mechanism being proposed for how that would happen. "Democratic deliberation" may be a great thing, but it's not magical. It can't bring together subsets of the populace who are actively at cross purposes with one another, or create enough of a scarce resource to share between people. Granted, markets don't necessarily do any of that, either, but I'm not sure they purport to.

This isn't to say that markets are necessarily better solutions to social problems than democratic deliberation (although they tend to be faster to operate), but in a way that's the point. There are problems, like what happens when one group considers the actions of another group to be an active threat to themselves and/or their interests, that neither institution is well-suited to solve. 

Friday, March 6, 2026

Chatty

I stopped by the bookstore this evening, and saw this rack of ChatGPT-related magazines. Having picked up an earlier one from one of the same publishers, I understand that they're about how to use generative automation more broadly, so it's interesting that they still treat the public as equating "ChatGPT" with "A.I." more broadly. Personally, I don't think that it's true anymore, but maybe it's just the circles I run in.
 

Wednesday, March 4, 2026

Available to Everyone

The United States Supreme Court has declined to hear an appeal of a lower-court ruling upholding the U. S. Copyright Office's position that copyright only applies to works by human authors. The Court had also rejected another appeal, by the same plaintiff, of a ruling that affirmed a similar policy on the part of the U. S. Patent and Trademark Office.

I'm not an intellectual property lawyer, but it appears to me that between these rulings, items created by generative automation, and genuine artificial intelligence, if/when it comes along, are not eligible for intellectual property protection. In the case of most audio/visual media, I'm not sure that this will really move the needle all that much, at least at the start. But in the case of inventions, it could have repercussions. If part of the promise of automation is that it could create new medically-useful drugs, or create other products, the inability to patent them may be a strike against broad adoption of the technology for such purposes. Given this, it seems unlikely that large companies will take this lying down. Still, I doubt that they'll attempt to directly re-litigate these sorts of cases; it's highly unlikely that this, or a future, Supreme Court would reverse itself on this simply because it was Pfizer Inc. bringing the appeal, unless things had gotten to a point where the Court simply stopped caring whether the public felt that it was openly in the pocket of Big Business.

And so that leaves Congress. If corporations are going to want to outsource their research and development to some datacenter somewhere, and still be able to claim a government-enforced monopoly on whatever it is said datacenter comes up with, intellectual property law will have to change. And, regardless of what individual Representatives and Senators might say, Congress tends to be very willing to openly ally itself with business interests, and then make the case that they're doing it all in the name of helping the general public.

Of course, it's unlikely that the overall business community will be aligned on this; there are likely to be some sectors who feel that computer creations having to be either closely-guarded trade secrets or effectively in the public domain works in their favor, and so I can see lobbyists working both sides of the issue here.

But there's (as there always is) a simpler way, perhaps, to deal with such issues: lying. I wouldn't put it past anyone, especially not someone who feels that they've created an amazing new advance in some field or another, to simply claim that a person invented it. The same goes for artwork, for that matter; launder something through Photoshop enough times, and would it be possible to determine that the original had been created by a machine? In this way, I can see detection of automation-generated outputs becoming a big business, if for no other reason than the amount of money that could be on the line.

There's also another angle: If the Copyright and Patent/Trademark Offices won't grant protection to the outputs of autonomous automation, that's another obstacle to the idea of a one-person company with a billion-dollar valuation. Because if they can't copyright or patent the products or services that the agents produce, they'd have to be in a business that's extremely difficult to copy.

Monday, March 2, 2026

Picking Sides

Over the weekend, there was an Ipsos/Reuters poll that covered the ongoing attacks on Iran and the Trump Administration's use of force in general. While the headline proclaimed "Just one in four Americans say they back US strikes on Iran, Reuters/Ipsos poll finds," for myself I wonder if that was what was actually being measured. Consider the results that drove the headline:

While the Democrats booed louder than the Republicans cheered, there's still a pretty clear partisan divide in the numbers, to the point where I wonder if this is really a poll about partisan identity. I'm pretty sure that Ipsos/Reuters weighted their results to better align with what they understand the current partisan percentages to be, so it's unlikely that the percentages given reflect the raw numbers. It is interesting, however, where the numbers for partisans do and do not align with the "Other" category at the bottom of the graph. It's also interesting that, in terms of the "No" choice, the numbers for the "Other" category roughly align with those for all survey participants, given the broader variance in the other two options.

Overall, the Democratic-identified participants come across as the most reflexively partisan, in the sense that they are more likely to disapprove than Republicans are to approve, less likely to approve than Republicans are to disapprove, and less likely to be undecided about the matter. This could give Republican office-seekers heartburn come this year's election season, as the Democratic coalition tends to have more high-propensity voters, as I understand it. If that holds, and the lower-propensity voters who would otherwise lean Republican stay home, the Democrats may find that they have enough new seats in Congress to actually change things, at least on some level.
 

Sunday, March 1, 2026

Salesmanship

Part of me wants to ask: If generative automation is so great and wonderful, why are there so many messages that seem to attempt to threaten people into using it? Like the following example:

taste, domain experience and relationships are still incredibly valuable but refusing to use AI for the tactics and execution part of your job is a one way trip to being unemployed 

plan accordingly
But I suspect that I know the answer to that.

If I'm going to pitch generative automation to you as a positive thing in your life, something that can solve problems for you, I have to actually know you well enough to have an understanding of what your problems are. It doesn't do me any good to say that you'll be able to write 10x more code, if you don't write any code for a living.

But a claim that not using generative automation for tactics and execution will result in unemployment doesn't require me to really know much about the actual job someone is doing. There are a lot of jobs that have some tactical and execution-related functions attached to them. So the message of "use automation, or else!" can seem more broadly applicable.

This trade in anxiety doesn't serve anyone well, because its primary purpose comes across as setting people up to be blameworthy for any eventual misfortune: "Oh, you can't find another job that will support you and your family? Should have leaned into AI harder!" And what good does this do anyone?

In the end, it's an odd message: "This is critically important to you, but not so important that I feel any need to offer affirmative guidance on how to do it." And in this, it feels like American individualism talking, in that it doesn't care if anyone else succeeds. Which may be the point all along.