Thursday, March 28, 2024

Transacted

Trump has developed a sense of impunity when it comes to religious messaging, forged through a grand compromise with Christian conservatives who see him as a flawed — but effective — champion of their movement.
Trump's Bibles and the evolution of his messianic message
Okay, I'll bite. What "grand compromise" is Axios referring to? I don't really see Donald Trump as having conceded anything to Christian conservatives. Sure, he pays lip service to religiosity and advances policies that Evangelicals like, but there's no indication that any of this was preceded by a negotiated agreement with some form of conservative Christian leadership. His buy-in to the idea that the "War on Christmas" has evolved into a broader "War on (conservative) Christians" was not something that Mr. Trump needed to agree to in order to obtain the support of a section of the electorate; it's a basic part of his general modus operandi of finding a fight already in progress and picking a side.

The whole point is that there was no compromise, in the same way that an expression of gratitude after a gift isn't understood to be a compromise. The attitude of Christians who see the former (and maybe future) President as a champion for their efforts to elevate their values and interests to a privileged place in American life has shifted from something along the lines of "God may use the flawed to further His ends" to "he's genuinely one of us." The fact that, over the past decade, he's shifted from someone considered not to know one end of a Bible from the other to being considered more religious (among his base, anyway) than his famously Evangelical Vice-President shows that while there may have been a transaction here, it wasn't a compromise.

Similarly, prominent conservative Christians are well past the point of "holding their noses" to support Donald Trump. And they haven't needed to, or been asked to, give up anything in the name of forging an alliance with Mr. Trump, or Trumpism more broadly. Those people who were willing to air opposition to, or even reservations about, taking a seat on the Make America Great Again bandwagon have been sidelined, mainly because the understood, if not always openly stated, goal of the entire MAGA project goes beyond rolling back the clock to a supposed halcyon age of the supremacy of Christian leadership from the pulpit. It's to implement a vision of Christian faith as the exclusive foundation of all American ideals; to move to an understanding that in order to genuinely believe in (and thus work to implement and sustain) ideas such as "equal protection under the law" or "freedom of expression," one must be an open, practicing Christian. It's an outgrowth of the idea that ethical behavior itself requires a belief in the Abrahamic God, and that the only correct faith in that deity is the Western understanding of Christianity. And this means buying into the idea that the supernatural war between "Good" and "Evil" is playing out in the material world.

Donald Trump has been able to insert himself into this narrative through his support of the worldview that Evangelicalism holds. Accordingly, his legal troubles stem not from a failure to "give to Caesar what is Caesar's," but from the "fact" that attempts to advance the cause of the divine in the world will be met by those who, inadvertently or knowingly, are on the other side. Much of modern American Christianity sees itself as persecuted because of an understanding (and this is a more common viewpoint than perhaps it's given credit for) that its values and goals are demonstrably the best thing for everyone, rather than a set of interests that are in opposition to those of other groups.

It may be very accurate to describe the political relationship between Donald Trump and the Christian Right in the United States as "a grand bargain." Both sides bring something important to the table, both sides see powerful benefits from the arrangement and each apparently believes in the other's sincerity. It's a good match. To look at that, and characterize it as "a grand compromise" is, I think, to demonstrate a lack of understanding of what's at stake here. True, Christian conservatives have decided that Mr. Trump's prior history and irreligiosity are things to be overlooked. But to call that a compromise is to elevate the importance of that factor far higher than history would warrant.

Wednesday, March 27, 2024

Hype Technology

"The core problem is that GenAI models are not information retrieval systems," [AI ethics expert Rumman Chowdhury] says. "They are synthesizing systems, with no ability to discern from the data it's trained on unless significant guardrails are put in place."
Chatbot letdown: Hype hits rocky reality
Of course generative artificial intelligence doesn't live up to the hype. If it did, one could make the case that it wasn't hyperbole. But it generally turns out to be hype, if for no other reason than there seems to be something deeply alluring about the idea that people can build something, and it will just solve a bunch of problems, and not have any downsides. In the time that I've been in the technology sector, I've seen a consistent discomfort with the idea of trade-offs, even when the reality of them is metaphorically staring people in the face, and generative artificial intelligence is no exception.

When I've experimented with Microsoft's Copilot system, I haven't found it to go off the rails in the way that many earlier systems did, but it is verbose, because its default is to take in whatever data it's given and to synthesize more. Back when I used the tool to help me translate an old story snippet I'd written in Japanese into English, it volunteered a follow-up prompt, offering to tell me how the main characters met. And then it synthesized a story; it had no other choice, as the characters it offered to tell me about had no existence beyond the short passage that I'd written more than two decades ago; there couldn't have been any information about them in the training data. And I can see how that lends itself to an interpretation that the model "knows" things, and that asking it more questions will reveal more information. But that requires seeing it as something more than a remarkably sophisticated auto-complete function. Which it isn't.

That said, there are several things that one can do with a really sophisticated auto-complete function that will make businesses, and potentially people, more efficient and productive. But for right now, they're mainly limited to applications where it becomes evident fairly quickly whether the system has it correct or not. I knew that the AI systems made errors in my initial experiment, determining the length of time between two dates, because I was asking a question whose answer I'd already worked out for myself; what I was looking for was the degree to which the various models disagreed with one another. But if I'd been asking because I genuinely didn't know, and had used the answers provided for anything important, that could have spelled trouble.
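The date arithmetic itself, for what it's worth, is trivial to verify deterministically. A minimal Python sketch (using hypothetical stand-in dates, not the ones from my actual experiment) of the kind of ground-truth check I mean:

```python
# Computing the exact number of days between two dates, so an AI
# system's answer can be checked against a known-correct value.
# The dates below are hypothetical stand-ins.
from datetime import date

start = date(2024, 1, 1)
end = date(2024, 3, 27)

delta = end - start
print(delta.days)  # → 86 (2024 is a leap year, so February contributes 29 days)
```

The point isn't the code; it's that a question with exactly one checkable answer makes a useful probe, because any disagreement among the models is immediately visible.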

The term generative artificial intelligence is a misnomer because the systems involved are not intelligent. As Ms. Chowdhury notes, the systems lack the native ability to discern things from the data they were trained on. But they're treated as thinking, and as knowing, because that's how they appear to people. Copilot, when it tells me that it's taken a "whimsical journey" (otherwise known as synthesizing random details about something), behaves as though there is a creative intellect in there, somewhere. And I think that this, combined with the speed of its answers, makes it easier to see the system as smarter than a person. And since any problem can be solved if one is just smart enough...

Except, that's not true. There are plenty of problems in the world that are going to take more than a person, or a machine, being clever. I was listening to a podcast about Haiti today. That doesn't seem like a problem that exists mainly for want of a clever solution. Likewise, the question of workers displaced by the continued adoption of automation is not a problem that will yield to cleverness. Like many things that don't live up to the hype, the problem is overly optimistic impressions of what the technology can do.

Monday, March 25, 2024

Faith Based

Every year, Pew Research Center conducts a study on both governmental restrictions on religion and social hostilities involving religion. This year's report made for interesting reading.

The report was at pains to point out that it "is not designed to determine which religious group faces the most persecution." Which was a shame, really. Clearly they understood that religious partisans would be combing the report looking for evidence to back up their claims to being The Most Oppressed, presumably in the service of demanding more resources and protection for themselves. Granted, the report offers the opportunity to indulge in a sense of victimization. It notes that Denmark requires that animals be stunned prior to being killed for meat production, and that this makes it more difficult to obtain Kosher or Halal meat, but it doesn't specify why this is about government harassment of a religious group, as opposed to an animal welfare/anti-cruelty measure. Similarly, it calls out restrictions on the ability to claim conscientious objector status (or be exempted from otherwise mandatory military service) or to hold in-person gatherings in the face of public-health orders to the contrary to be examples of government interference in worship. This gives the impression that simply having to follow the same rules as everyone else can be viewed as governmental restriction on religion.

Likewise, the report seems to code simple disputes between communities that happen to have different religious beliefs as a form of social religious hostility. For example, it was noted that Bolivia's social hostility score went down because "there were no reports coded in 2021 that Protestant pastors and missionaries were expelled from Indigenous communities for not observing Andean spiritual beliefs." (This raises an interesting question: when one group wants to proselytize, but the leadership of another group does not want their community proselytized to, who can claim the hostility? While the expulsion of missionaries seems like a clear case, it's worth noting that for many missionaries, the end of other religious beliefs is their stated goal.) In Nigeria, conflicts between “predominantly” (quotes in original) Christian farmers and Muslim herders are framed as sectarian social hostility, despite the fact that conflicts between herders and farmers have been taking place for nearly the whole of human history.

None of this is to say that the situations and incidents mentioned aren't religiously motivated (especially the expulsion of missionaries) but I did find myself questioning what the expectation of religious entitlement was. Governments enact laws with disparate impacts due to other factors all the time, and fighting between groups is pretty much the one constant to be found in human history. Why people should expect that, for example, only secular buildings should be subject to vandalism, or that clergy of faiths that claim an exclusive understanding of truth would refrain from public criticism of attempts to propagate "incorrect" teachings is never addressed.

Religion is often viewed as being a higher-stakes enterprise than other aspects of one's daily life. If I attempt to convince someone that they might also enjoy building plastic model kits, someone close to them might object on the grounds that it can be expensive or time-consuming. But were I to attempt to convince someone that their deity isn't real, I could be seen as attempting to set them up for a punishing, rather than pleasant, afterlife, or some other form of real spiritual harm. Not everyone believes that all religions are equally valid. (Or, as the late Christopher Hitchens put it, equally demented.)

And that might be the most curious thing about the report. It posits a world in which no-one ever fights over religion; one in which immoral teaching and leading people away from true faith may be possible in the abstract, but aren't seen as worthy of any real-world actions. The stakes are not simply low, they're non-existent. But that's not how religion in the world actually works. And it's unlikely to ever do so.

Friday, March 22, 2024

Springtime

Taken while walking around the neighborhood. Spring came early this year, and settled in to stay.
 

Wednesday, March 20, 2024

A You Problem

I was reading "The psychological battle over trauma" as part of a deeper dive into the phenomenon of "therapy speak," and came across the following passage:

The psychotherapist Alex Howard, author of It’s Not Your Fault, distinguishes between overt trauma, as described by Bonanno, and covert trauma, this less tangible, nevertheless traumatic experience. [...] But this covert trauma, for an increasing number of clinicians, explains why we are the way we are. And through this interpretation, we are moving our conception of mental health away from “what’s wrong with you” and toward “what happened to you?”
The title of Mr. Howard's book is telling, and perhaps points to the root of the problem, at least here in the United States. American society is, in a number of ways, focused on efficiency: how to derive the highest returns from any given set of inputs. But it also manifests itself in a drive to decrease the inputs while maintaining the same returns. And labor is one such input.

The material needs of the United States can be satisfied, generally speaking, without needing the entire populace of the nation to work. One could make the case that there is unrealized demand in some or all sectors of the economy, as the United States' high levels of inequality have the effect of suppressing demand at the lower levels of the income and wealth distributions, but as things are currently structured, the United States effectively has an excess of labor capacity. The fact that the United States has a weak system of social supports, given that it is an industrialized and expensive society, means that this excess capacity becomes competition for work. Likewise, technological advances (and differentials in education) have led to that competition being international. The result being that an unemployed American can find themselves competing with workers literally on the other side of the globe for opportunities. For those people who hold opportunities, and thus can distribute them to others, this creates a wealth of choices born of a flood of candidates. And so a means of discrimination is required. And "something is wrong with this person" is as good a means of sorting as any.

Part of the rationale behind the adoption of "therapy speak" is an overt effort on the part of people to say "Whatever flaws you may perceive in me, they aren't my fault. Nothing's wrong with me; something happened to me." This is a sub-optimal viewpoint on the subject, because it buys into the hostile framing of the underlying concern that the "judge" brings to the question, namely: "I have learned this bad thing about you, and it legitimately disqualifies you from the opportunity to work to support yourself." Of course, these sorts of questions extend beyond work; people deploy variants on "It's not my fault" in all sorts of situations, and many of them serve to legitimize what should be understood as the basic problem: the continuous need to find fault with others as a means of justifying the choices one makes concerning them.

Were it up to me, I'd steer society away from its current apparent level of buy-in to a culture of stigma. But I understand that it's a tough sell. While I was never a big fan of Senator Bernie Sanders, I think that his perception that one of the primary factors driving prejudice is the perception of scarcity is largely correct. While it's true that there are people out there for whom preventing people from meeting their needs is an end in itself, for many people the competition for resources pushes them to develop ad-hoc heuristics that sort themselves into the group of people who are deserving of access, while keeping enough other people out that a perception of shortage is averted. In other words, instead of viewing scarcity as the problem to be solved, resource distribution to the undeserving is the concern. Assigning stigma to others, then, becomes a solution, even if it isn't a good one.

Monday, March 18, 2024

Torched

New York City drugstores are so rife with plastic lockup cases that one crook was forced to use a blowtorch to blast one open, making off with $448 in skin care products.
Retailers pile on new tech to deter theft
I'm going to admit that I hadn't seen that one coming. I understand investing in some amount of equipment and going through some amount of effort in order to get one's hands on something, but blowtorching open a display case for less than $500 in stuff (especially given that it won't sell on the street for that much) strikes me as over the top. But I suppose that it shouldn't. After all, whether one sees retail thieves as done in by economic conditions or systems that have rendered them unemployable, or as simply too lazy or venal to find honest work, $450 in "free stuff" is attractive all the same. And blowtorches aren't that expensive.

I'm originally from the Chicago area, and I've been in parts of the city that could teach prisons a thing or two about security. It's strange to walk through a neighborhood where literally every window accessible from the ground has heavy bars to prevent people crawling in, or to go into a fast-food restaurant where the counter sports thick, bulletproof plexiglass, with a turntable through which money and food can be passed. Strange, but apparently not newsworthy.

What I think has been driving the current push of news stories about retail theft is precisely the fact that it's spread from benighted and forgotten neighborhoods on the South Side of Chicago to the suburbs and downtown areas where wealthier people shop. So it's now confronting people who can profess to be shocked and upset by a state of affairs that other people have been attempting to deal with for some three decades, if not more. And shock and upset drive attention.

Personally, I think this is the sort of thing that calls for solutions journalism, and not simply the solutions of increased surveillance and trading away personal data. But a solution to the problem of... I'd say poverty, but I think it's more a matter of leaving people behind. Of course, that presupposes that there is a solution to a situation that's persisted as long as it has precisely because it works for people. Or, at least, for enough people that the will to pay the price of fixing it isn't there. Anti-theft technology will do a good enough job at a good enough price to be a viable patch on a difficult problem. It's sometimes disappointing that that's all society asks.

Sunday, March 17, 2024

Blocked

I suspect that one of the barriers to reducing, let alone eliminating, poverty is simply this: The exploitation of poverty creates active incentives to perpetuate poverty, because poverty itself becomes a resource. And one that's much more renewable than many others one might name. As I've grown older, I've come to believe that the "wealthy world" including, or perhaps especially, the United States, has built its economy in such a way that were global poverty to go away tomorrow, the system would quickly become unsustainable. The idea that the world's leading economies would structure things in a way that could actually solve global poverty presupposes that people who have built their affluence on the exploitation of poverty are careless enough to kill even a shabby goose that lays golden eggs.

And, to be sure, I'm not simply talking about nameless, shadowy "élites" in a figurative smoke-filled backroom somewhere. Irrespective of how precarious they feel their lives might be, a lot of middle class Americans (and likely some people for whom the middle class is out of reach) rely on the low labor costs, and thus poor returns on labor, that poverty enforces to have the comforts they do have in their lives. And people are loath to surrender their comforts, even when it can be shown that they come at a cost to others.

This strikes me as a recurring theme in life. When I was young, Generation X's "generational anxiety" was about the nation's level of debt, which began to climb sharply after Ronald Reagan's slashing of income (and other) taxes. But that anxiety only seemed to last long enough for the cohort to understand that if government spending was to be brought under control, "we" would need to be the ones on whom less was spent. And the political establishment understood that; so while there's political benefit in pointing the finger at someone else and vowing to raise their taxes, telling the public as a whole it's time to pay the buffet bill is a non-starter.

There are constituencies for the continuation of all of life's big problems. I'm curious how many of them I'm a member of.

Thursday, March 14, 2024

Rewrites

One of the worries that I'd heard voiced about Generative AI concerned the biases that might be present in the data. This prompted me to wonder, what could you understand about the dataset and/or the training that went into it, given the sorts of responses a system gave to prompts?

To be sure, I have no idea of how to determine this... I simply don't know enough about how these systems work "under the hood" as it were. I'm a dabbler, not an engineer. But since the genesis of this series of experiments was someone finding that different systems gave different answers to the same prompts, another test came to mind. Last week's experiment had Copilot, Perplexity, Gemini and ChatGPT 3.5 translating a snippet of a document I'd written in romanized Japanese some two decades ago. This week, I gave each of them the entire translation, as created by Copilot, and had each system re-write the text. In a nutshell, the text is a brief narrative about Tom, who works for a bank in Sim City.

Copilot and Gemini both did two things that stood out from the other two: 1) they created titles for the story, and 2) they re-ordered some of the details of the story. In the original, I start by noting that Tom works for a bank, but don't note that he's the branch manager until later. Copilot and Gemini note both Tom's workplace, and his position when they introduce him as a character. Perplexity and ChatGPT 3.5 had their own similarity: they created very similar text. The first sentence for each matched; literally word-for-word, and there is a sentence in the middle of the story where the two systems varied only by a single word.

Gemini's rewrite was brief, managing to trim the text by almost 40 words, nearly a quarter of the total. Copilot, conversely, was the most verbose of the systems; it was the only system where the re-written text was longer than the text I'd submitted, by about a dozen words. Mainly because it tended to add little flourishes into the final document, but also because it cut the fewest corners in noting the details of the original text. To be sure, however, all of the systems had trouble with the details, sometimes appearing to miss the nuances.

In the end, despite what I said earlier, I think I can start to understand something about the "interior" of each system from this test, given that I'm already starting to build a set of expectations of what each would do with a given input. I expect it will take several more trials to distill what seem like "personalities" into a distinct set of rules that each operates by. The fact that I'm not an engineer would make the task longer, but it seems doable. Which makes sense; it's the differentiator of systems that are otherwise doing the same thing.

Wednesday, March 13, 2024

Uncommitment

Today is the 13th of March, and that makes it, among other things, the day after the Washington State Presidential primary election. Not that it matters; both the Democratic and Republican primaries, to the degree that they were actually contested, were already over on the national level.

But there's always something interesting going on, and yesterday, I came across this sign, stuck in the parkway near a local grocery store:

I don't know when a vote for an "Uncommitted Delegate" became understood as a sort of vote against continuing Israeli military action in Gaza. And I suspect that if I'm not clear on this, the campaign to re-elect President Biden to another four-year term isn't either.

It's a strange way to conduct politics: the Biden campaign is meant to take away from this that there are voters who are unhappy with the relationship between the United States and Israel, but then what? In the event that the conflict between Israel and Hamas simply drags on, an incoming Trump 2.0 administration is unlikely to request that Israel dial things back... the Evangelical portion of the Republican/Trumpist base tends to be fairly Zionist and unconcerned with the plight of the Palestinians. So is the threat here that people will withhold their votes in November?

Given the nature of the Electoral College and Washington State politics, one may as well put Washington's 12 Electoral College votes in the "Joe Biden" column right now. This place hasn't been competitive for decades, and isn't likely to become so anytime soon. So there's no real leverage for a small number of ceasefire supporters. (If there were more of them, and they had more resources, their message would be more widespread.)

That makes this a signal that has a difficult time carrying any information. Politics tends not to work in the way that many people want it to; policymakers tend to have a level of insulation from all but the most intense levels of direct public opinion. Given this, signals have to be clear and unambiguous in a way that common public channels rarely are.

Sunday, March 10, 2024

Unscriptured

Back when I first started this project, I wrote about a group of activists who met every Saturday to protest the wars in Iraq and Afghanistan. And I'd dropped in on them from time to time, to see how things were going. With the end of active military operations in the two nations, the protests wound down.

But with the war between Israel and Hamas, and the common idea that Israel is dependent on the United States to the degree that President Biden could effectively end Israel's ability to prosecute the war, protests are on again, in the same place at the same time.

I made time to drop in on the protests this week, mainly to get some photographs; it's the sort of thing that I find to be worth recording. I chatted with the protestors for a bit, noted both their small numbers and the absence of any counter-protests, and snapped a few photographs. Far from the paranoia of the mid-00s, a couple of the protestors wanted to make sure that I was able to get clear pictures of their messages.

I've had four years of theology classes and I'm pretty sure I don't recall that being in the Gospels.
I'm still unconvinced that street protests like this, especially when they are small scale, actually get anything done. But I will give them credit for persisting.


Thursday, March 7, 2024

Romance On Demand

Experimenting with generative A.I. reminds me of how much I enjoyed software testing. I find coming up with interesting, and plausible, use cases and seeing what the systems do with them genuinely fun.

Back when dinosaurs roamed the Earth, some friends and I took Japanese lessons. For one of the exercises, I wrote a short story about a bank manager named Tom. Because we were only learning spoken Japanese, I wrote it in rōmaji, or Latin script. Fast forward two-plus decades, and I barely had any idea of what it said. So I dropped it into Google Translate, which recognized it as Japanese, but its "English" translation was basically just a trimmed version of the original rōmaji text.

So I figured I'd see what the generative A.I. systems would make of it. I asked Copilot, Perplexity, Gemini and ChatGPT 3.5 "What does this say:" and then dropped in a snippet of the text. Gemini took the prompt to be a request for information about Tom, and noted "I do not have enough information about that person to help with your request." Perplexity's translation was a bit redundant in places (and somewhat confusing for that), but it was close to the answers that ChatGPT and Copilot gave.

Getting Copilot's answer, however, was a bit of work. It initially took the romanized Japanese I provided, and wrote it out in Japanese characters, using Kanji, Hiragana or Katakana as (I presume) appropriate, so I had to then ask it to translate that text into English for me. It seemed to be fairly true to what I sort of remember writing back in the day, so I took it a step further and dropped in the entire story, which had more details of Tom's commute and how he spends his weekends and includes another character, Noriko. At the end of another two-step translation, Copilot presented some interesting choices for follow-up prompts, like: "How did Tomu-san and Noriko-san become friends?" Curious, I clicked on it.

It was an interesting exercise in generative pre-training hallucination, as Copilot spun up the plot of a cozy, cheesy romance novel, with Tom and Noriko as the stars. (Forming an interesting contrast with Gemini.) I can see how building an LLM that's programmed to allow it to expansively "infer" things from a short text sample can be useful, especially given that Copilot clearly noted that it had engaged in a "whimsical journey," but I think that I would have built the system to make it clear up-front that the offer is for what's effectively a work of speculative fiction. That would also give the system a chance to ask just what sort of fiction the user wants; as I would have chosen a much different theme than Copilot's derivative romance plot.

Wednesday, March 6, 2024

Two-Sided

It's been interesting how much bandwidth the Israel-Hamas war, and by extension, the conflicts between the Israelis and the Palestinians more broadly, has been taking up here in the United States. I've listened to a number of podcasts on the events, even without doing anything to seek them out. And one thing that's occurred to me from listening to multiple people discuss both the current conflict and its history is the idea that both sides will appeal to other parties when it suits them, while denying their legitimacy when it doesn't.

One of the arguments that I'd heard from people supporting the Israeli side of the conflict was that Israel agreed to the terms of the United Nations partitioning of the former Mandatory Palestine, while the Palestinians (and other Arabs in the area) did not. The common Palestinian counter to this is that the United Nations had no legitimate right to hand over Palestinian land to the Jewish residents of the area. It makes sense for the Israelis to support the right of the United Nations to give them land, and for the Palestinians to dispute that right, but both positions are fundamentally self-serving.

Likewise, the Palestinians and their allies in the international community feel that the United Nations should be able to call for a binding cease-fire. For the Israelis, on the other hand, the only international voices that matter are those that support them, most notably the United States, who they rely on to block any criticism of them in the Security Council. Again, this is understandable, even though the positions are self-serving.

In the end, this will likely be one of those situations which has no good end. Both parties feel that they have the more legitimate grievance, and their historical views of the conflict tend to have starting points carefully chosen to support their specific viewpoints. Just as one man's war crime is another man's smart fighting, one man's justified reprisal attack is another man's atrocity. No one ever sees themselves as abusive or evil, and rarely, if ever, do they understand their reasoning as self-serving. That's left to the people they don't listen to.

Tuesday, March 5, 2024

Marked

 

Graffiti is, for the most part, impenetrable to me. I can admire the clear artistry that goes into some of it, but the message is generally beyond me. Such is the case here, with the spray-painted scrawl I encountered on the boarded-up front of what had been a Staples office-supply store in the area. I presume that odd mix of Christian and Egyptian motifs means something (other than the tagger being mentally ill), but I have no idea what that something might be.

This sort of weirdness pops up a lot, here in the Seattle area, and more so as one gets into the city proper. It's the sort of thing that local Republicans decry as the beginning of the end of any semblance of Law and Order in the region, while local Democrats often feel it doesn't rise to the level of criminality. Political arguments (and talking points) aside, it does seem as though the area is in something of a tug-of-war over whether the local architecture should be a canvas or not.

Sunday, March 3, 2024

Buzzy

Artificial Intelligence is any number of things. Two of them are 1) a technology and 2) a buzzword. This is common; it happens over and over again. When a technical term becomes a buzzword, it's spoken of as if the things one does with it are somehow fundamentally different than they were previously. Take e-commerce. Placing an order with a vendor in another location is certainly easier and faster via the World Wide Web than ordering from, say, a physical mail-order catalog. But the fundamental processes on the back end are more or less the same. And the goods and services aren't any different. Ordering a book on Amazon has any number of similarities to going over to Barnes and Noble and requesting a book.

I mention this because I found a brief article in The Week about "AI"-generated pornography. It quoted Parrots Lab founder Naja Faysal, and I decided to look up the Medium post that the article referenced, to read it for myself. One of the points that Mr. Faysal makes is that recent advances in generative AI raise "critical ethical questions about the representation of consent, the portrayal of healthy sexual relationships, and the potential impact on human empathy and connection." Fair enough. But doesn't pornography created by other methods raise those same "critical ethical questions"? I'm not seeing what this one specific technology is doing that changes the ethical landscape around depictions of human sexuality for the viewer's pleasure. The invocation of AI here seems to be more a means of getting people's attention than a genuinely salient factor.


Friday, March 1, 2024

Or Not To Be

For this week's random act of large language model experimentation, I wanted to know how the systems would react to a request for a model of a thing, rather than the thing itself.

To this end, I asked Copilot, Perplexity, Gemini and ChatGPT 3.5 two questions:

1. Give me a model of a joke.
2. Give me some text in the form of a joke, that is not actually a joke.
(Bing and Google both did their basic Search Engine thing, and so aren't included here.)

Copilot and Perplexity gave me a bog-standard "dad joke" for both questions. What was interesting was that they gave the SAME joke, word for word, as the answer to Question 1.
Why did the scarecrow win an award?

Because he was outstanding in his field!
The only difference was that Copilot tacked a "😄" onto the end. It also told me that this was a "light-hearted joke." In case I missed it, I suppose.

Gemini and ChatGPT both offered a simple "Setup and Punchline" model for Question 1 with the setup being a question, and the punchline being a statement. This was, in fact, the format that all of the systems used in their jokes. (I had been expecting at least one "knock-knock" joke to make it in.) While ChatGPT offered up a vegetable-related dad joke as an example, Gemini followed up with a setup, but left the punchline blank.
Here's an empty model you can fill in:

Setup: A man walks into a library and asks the librarian for books about paranoia.

Punchline: ____________________
To be sure, it seemed more like a test than an example. I suspect that coming up with a good punchline to that would prove difficult.
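The "setup and punchline" model that both systems described can be sketched as a simple data structure. This is my own hypothetical Python sketch of that model, not anything the chatbots produced; the `Joke` class name and `render` method are inventions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Joke:
    """A minimal sketch of the 'setup and punchline' joke model:
    the setup is posed as a question, the punchline as a statement."""
    setup: str
    punchline: str

    def render(self) -> str:
        # Present the joke in the two-line format the chatbots used.
        return f"{self.setup}\n\n{self.punchline}"

# The scarecrow joke, word for word, as an instance of the model:
scarecrow = Joke(
    setup="Why did the scarecrow win an award?",
    punchline="Because he was outstanding in his field!",
)
print(scarecrow.render())
```

Gemini's "empty model" is then just an instance with the punchline left blank, which is presumably why it read more like a test than an example.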

The two systems' answers to Question 2 were interesting, since they seemed to presume that pretty much any two sentences in "question then statement" format qualified as being "in the form of a joke." Gemini asked if I knew that 771 million people lacked access to clean drinking water, then told me they were mostly marginalized communities. ChatGPT offered up wordplay. Both of them then explained why the text wasn't a joke; Gemini explained that it was to leave "space for reflection instead of laughter," and ChatGPT informed me of its wordplay.
Why did the computer go to the doctor?

To get a byte checked out!

(Note: This is not actually a joke, but rather a play on words that uses computer terminology.)
But the scarecrow joke also strikes me as wordplay, rather than "providing a clever or unexpected resolution," which was part of ChatGPT's definition of a joke.

Gemini is the clear winner of this round, being the LLM that seemed to have the best ability to stick with the idea of not actually giving me a joke. All of the systems surprised me with the very narrow view of humor that they offered; I don't think that any of my favorite jokes from television or stand-up would qualify. But this is the thing about LLMs: since they're operating on probability, and the format they offer is one that many child-friendly jokes use, it makes sense that it's the most common, and hence most probable, type in their data sets.
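The frequency argument can be made concrete with a toy sketch. The counts below are invented for illustration (no claim about any real training corpus); the point is only that a system favoring high-frequency patterns will keep reaching for the dominant format:

```python
from collections import Counter

# Hypothetical counts of joke formats in some training data.
# If "question setup / statement punchline" dominates, a model
# that favors the most frequent pattern will keep producing it.
formats = Counter({
    "question setup / statement punchline": 900,
    "knock-knock": 60,
    "one-liner": 40,
})

total = sum(formats.values())
most_common_format, count = formats.most_common(1)[0]
print(most_common_format, round(count / total, 2))
```

Under these made-up numbers, the dominant format accounts for 90% of the examples, so it's no surprise that every system reached for it, and that the knock-knock joke never showed up.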