Wednesday, July 10, 2024

Variable

The thing about generative automated systems ("A.I.") is that they're something of a black box. Prompts go in, outputs come out, and what happens in between is anyone's guess. That can be a problem. If you ask a generative automated system a question, and I ask the same question but receive a different answer, what do we make of that?

There was a LinkedIn post today that asked: "Why is the chairman of Toyota anti-electric vehicles?" It showed ChatGPT being given that prompt, and the answer it gave. "The chairman of Toyota, Akio Toyoda," it said, "has expressed skepticism about a full transition to electric vehicles (EVs) for several reasons:" It then proceeded to list the reasons.

But since when is "skepticism about a full transition to electric vehicles" equivalent to being "anti-electric vehicles"? Equating those two things struck me as odd, though it's in keeping with colloquial English, so I could see how ChatGPT could assemble that answer. But I wanted to be sure, so I entered the prompt into ChatGPT myself.

It told me: "The perception that the chairman of Toyota, Akio Toyoda, is anti-electric vehicles is not entirely accurate." It went on from there to list reasons. Then, in summarizing, it noted: "It's important to note that while Toyoda has expressed reservations about EVs, Toyota is not completely opposed to electric vehicles."

This was more like it. But why the difference between the two answers? Why should one run of the same question produce a more careful, qualified answer than another? I suspect this is going to create concerns not only about how trustworthy generative automated systems are, but about the people who use them. Because maybe ChatGPT really did give two different answers to the same question, or maybe one of us is a dab hand at Photoshop. How would you know? And this is something that the companies that offer generative systems are going to have to deal with.
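Part of the explanation, for what it's worth, is that these systems don't compute a single fixed answer; they sample each word from a probability distribution, and with a nonzero "temperature" setting the same prompt will usually yield different text on every run. Here's a minimal sketch of that, assuming the OpenAI Python SDK; the model name and temperature value are illustrative, and whatever settings the ChatGPT web interface actually uses aren't public.

```python
# A minimal sketch, assuming the OpenAI Python SDK. The model name and
# temperature here are illustrative assumptions, not ChatGPT's settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Why is the chairman of Toyota anti-electric vehicles?"

# Ask the same question twice. With temperature > 0, the model samples
# from its token distribution, so the two answers will usually differ.
for run in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    print(f"--- Run {run + 1} ---")
    print(response.choices[0].message.content)
```

None of which helps the reader of a screenshot, of course, since the sampling happens out of sight.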
