Variable
The thing about generative automated systems ("A.I.") is that they're something of a black box. Prompts go in, outputs come out, and what happens in between is anyone's guess. That can be a problem. If you ask a generative automated system a question, and I ask the same question but receive a different answer, what do we make of that?
There was a LinkedIn post today that asked: "Why is the chairman of Toyota anti-electric vehicles?" It showed ChatGPT being given that prompt, and then giving an answer. "The chairman of Toyota, Akio Toyoda," it said, "has expressed skepticism about a full transition to electric vehicles (EVs) for several reasons:" It then proceeded to list the reasons.
But since when is "skepticism about a full transition to electric vehicles" equivalent to being "anti-electric vehicles"? Equating those two things struck me as odd, though it's in keeping with colloquial English, so I could see how ChatGPT might assemble that answer. Still, I wanted to be sure, so I entered the prompt into ChatGPT myself.
It told me: "The perception that the chairman of Toyota, Akio Toyoda, is anti-electric vehicles is not entirely accurate." It went on from there to list reasons. Then, in summarizing, it noted: "It's important to note that while Toyoda has expressed reservations about EVs, Toyota is not completely opposed to electric vehicles."