Wednesday, March 27, 2024

Hype Technology

"The core problem is that GenAI models are not information retrieval systems," [AI ethics expert Rumman Chowdhury] says. "They are synthesizing systems, with no ability to discern from the data it's trained on unless significant guardrails are put in place."
Chatbot letdown: Hype hits rocky reality
Of course generative artificial intelligence doesn't live up to the hype. If it did, one could make the case that it wasn't hyperbole. But it generally turns out to be hype, if for no other reason than that there seems to be something deeply alluring about the idea that people can build something that will just solve a bunch of problems without having any downsides. In the time that I've been in the technology sector, I've seen a consistent discomfort with the idea of trade-offs, even when the reality of them is metaphorically staring people in the face, and generative artificial intelligence is no exception.

When I've experimented with Microsoft's Copilot system, I haven't found it to go off the rails in the way that many earlier systems may have, but it is verbose, because its default is to take in whatever data it's given and to synthesize more. Back when I used the tool to help me translate an old story snippet I'd written in Japanese into English, it volunteered a suggested prompt, asking it to tell me how the main characters met. And then it synthesized a story; it had no other choice, because the characters it offered to tell me about had no existence beyond the short passage that I'd written more than two decades ago; there couldn't have been any information about them in the training data. And I can see how that lends itself to an interpretation that the model "knows" things, and that asking it more questions will reveal more information. But that requires seeing it as something more than a remarkably sophisticated auto-complete function. Which it isn't.

That said, there are several things that one can do with a really sophisticated auto-complete function that will make businesses, and potentially people, more efficient and productive. But for right now, they're mainly limited to applications where it becomes evident fairly quickly whether the system has it right or not. I knew that the AI systems made errors in my initial experiment, determining the length of time between two dates, because I was asking the question with the goal of having the systems tell me the answer; I already knew the answer, because I'd sorted it out for myself. I was looking to see the degree to which the various models disagreed with one another. But if I'd been asking because I genuinely didn't know, and had used the answers provided for anything important, that could have spelled trouble.
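For what it's worth, that kind of date arithmetic is exactly the sort of thing a few lines of ordinary code settle deterministically, which is how I could check the models' answers in the first place. A minimal sketch in Python, using placeholder dates rather than the actual dates from my experiment:

    from datetime import date

    # Placeholder dates; not the actual dates I asked the models about.
    start = date(2001, 6, 15)
    end = date(2024, 3, 27)

    delta = end - start
    print(delta.days, "days")                      # exact count of days between the dates
    print(round(delta.days / 365.25, 1), "years")  # rough conversion to years

No synthesis, no guessing; the answer is either right or it isn't, which is also why this sort of question makes a useful test.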

The term generative artificial intelligence is a misnomer because the systems involved are not intelligent. As Ms. Chowdhury notes, the systems lack the native ability to discern things from the data they were trained on. But they're treated as thinking, and as knowing, because that's how they appear to people. Copilot, when it tells me that it's taken a "whimsical journey" (otherwise known as synthesizing random details about something), behaves as though there is a creative intellect in there, somewhere. And I think that this, combined with the speed of its answers, makes it easier to see the system as smarter than a person. And since any problem can be solved if one is just smart enough...

Except, that's not true. There are plenty of problems in the world that are going to take more than a person, or a machine, being clever about solutions. I was listening to a podcast about Haiti today. That doesn't seem like a problem that's mainly here for want of a clever solution. Likewise, the question of workers displaced by the continued adoption of automation is not a problem that will yield to cleverness. Like many things that don't live up to the hype, the problem is an overly optimistic impression of what the technology can do.
