Don't Think So
File under: Didn't everyone already know this?
I'm not sure why it took a group of Apple researchers to determine that current generative "A.I." tools are not capable of genuine reasoning. I started referring to "A.I." as "generative automation" several months ago because, as I've said before, while it's a wonderful example of human artifice, it's pretty clear that there's no intelligence there. It's basically auto-complete on steroids.
Anyone who has used generative automation tools for real-world applications should understand that they're not capable of human-like reasoning. When I asked various LLM-driven chatbots how many grocery stores there were in Washington State, none of the tools I tested took the initiative to find the numbers and add them up, and only one was able to find an answer that someone else had already worked out.
"The Truth About AI" is pretty much the truth about anything: The hype is just that... hyperbole. I'm not going to say that anyone who says that LLM-driven "AIs" can actually reason is engaging in hype; after all, there was the one Google engineer who seemed pretty convinced. But as someone who has simply sat through the standard accessible-to-laypeople explanations of how generative automated systems work, it was clear to me from the jump that there was no actual reasoning going on. And I've never really heard any differently, so I'm curious who was lying, and who they were speaking to. Any why they believed it. (The Medium article in the screencap is members-only, it turns out, so this is all I've seen of it.)