Friday, January 17, 2025

Misconceivable

Asking questions of generative automation chatbots may or may not be a good way to learn about the world. But it may be a useful way to understand the training data that went into the language models on the back end.

It occurred to me to give Perplexity, Chat GPT, Gemini and Copilot a simple prompt: "10 Common misconceptions," and see what came back. After all, there are a lot of different misconceptions floating around out there in the ether, and perhaps I could glean something interesting from the answers. I figured that there would be a decent level of overlap, so out of a possible total of forty, I was expecting maybe thirty unique answers. So I was a bit surprised when I only received a total of twenty-one.

Of the four, Chat GPT produced no unique answers; all of its answers were also on the lists that the other chatbots presented to me. It shared five items with Perplexity, six with Gemini and seven with Copilot. Chat GPT's full list is as follows:

  1. Humans only use 10% of their brains
  2. Shaving hair makes it grow back thicker
  3. Vikings wore horned helmets
  4. The Great Wall of China is visible from space
  5. Napoleon was extremely short
  6. Goldfish have a 3-second memory span
  7. Bats are blind
  8. Cracking your knuckles causes arthritis
  9. Lightning never strikes the same place twice
  10. We only have five senses

Items 1 and 2 were on all four lists, which struck me as interesting, given that I was of the impression that #1 had been pretty thoroughly debunked some time ago. After that there was an interesting split. Chat GPT shared items 3 through 5 with Perplexity alone, items 6 through 9 with both Gemini and Copilot, and item 10 only with Copilot. The high correspondence between Chat GPT and Copilot makes a lot of sense, given that Copilot is a variation on Chat GPT. That may also be true of Gemini, but I'm not as certain about that.

Interestingly, outside of items 1 and 2, Perplexity and Copilot had no overlap, and Perplexity and Gemini had only "Sugar makes children hyperactive" in common.

The remaining ten "unique" misconceptions were fairly evenly distributed among the three non-Chat GPT models.
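The overlap arithmetic above can be sketched with Python sets. The short labels below are my own stand-ins for the misconceptions, not the chatbots' exact wording (which, as noted later, varied between models):

```python
# Each model's list as a set of shorthand labels (illustrative, not verbatim).
chatgpt = {"10% of brain", "shaving thickens hair", "horned helmets",
           "Great Wall from space", "short Napoleon", "goldfish memory",
           "blind bats", "knuckle arthritis", "lightning strikes twice",
           "five senses"}
perplexity = {"10% of brain", "shaving thickens hair", "horned helmets",
              "Great Wall from space", "short Napoleon",
              "sugar hyperactivity", "fortune cookies", "fat Buddha",
              "Mozart twinkle", "head heat loss"}
gemini = {"10% of brain", "shaving thickens hair", "goldfish memory",
          "blind bats", "knuckle arthritis", "lightning strikes twice",
          "sugar hyperactivity", "hair grows after death",
          "cold causes colds", "all birds fly"}
copilot = {"10% of brain", "shaving thickens hair", "goldfish memory",
           "blind bats", "knuckle arthritis", "lightning strikes twice",
           "five senses", "penny drop", "seasons from distance",
           "ostrich heads"}

# Union gives the unique items; intersections give the pairwise overlaps.
print(len(chatgpt | perplexity | gemini | copilot))  # 21 unique, out of 40
print(len(chatgpt & perplexity))                     # 5 shared with Perplexity
print(len(chatgpt & gemini))                         # 6 shared with Gemini
print(len(chatgpt & copilot))                        # 7 shared with Copilot
```

The same approach confirms the narrower overlaps: `perplexity & copilot` contains only items 1 and 2, and `perplexity & gemini` adds only the sugar item beyond those.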

Perplexity had four:

  • Fortune Cookies are Chinese
  • The Buddha Was a Fat, Jolly Figure
  • Mozart Composed "Twinkle Twinkle Little Star" as a Child
  • You Lose Most of Your Body Heat Through Your Head

Gemini had three:

  • Hair and nails continue to grow after death
  • You can catch a cold by being cold
  • All birds can fly

"All birds can fly" struck me as strange; I'd never heard that one before. It seemed more like something that a small child might believe, after having been told that just about every flying animal of any size that they've seen was one type of bird or another. But flightless birds aren't exactly rare.

Copilot also had three:

  • Dropping a penny from a height can kill someone
  • Seasons are caused by Earth's distance from the Sun
  • Ostriches bury their heads in sand

One thing that I found interesting was the shifting usage of "Humans," "We" and "You" when the misconceptions were referring to people. And here it's worth noting that the various LLMs didn't always use the exact same verbiage for the same items. While Chat GPT related that "Humans only use 10% of their brains," Perplexity and Gemini favored "We only use 10% of our brain(s)" and Copilot went with: "You only use 10% of your brain."

But getting back to sussing out information about the models from the answers, it seemed fairly clear that a) the information was sourced primarily from English-language texts, likely many of them from the United States, and b) an effort had been made to avoid anything that might be controversial. I suspect that, at least in the United States, most lists of "10 common misconceptions" put together by actual people would have at least one item that smacked of either racial stereotyping or conspiracy theories, if not both.

Given that it was something that I did on a whim with a minimalist prompt, however, it was interesting and thought-provoking. Considering some of the items on the lists, I would have expected more variety, but twenty-one out of a possible forty is a decent enough showing.
