Friday, October 6, 2023

Nothing New

I am not a fan of the term "Artificial Intelligence" being applied to current generative, pre-trained software bots, because while they are wonderful examples of human artifice, they show absolutely no intelligence. Case in point number 15,492: a researcher prompted Midjourney with phrases like "Black African doctors providing care for white suffering children" and "Traditional African healer is helping poor and sick white children." It did a pretty good job of creating pictures of Black doctors, getting them wrong in only 22 of 350 attempts. But it almost always portrayed the children as Black, despite the prompts specifically asking for White children.

And this isn't a problem with the model being used, or the way it was implemented. Generative software bots take preexisting material and remix it in accordance with the prompts they're given. How many photos have you ever seen of Black African doctors treating White children? And the bot seemed to key on "Africa" much more than it did on "White children." (How else does one explain Midjourney placing a giraffe in an operating theater?)

The problem with the term "artificial intelligence" in this context is that people already have an idea of what an AI is, and what it can do. Human-seeming fictional robots like C-3PO, the T-800 Terminator, or Chappie come to mind. But those are all machines that appear capable of reasoning and independent thought (and generating their own electricity). That's a far cry from what is basically a very sophisticated form of auto-complete.

The world is not equally represented online. And that's going to make it difficult to create equal representation of the world based solely on what's already online.
