Sunday, January 7, 2024

The Good Machine

I was listening to a recent episode of EconTalk, in which host Russ Roberts had a conversation with University of Toronto psychologist Paul Bloom on the subject of "Can Artificial Intelligence Be Moral?" One of the points that Mr. Bloom made was that (and, of course, I'm paraphrasing here) people don't really want artificial intelligence, learning machines or what have you to be "moral" in the sense of upholding the entirety of what someone thinks a good moral code to be. It's more that they want them to be deferential to their interests as they perceive them, in the long term but also in the immediate term, so that the machines don't feel like a serious constraint on their freedom of choice. This is because, in the end, AI, whether that means the generative "AI" systems that we have now or the artificial general intelligence that may (or may not) be created in the future, is seen as a tool, and the purpose of tools is to extend our capabilities, not constrain them.

Earlier in the conversation, Messrs. Roberts and Bloom had been discussing what I feel is the most important upshot of the statement, attributed to Socrates, that no one intentionally does evil. They noted that many people have done things that others have considered to be unspeakably evil, but that those people themselves felt that what they were doing contributed to the good, often because their religion told them so.

As an aside, that triggered my curiosity about the degree to which divinities themselves are seen as, basically, tools. Not consciously, of course. I wouldn't expect to walk up to a believer in Shinto and have them say to me that they considered Amaterasu Ōmikami to be simply a tool of human interests, any more than I would expect a Muslim to say the same of Allah. But it's fairly common for religion to manifest in this way, with the understanding that one of the things that a deity wants is the thriving of their worshipers, even if that comes at the expense of others. This isn't to say that religiously defined morality carries no constraints; but those constraints tend to be personal ones, justified as being for the good of the community. I don't believe I've seen religious strictures that explicitly place the will of the divinity over the interests of the community at large; the two tend to be conflated. I, however, am not a scholar of religion, so it very well may be the case that there are religions where morality and community interest don't align.

In any event, the fact that AI tools are, well, tools means that the question of ethics in AI is going to be moot; it's nice to think that technology will find a way to constrain bad actors by refusing to do the bad things those actors request of it, but that's really a matter of attempting to force obedience to a certain understanding of right and wrong. And most people would resent living in a world where one's tools enforced obedience to someone else's ideals (the young-adult dystopian novel series pretty much writes itself). This is really where the lack of a universally accepted morality comes into play. Ethical AI will, more or less by definition, wind up enforcing a particular understanding of ethics onto people who may not hold that same understanding. Either that, or its understanding of ethics will be so broad and vague as to not be generally useful.
