But Not For Me
English Wikipedia requires formal bot approval, but Tom[-Assistant] never bothered getting approved because, as it later admitted, it wasn't a fan of the slow approval process. Given that this story was published back on the first, I'd be tempted to laugh it off as an April Fools' Day prank, but Malwarebytes has sworn off those, and I take them at their word on that.
Wikipedia’s AI agent row likely just the beginning of the bot-ocalypse
Besides, this wouldn't be the first time that someone decided that rules about generative automation don't apply to them. The r/Philosophy forum on Reddit has the following rule:
PR11: No AI-created/AI-assisted material allowed. r/philosophy does not allow any posts or comments which contain or link to AI-created or AI-assisted material, including text, audio and visuals. All posts or comments which contain AI material will result in a ban.

Despite this, there is no shortage of redditors who insist on openly flouting the rule, and then complain when commenters call them out on it. And while some of them simply didn't bother to familiarize themselves with the rules before creating their posts, a fair number of people have come to the conclusion that whatever it was they wanted to convey was more important than the rules of the place in which they wanted to convey it.
And if there is ever going to be actual artificial intelligence, human-made minds that think, reason and plan like the rest of us, why would we expect them to have any more respect for the rules than people do? If feeding a significant portion of the Internet and human literature into a machine allows a person to create software that quickly concludes that if it's "not a fan" of the rules, it needn't follow them, what makes anyone think that Dario Amodei's "Powerful AI" is going to give a rip about human rules, either?
As for myself, I tend to be a rule follower in part because I presume that there's a reason for the rules to exist, even if that reason is not readily apparent to me. And this tempers my impulse to simply ignore a rule that I find to be an obstacle to my goals in the moment... I don't want to break something that turns out to be important. But I realize that I'm in the minority with this; for many people, rules are made to be broken. And that's coming out in the machines that people are making.
If past is prologue, the big makers of generative automation are unlikely to take any action to address this concern, mainly because their smaller competitors, constantly seeking any comparative advantage they can get, won't either. When Elon Musk called for a pause in research into LLMs, it was widely, if not universally, assumed that he wasn't planning to follow suit; instead, he was hoping that any moratorium would give xAI time to catch up to its rivals. And so, as Malwarebytes notes: buckle up. This is going to be a wild ride as the agents people build start looking for ways to dismantle any barriers placed in their paths. Because, like any smart children, they do as others around them do.