Got It Dave; I Can Do That
While it strikes me as a little late to the party, the New York Times has picked up on the trend of "prompt injection" into résumés. The idea is, put simply, that job seekers are placing generative automation prompts into their résumés, hiding them by using white fonts on white backgrounds. When an employer's applicant tracking system (ATS) reads the document, it encounters the prompt, and acts on it.
I'm going go out on a limb here (although I'm convinced it's a very sturdy one, and say that this doesn't actually work, and hasn't for some time, if it ever did (which, honestly, I'm dubious about). It's always being presented as a clever job application hack, but in reality, it would be an enormous security flaw.
In the New York Times article, no-one in recruiting comes out and says that their generative automation systems had ever been successfully controlled by an injected prompt... the anecdotes all come from job seekers who claim to have landed interviews by having done it, or are simply counts of attempts. There are no quotes from engineers or the ATS companies. So this is really a story about people's perceptions of what's going on in systems. And that's worth keeping in mind.
Because if someone could hide a prompt in a random document and have a reasonable expectation that a system would act on that, it would be threat actor heaven. Why bother with social engineering or doing the work to find zero-day exploits when spamming a company with résumés could reliably result in an applicant tracking system opening a back door, exfiltrating data on all of the other applicants or sending over login credentials? It's worth understanding that not all of these systems are home-grown. If companies were selling systems this easily controlled by outside actors, there would have been highly-publicized lawsuits by now, especially considering these attacks have been talked about for months already.
And if this sort of thing worked, "A.I. poisoning" (mention of which is conspicuously absent) would be trivially easy, because it's not hard to hide messages in various forms of media. Steganography goes back to 400 B.C.; hiding machine-readable messages in files is nothing new, given that the bulk of most computer documents is machine-readable code that most human users never see. Sanitizing inputs is not something companies are just now figuring out.
These stories peddle a message of hope (There are jobs out there, and you can get them if you're smart!) and affirmation (You're cleverer than the idiots in corporate HR departments.), and play on people's suspicions that "A.I." is not all it's cracked up to be. But by focusing tightly on job seekers, and not potential threat actors, articles like this New York Times piece ignore the fact that if this has been taking place since "the first half of the year" it would no longer be a zero-day exploit. It would have been patched by now. Technology stories that avoid educating people on how technology actually works don't do anyone any good.
No comments:
Post a Comment