The AI did not like women
Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.
Garbage in, garbage out, ya know. Misogyny in, misogyny out. Underestimation of women in, underestimation of women out.
The trouble is, they used resumes from the past ten years to train the computer models, and most of those came from – you’ll never guess – men.
Top U.S. tech companies have yet to close the gender gap in hiring, a disparity most pronounced among technical staff such as software developers where men far outnumber women. Amazon’s experimental recruiting engine followed the same pattern, learning to penalize resumes including the word “women’s” until the company discovered the problem.
Did they name the recruiting engine “Damore”?
Jordan Weissmann at Slate looks at the implications:
All of this is a remarkably clear-cut illustration of why many tech experts are worried that, rather than remove human biases from important decisions, artificial intelligence will simply automate them. An investigation by ProPublica, for instance, found that algorithms judges use in criminal sentencing may dole out harsher penalties to black defendants than white ones. Google Translate famously introduced gender biases into its translations. The issue is that these programs learn to spot patterns and make decisions by analyzing massive data sets, which themselves are often a reflection of social discrimination. Programmers can try to tweak the A.I. to avoid those undesirable results, but they may not think to, or be successful even if they try.
I feel as if feminism has been patiently explaining this since before Gutenberg, only to be sneered at and called politically correct (or a cunt). Biases against women are everywhere, they’re baked in, it doesn’t work to try to operate as if that all ended in 1971.
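For the curious, here is a toy sketch of how that happens. This is not Amazon's system and the resumes and outcomes are invented; it just shows that an off-the-shelf classifier, fed historical hiring decisions that skew male, quietly learns a negative weight for the word "women" without anyone telling it to.

```python
# A toy illustration, not Amazon's system: all resumes and outcomes below
# are invented. A plain bag-of-words classifier trained on skewed hiring
# decisions ends up with a negative weight for "women" (the default
# tokenizer strips the possessive from "women's").
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer, rugby team captain",             # hired
    "software engineer, chess club president",           # hired
    "software engineer, men's soccer league organiser",  # hired
    "software engineer, women's chess club president",   # rejected
    "software engineer, women's coding society founder", # rejected
]
hired = [1, 1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

idx = vectorizer.vocabulary_["women"]
print("learned weight for 'women':", model.coef_[0][idx])
```

Run it and the weight comes out negative: the model is simply reproducing the skew in the data it was shown.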
H/t Screechy Monkey
Off topic, I’m afraid (though I am not surprised that AI algorithms blithely reproduce all the prejudices that fill the human mind), but here’s something from The Independent:
A transgender prisoner has been jailed for life after she admitted sexually assaulting two female inmates and previously raping two other women.
Leeds Crown Court heard Karen White, 52, was a “predator” who posed a danger to women and children.
White, who was born male but now identifies as a woman and is transitioning, was jailed for two counts of rape, two sexual assaults while being held on remand and one offence of wounding.
She used her “transgender persona” to put herself in contact with vulnerable women, the court heard.
The interesting part (well, at least to people like me) is this:
Nobody told it to take notice of the word “women’s”; it worked that out all by itself. This is one of the deeper problems of machine learning: the software can generate unexpected concepts and make decisions based on them, and there’s often no way to know this is happening. Sometimes these concepts can perform well in a task, then start to do badly when the input data gradually changes. Sometimes they can bias future learning even more than it is already biased.
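To illustrate, with invented data again, and assuming a simple linear model so the weights can be inspected at all: scrub the obvious gendered word and the penalty just migrates to whatever still correlates with it in the training data. You only find out if you know which coefficient to go looking for; with a deep model you mostly can't look at all.

```python
# Same toy setting as above, with the word "women" removed entirely. The
# bias in the outcomes doesn't go away; it attaches itself to whatever
# still correlates with the rejected group (here, an invented hobby).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "engineer, rugby team captain",        # hired
    "engineer, chess club president",      # hired
    "engineer, cricket league player",     # hired
    "engineer, netball team captain",      # rejected
    "engineer, netball club president",    # rejected
]
hired = [1, 1, 1, 0, 0]

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(resumes), hired)

# Rank every learned weight, most negative first. "netball" sinks to the
# bottom even though nobody told the model it was a gendered signal.
ranked = sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                key=lambda pair: pair[1])
for token, weight in ranked[:3]:
    print(f"{token:10s} {weight:+.3f}")
```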
There are lots and lots (and lots) of problems with algorithms running everything. Having no way to tell why particular decisions have been made is one of them. Trying to fix a bad process by training with the data that it produced is another (in the other room I mentioned the AI system lots of police forces use to predict crime. SPOILER: it picks black neighborhoods).
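Here's a crude simulation of that feedback loop, with made-up numbers rather than real crime data: two neighborhoods with identical underlying crime rates, a "predictive" model that is just last year's arrest records, and patrols that follow the prediction.

```python
# A crude simulation of the feedback loop; every number here is invented.
# Two neighborhoods have identical real crime rates, the "prediction" is
# just the existing arrest records, and arrests only happen where the
# patrols actually go.
true_rate = {"A": 0.10, "B": 0.10}   # identical by construction
arrests = {"A": 12, "B": 10}         # slightly skewed historical records

for year in range(1, 6):
    hotspot = max(arrests, key=arrests.get)               # model "predicts" the hotspot
    patrols = {n: (80 if n == hotspot else 20) for n in arrests}
    for n in arrests:
        arrests[n] += patrols[n] * true_rate[n]           # you find crime where you look
    snapshot = {n: round(v, 1) for n, v in arrests.items()}
    print(f"year {year}: patrols={patrols} arrests={snapshot}")
```

Whichever area started with a couple more arrests on file gets the bulk of the patrols every year afterwards, and the records diverge to confirm the prediction.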
But by far the biggest problem is the widespread assumption that if the programmers try hard enough, the algorithm can do anything. Which, to be fair, is an assumption shared by everyone I have ever worked for, and is not only an AI issue. The UK’s porn filters and the proposed EU copyright filters are examples of systems that cannot possibly work. YouTube’s copyright filter has proven this over and over again but nobody seems to take any notice.
I’ve drifted off-topic but my point is that this story is entirely unsurprising to anyone who works in the field (and, I assume, many who don’t). It’s going to happen more and more. We’re increasingly at the mercy of black box algorithms trained by skewed data with – for all anyone knows – capricious or malevolent intent. It’s as dystopian as hell.
further readings?
“Algorithms of Oppression” – Safiya Umoja Noble
https://nyupress.org/books/9781479837243/
“Weapons of Math Destruction” – Cathy O’Neil
https://weaponsofmathdestructionbook.com/
This too: https://www.amazon.co.uk/Click-Here-Kill-Everybody-Hyper-connected/dp/0393608883/
And if you prefer your cold, stark reality in fiction form, Cory Doctorow writes stories about this sort of stuff.
As the old saying goes: To err is only human; to really fuck up requires computers.
That’s written on my business cards.
#6, that’s the best thing I’ve seen today.