Machine Learning Has a Weakness: Humans

Study finds that artificial intelligence can adopt our racism, sexism
By Elizabeth Armstrong Moore,  Newser Staff
Posted Apr 14, 2017 12:03 PM CDT

Artificial intelligence has come a long way in recent years, and machine-learning algorithms are proving skillful at things like playing poker, lip-reading, and, unfortunately, being biased. Researchers at Princeton proved the point in an experiment involving an algorithm known as GloVe, which was trained on a corpus of some 840 billion words from the internet, notes Wired. For their study, the researchers adapted a word-pairing test used to gauge bias in humans to do the same for the GloVe system. The upshot? Every single human bias they tested showed up, they report in the journal Science.
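The word-pairing idea can be sketched in a few lines: for a target word, compare its average similarity to one set of attribute words against another. This is a minimal illustration, not the researchers' actual code, and the tiny vectors below are hypothetical stand-ins for real GloVe embeddings.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity: how closely two word vectors point the same way.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(word_vec, attr_a, attr_b):
    # Mean similarity to attribute set A minus mean similarity to set B.
    # A positive score means the word sits closer to set A in the embedding space.
    mean_a = sum(cosine(word_vec, a) for a in attr_a) / len(attr_a)
    mean_b = sum(cosine(word_vec, b) for b in attr_b) / len(attr_b)
    return mean_a - mean_b

# Hypothetical 3-d vectors: a target word like "math" versus two attribute
# sets (e.g. male-associated terms A and female-associated terms B).
target = [0.9, 0.1, 0.2]
set_a = [[0.8, 0.2, 0.1], [0.7, 0.3, 0.2]]
set_b = [[0.1, 0.9, 0.3], [0.2, 0.8, 0.4]]

print(association(target, set_a, set_b) > 0)  # True: target leans toward set A
```

In the actual study, a statistically significant score of this kind, computed over sets of real words, is what reveals the embedded bias.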

"It was astonishing to see all the results that were embedded in these models," one researcher tells Live Science. They found examples of ageism, sexism, racism, and more—everything from associating men more closely with math and science and women with arts to seeing European-American names as more pleasant than African-American ones. "We have learned something about how we are passing on prejudices that we didn't even know we were doing," says another researcher. Just as Twitter users taught Microsoft's chatbot Tay to unleash neo-Nazi rants on social media last year, so, too, does this oft-used algorithm learn from our own behaviors, regardless of whether they are good or bad. (Elon Musk calls AI our "biggest existential threat.")
