Can robots be racist? It appears that way, per results from Beauty.AI, a beauty competition designed to take prejudices out of the mix by having algorithms do the judging instead of humans. But results from the competition indicate that even 'bots have biases, the Guardian reports. Forty-four winners were chosen out of about 6,000 entrants from all over the globe who uploaded pics to Youth Laboratories' site, allowing the "robot jury" to make its assessments based on supposedly objective criteria such as facial symmetry and how many wrinkles and pimples a person had, TNW.com notes. But the winners had one thing in common: They were mostly white.
The five-robot panel selected only a few Asian contestants and just one dark-skinned entrant, in the women's 40-49 age category. So what gives? Alex Zhavoronkov, chief science officer for the Microsoft-supported project, says there simply weren't enough photos of minorities in the data used to train the attractiveness-measuring algorithms—a gap that could stem from deep biases the engineers didn't even realize they had. A computer science professor at Haverford College says there are ways to cut down on droid bias (keeping a closer eye on the data fed in, for one), and Beauty.AI says it's going to work on the issues for the next contest. (Microsoft's chatbot went berserk earlier this year.)