Racist and sexist AIs are a reflection of the human biases fed into these systems

Artificial intelligence (AI), used widely in algorithms and robotic devices, can be racist and sexist just like humans, a new study reveals.

The study, conducted by researchers at Princeton University and published in the journal Science, shows how artificial intelligence that learns from human language is likely to pick up biases the same way humans do. The biases range from linking women with stereotypical arts and humanities jobs, to associating European American names with words that describe happiness, to matching pleasant words with white faces.

The research focused on a machine learning tool known as "word embedding", which helps computers interpret speech and text. It represents language algorithmically, distilling the meaning of each word into a series of numbers, a vector, so that words used in similar contexts end up with similar vectors.
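To make the idea concrete, here is a minimal sketch of how embeddings encode meaning as numbers. The three-dimensional vectors below are made-up toy values; real embeddings such as word2vec or GloVe, the kind studied in the paper, have hundreds of dimensions learned from large text corpora.

```python
# Toy illustration of word embeddings: meaning distilled into vectors.
# These numbers are invented for the example, not real embedding values.
import numpy as np

embeddings = {
    "gift":  np.array([0.8, 0.1, 0.3]),
    "happy": np.array([0.7, 0.2, 0.4]),
    "agony": np.array([-0.6, 0.5, -0.2]),
}

def cosine_similarity(a, b):
    """Similarity of two word vectors: near 1.0 = related, negative = opposed."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that appear in similar contexts get similar vectors.
print(cosine_similarity(embeddings["gift"], embeddings["happy"]))  # high (~0.98)
print(cosine_similarity(embeddings["gift"], embeddings["agony"]))  # low (negative)
```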

"A major reason we chose to study word embeddings is that they have been spectacularly successful in the last few years in helping computers make sense of language," said Arvind Narayanan, a computer scientist at Princeton University and the paper's senior author.

The study showed the algorithm picking up deeply ingrained race and gender prejudices. For instance, it associated European American names with pleasant words such as "gift" or "happy", while African American names were more often associated with unpleasant words.
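The paper measures such associations with a Word-Embedding Association Test (WEAT), an adaptation of the psychologists' Implicit Association Test: it compares how close a target word's vector sits to sets of pleasant versus unpleasant words. Below is a hedged sketch of that idea; the vectors are illustrative stand-ins, not values from the study.

```python
# Sketch of a WEAT-style association score with made-up vectors.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(name_vec, pleasant_vecs, unpleasant_vecs):
    """Mean similarity to pleasant words minus mean similarity to unpleasant
    words. A positive score means the name leans toward 'pleasant'."""
    pleasant = np.mean([cosine(name_vec, v) for v in pleasant_vecs])
    unpleasant = np.mean([cosine(name_vec, v) for v in unpleasant_vecs])
    return pleasant - unpleasant

# Invented 2-D vectors standing in for embeddings learned from web text.
pleasant = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]      # e.g. "gift", "happy"
unpleasant = [np.array([-0.7, 0.3]), np.array([-0.8, 0.1])]  # e.g. "agony", "evil"
name = np.array([0.85, 0.15])  # a name vector shaped by biased training text

print(association(name, pleasant, unpleasant))  # > 0: leans toward "pleasant"
```

In the study itself, names that scored consistently higher or lower on tests like this revealed the race and gender associations absorbed from the training text.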

In another example, when two otherwise identical CVs were compared, a candidate with a European American name was 50% more likely to be selected than one with an African American name.

"The world is biased, our historical data is biased, hence it is not surprising that we receive biased results here too," Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford told the Guardian.

According to Wachter, knowing where an AI picks up bias could actually help researchers counteract it where appropriate.
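One published approach to counteracting embedding bias, separate from the study discussed here, is "hard debiasing" (Bolukbasi et al., 2016): identify a bias direction in the vector space and remove each word's component along it. A minimal sketch, with an invented bias direction and word vector:

```python
# Sketch of hard debiasing: project the bias component out of a word vector.
# The vectors are illustrative; in practice the bias direction is derived
# from word pairs such as "he" - "she" in a real embedding space.
import numpy as np

def remove_bias_component(word_vec, bias_direction):
    """Subtract the projection of word_vec onto the (unit) bias direction."""
    bias_direction = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, bias_direction) * bias_direction

bias_direction = np.array([1.0, 0.0])
profession = np.array([0.6, 0.8])  # a job title that absorbed gendered usage

print(remove_bias_component(profession, bias_direction))  # -> [0.0, 0.8]
```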

"At least with algorithms, we can potentially know when the algorithm is biased," she said. "Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us."