Guardian Scribe Blames White Men For ‘Racist and Sexist’ Robots

By Masha Froliak | 10:39 am, April 24, 2017

An article in The Guardian, headlined “Robots are racist and sexist. Just like the people who created them,” claims that robots are biased because they are created by white, straight men. There is only one problem with this bold assertion: no evidence, no data.

A recent study does indicate that artificial intelligence (AI) programs tend to exhibit gender and racial bias, absorbed through the algorithms that interpret text found online. But those algorithms are not yet equipped to counter the biases concealed within the patterns of language; they cannot yet make sense of the language itself. Suggesting that the AI might be sexist because of its white male engineers is thin gruel.

Laurie Penny, a feminist columnist and author, writes:

Robots have been racist and sexist for as long as the people who created them have been racist and sexist, because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics.

The author suggests that robots are being fed information by male engineers, and that this is naturally a serious problem. But are the engineers really at fault here?

According to the cited study, some artificial intelligence algorithms associate names like Adam and Courtney with words like “peace” and “friend,” while linking Jamel and Jasmine more strongly with “murder” and “abuse.” Another Guardian article notes that, in this algorithmic “language space,” words like “female” and “woman” were more closely associated with the arts and humanities, while “male” and “man” sat closer to math and engineering.

The machine-learning tool used in the study built its associations from 840 billion words taken, exactly as they appear, from material published online. The AI reflected the average person’s view, picking up associations from how words appear together in those texts.
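To make the mechanics concrete: tools of this kind represent each word as a vector of numbers, and “association” is simply geometric closeness between vectors, usually measured by cosine similarity. Below is a minimal sketch of that measurement in Python; the four-dimensional vectors are invented purely for illustration, whereas a real model learns hundreds of dimensions from word co-occurrence statistics.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: near 1.0 for closely associated words,
    near 0.0 for unrelated ones."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for learned word embeddings. These numbers
# are invented for this example only; a trained model derives them
# from how often words appear together in real text.
vectors = {
    "adam":  np.array([0.9, 0.1, 0.2, 0.0]),
    "jamel": np.array([0.1, 0.8, 0.1, 0.3]),
    "peace": np.array([0.8, 0.2, 0.3, 0.1]),
    "abuse": np.array([0.2, 0.9, 0.0, 0.2]),
}

for name in ("adam", "jamel"):
    for attribute in ("peace", "abuse"):
        score = cosine(vectors[name], vectors[attribute])
        print(f"{name} ~ {attribute}: {score:.2f}")
```

If a name’s vector sits measurably closer to “abuse” than to “peace,” that lean came from the training text, not from any rule an engineer wrote.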

But according to Penny, men are to blame even for the texts that exist online. She claims that robots are bigoted because they learn from texts written mainly by white, western men.

Machines learn language by gobbling up and digesting huge bodies of all the available writing that exists online. What this means is that the voices that dominated the world of literature and publishing for centuries—the voices of white, western men—are fossilised into the language patterns of the instruments influencing our world today, along with the assumptions those men had about people who were different from them.

It is not surprising that older texts conceal biases and prejudices that are unacceptable in contemporary society. Scientists seem to agree that the algorithms designed to interpret language should be improved: they should be able to detect when bias creeps in and then act on it. But there is a long road from acknowledging these shortcomings to concluding that white men are racists.
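Detection, at least, is tractable. One standard approach in the embedding-bias literature, of the kind the cited study relied on, scores whether one group of target words leans systematically closer to pleasant attributes than another group does. Here is a rough sketch of that differential score, again with placeholder vectors rather than a trained model:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant attributes minus mean similarity to
    unpleasant ones; positive means the word leans 'pleasant'."""
    return (np.mean([cosine(word, a) for a in pleasant])
            - np.mean([cosine(word, b) for b in unpleasant]))

def group_bias(group_x, group_y, pleasant, unpleasant):
    """Difference in average lean between two groups of target words.
    A score far from zero flags a systematic bias worth acting on."""
    lean_x = np.mean([association(w, pleasant, unpleasant) for w in group_x])
    lean_y = np.mean([association(w, pleasant, unpleasant) for w in group_y])
    return lean_x - lean_y

# Placeholder random vectors; in practice these come from the model.
rng = np.random.default_rng(0)
make = lambda: rng.normal(size=50)
print(group_bias([make(), make()], [make(), make()], [make()], [make()]))
# Random vectors carry no systematic lean; embeddings trained on web
# text are where studies find consistently skewed scores.
```

Flagging a skewed score is the easy part; deciding how the model should then “act on it” is where the research remains young.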
