Amazon got into hot water for not catering to some minority communities with its free same-day delivery service. (Getty Images)

Amazon has been accused of inadvertent racism after it was discovered that its same-day Prime delivery service didn't extend to a number of predominantly black neighbourhoods in several major US cities. The controversy followed Bloomberg's analysis of Amazon Prime delivery data showing which postcodes were covered, compared with census data on the ethnic make-up of those areas.

The researchers discovered that minority urban neighbourhoods in New York, Washington DC, Boston, Chicago, Dallas and Atlanta were less likely to have access to same-day delivery on Prime. In New York City, for example, every borough was covered except the Bronx and Queens, where black and Hispanic residents are in the majority.

Amazon told Bloomberg that it prided itself on treating every shopper exactly the same, offering the same prices no matter where they lived, and said that demographics played no part in how the service was run. The internet giant explained that it had focused first on postcodes that already had a high concentration of Prime members, and that over time the service would expand to cover as many areas in as many cities as possible.
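Amazon has not published the logic behind these rollout decisions, but the heuristic it describes, prioritising postcodes dense with existing Prime members, can be sketched in a few lines of Python. All the postcodes, membership records and the cutoff below are invented for illustration:

```python
from collections import Counter

# Hypothetical membership records: one entry per existing Prime subscriber,
# keyed by postcode.
prime_members = ["10001", "10001", "10001", "10002", "10002", "10451"]

# Count subscribers per postcode.
member_density = Counter(prime_members)

# Assumed cutoff: offer same-day delivery wherever at least MIN_MEMBERS
# subscribers already live.
MIN_MEMBERS = 2
same_day_postcodes = {
    postcode for postcode, count in member_density.items()
    if count >= MIN_MEMBERS
}

print(sorted(same_day_postcodes))  # ['10001', '10002'] -- 10451 misses the cutoff
```

Nothing in a rule like this mentions race, but if historical subscription patterns already skew along demographic lines, the cutoff reproduces that skew in the service map.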

And indeed, after the Bloomberg report came out and local officials in the affected cities complained to the media, Amazon announced in May that it would expand Prime Free Same-Day Delivery to cover all postcodes in New York City and Boston, as well as neighbourhoods on Chicago's South Side.

Poorer communities struggle to find quality low-cost goods

Is it possible for the algorithms behind Amazon Prime's delivery decisions to be inherently racist? (iStock)

But Preston Gralla, a contributing editor at Computerworld, argues that although Amazon is not being intentionally racist, its algorithms rely on data historically tainted by racism, so their decisions about where to offer the service are inherently racist in their outcomes.

"The Amazon decision hits poor minority neighbourhoods particularly hard, because it's much more difficult for people in those neighbourhoods to buy useful goods at reasonable prices because there are fewer stores and worse public transportation than in other neighbourhoods," Gralla writes.

"The irony is that it's easier for people in wealthy neighbourhoods to find low-cost, quality goods than for people in poor and minority neighbourhoods because wealthier neighbourhoods are closer to more stores and have better public transportation. So the Amazon same-day delivery service could be of the most help to the very people to whom Amazon won't provide it."

Can Big Data be racist?

Gralla says the Amazon Prime incident points to a much bigger problem in the technology industry: technology experts claim that impartial computers analysing big data produce logical, accurate, bias-free decisions. In reality, the results can still be biased and prejudiced, because humans choose which data gets analysed, so the unfair status quo of the past persists.

"The Amazon algorithm operates off of an inherited cartography of previous red-lining efforts, which created pockets of discrimination, the consequence being that the discrimination continues to be reproduced," Jovan Scott Lewis, a professor at the University of California, Berkeley's Haas Institute for a Fair and Inclusive Society told USA Today.

Numerous studies have found that the application of big data is neither unbiased nor impartial. For example, a 2013 study by Harvard University's Data Privacy Lab found that companies providing criminal background checks were buying Google AdWords placements so that when someone searched for a name typically "racially associated" with black people, such as "Jermaine", "Darnell" or "DeShawn", the ads appearing next to the results were suggestive of an arrest record between 81% and 100% of the time.

In contrast, when users searched for names typically given to white people, such as "Emma" or "Geoffrey", ads suggestive of an arrest appeared only 23-29% of the time, showing that racial profiling was inherent in the delivery of the ads. The researchers do not know exactly why this happens, but they are concerned that negative ads appearing alongside Google searches could harm a person's career prospects.
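Put as simple arithmetic, and taking the midpoints of the two ranges the study reported, the disparity works out to roughly a factor of three and a half:

```python
# Midpoints of the ad-rate ranges quoted above.
black_name_rate = (0.81 + 1.00) / 2  # arrest-suggestive ads, black-associated names
white_name_rate = (0.23 + 0.29) / 2  # arrest-suggestive ads, white-associated names

print(f"{black_name_rate / white_name_rate:.1f}x")  # ~3.5x more likely
```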

So can big data be racist? Probably not with any deliberate intent, and it does make sense that Amazon would roll out its delivery service first in the areas where it knows it has the most customers willing to pay for its premium Prime subscription.

Computers are not racist; they are machines. But the people who choose which data a computer analyses can, whether or not they intend to, build discrimination into that choice, and the outcome is then inherently discriminatory.