Big Data
In a world where computers make everything easier for us, there are concerns that big data is actually driving discrimination and inequality in society. (Image credit: iStock)

A Harvard mathematician turned social activist is warning that big data is driving inequality in society, because important decisions affecting our daily lives – decisions that would previously have been made by discerning humans – are now made by mathematical computer algorithms.

Cathy O'Neil was formerly a quantitative trader who used powerful software programs to monitor market movements and trading patterns in real time in order to make fast, clever decisions on the stock market. After that, she launched a technology startup developing a service for targeted advertising.

She became disillusioned when she realised that computer algorithms were being used in ways that actively thwart equality and threaten democracy. She is now a social activist and has written a book called Weapons of Math Destruction.

But why is big data so bad?

Weapons of Math Destruction by Cathy O'Neil. (Image credit: Crown Publishing Group)

When big data first became a buzzword, it was touted as a way to quickly analyse huge amounts of data and find meaningful connections, without requiring humans to sift through piles and piles of information manually – a process that is time consuming and can lead to mistakes.

And while computer algorithms don't have feelings or an agenda – they just execute commands as programmed – humans are the ones choosing what data is analysed, which means the results can still be biased and prejudiced.
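To make the point concrete, here is a minimal, purely hypothetical scoring sketch in Python; the feature names, weights and numbers are invented for illustration and do not come from any real hiring system:

```python
# Purely illustrative sketch: the scoring code itself is neutral -- it just
# applies whatever weights it is given -- but if those weights were derived
# from historically biased decisions, the bias is reproduced automatically.
# All feature names and numbers here are invented for the example.

LEARNED_WEIGHTS = {"years_experience": 0.4, "degree": 0.3, "postcode_score": 0.3}

def hiring_score(applicant):
    """Weighted sum of the applicant's features -- no feelings, no agenda."""
    return sum(LEARNED_WEIGHTS[f] * applicant[f] for f in LEARNED_WEIGHTS)

# Two equally qualified applicants who differ only in where they live.
a = {"years_experience": 0.8, "degree": 1.0, "postcode_score": 0.9}
b = {"years_experience": 0.8, "degree": 1.0, "postcode_score": 0.2}

print(hiring_score(a))  # roughly 0.89 -- favoured neighbourhood
print(hiring_score(b))  # roughly 0.68 -- penalised purely through proxy data
```

The algorithm never "decides" to discriminate; the difference in outcomes comes entirely from what the humans chose to feed into it.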

O'Neil warns that no one is regulating big data algorithms, which leaves them free to judge and differentiate between those who are "worthy" and "unworthy", and that this process is in fact highly discriminatory.

You might not think big data is part of your life, but think about it. When you surf the internet, your browsing habits are tracked in order to serve you targeted ads. When your children go to school, their grades help decide whether their teachers keep their jobs.

And if you want to apply for a loan or take out insurance on your life, property or possessions, you no longer get to tell an advisor at a bank or financial company your sob story in person. Instead, data about your credit rating, spending habits, health, where you live and what you do are fed into a computer, which makes an automatic decision about whether you are eligible for credit, or about how much your insurance premium should be.
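What such an automated decision might look like can be sketched in a few lines of Python. This is a hypothetical illustration only, with invented field names, weights and thresholds, rather than any real bank's or insurer's model:

```python
# Hypothetical sketch of an automated underwriting decision. Every field
# name, weight and threshold is invented for illustration; real lenders'
# and insurers' models are proprietary and far more complex.

def loan_decision(credit_rating, monthly_spending, income, postcode_risk):
    score = (0.5 * credit_rating
             - 0.3 * (monthly_spending / max(income, 1))
             + 0.2 * (1 - postcode_risk))
    return "approved" if score > 0.55 else "declined"

def insurance_premium(base_premium, health_risk, postcode_risk):
    # The premium is scaled up automatically from risk factors; there is
    # no human in the loop to hear the applicant's individual circumstances.
    return round(base_premium * (1 + 0.6 * health_risk + 0.4 * postcode_risk), 2)

print(loan_decision(credit_rating=0.9, monthly_spending=1200, income=3000, postcode_risk=0.7))  # "declined"
print(insurance_premium(base_premium=300.0, health_risk=0.2, postcode_risk=0.7))  # 420.0
```

Notice that a factor like postcode, which stands in for where you live, can quietly push an otherwise creditworthy applicant below the approval threshold.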

Predictive policing

If you do something wrong and get sent to prison, then you can expect to have your rights taken away from you, and many prisoners feel as if they cease to be individuals when they are incarcerated. But how would you feel if you knew that this cold, indifferent approach was being applied before any wrongdoing had even been committed?

In the book, O'Neil explains that law enforcement now uses two kinds of computer algorithm, namely predictive policing and recidivism risk scoring, to help decide which neighbourhoods the police should target when they are looking to make arrests. Apparently the legal system also uses computer algorithms to decide which criminal defendants should go to jail for longer, by assessing how great a risk they pose to society.
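A recidivism risk score of this kind boils down to a weighted checklist. The sketch below is a hypothetical Python illustration, not the actual software used by any police force or court, and it shows how inputs that correlate with policing intensity can feed the scores back on themselves:

```python
# Hypothetical recidivism risk score -- the features, weights and cut-off
# are invented for illustration and do not represent any real police or
# court software. Inputs such as prior arrests or neighbourhood crime rate
# can act as proxies for how heavily an area is already policed, which is
# how a destructive feedback loop can form.

def recidivism_risk(prior_arrests, age, neighbourhood_crime_rate, employed):
    score = (0.05 * prior_arrests
             + 0.4 * neighbourhood_crime_rate
             + (0.0 if employed else 0.2)
             + (0.2 if age < 25 else 0.0))
    return min(score, 1.0)

risk = recidivism_risk(prior_arrests=3, age=22, neighbourhood_crime_rate=0.8, employed=False)
print("high risk" if risk > 0.6 else "low risk")  # a label that can inform sentencing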

Cheaper than using humans

The problem is that programming a computer algorithm and putting it to work evaluating large numbers of people is much cheaper than hiring humans to do the same job. So when it comes to decisions about who to hire for a job or who to accept into higher education, those in less privileged neighbourhoods are subjected to "faceless educators and employers", whereas the wealthy get to deal with actual people.

"[The algorithms are] high-impact, they affect a lot of people. It's widespread and it's an important decision that the scoring pertains to, so like a job or going to jail, something that's important to people. So it's high-impact. The second one is that the things that worry me the most are opaque. Either that means that the people who get the scores don't understand how they're computed or sometimes that means that they don't even know they're getting scored," O'Neil told National Public Radio (NPR).

"Like if you're online, you don't even know you're scored but you are. And the third characteristic of things that I care about, which I call weapons of math destruction, the third characteristic is that they are actually destructive, that they actually can really screw up somebody's life. And most of the time, these algorithms are created with, like, good intentions in mind. But this destructiveness is typically undermines that good intention and actually creates a destructive feedback loop."

Is big data really impartial?

Numerous studies of big data have found that its application and use are neither unbiased nor impartial. In May, Amazon came under fire when its same-day delivery algorithm was found to exclude a number of predominantly black neighbourhoods in several major US cities.

Also, in 2013 a study by Harvard University's Data Privacy Lab discovered that companies providing criminal background checks were purchasing Google AdWords for stereotypically African-American names. When a name "racially associated" with black people, such as Jermaine, Darnell or DeShawn, was searched, the ads served were likely to relate to an arrest record between 81% and 100% of the time.

In contrast, when users searched for names typically given to white people, such as "Emma" or "Geoffrey", ads suggestive of an arrest appeared only between 23% and 29% of the time, suggesting that racial profiling was inherent in the delivery of the ads. The researchers do not know exactly why this happens, but they are concerned that negative ads appearing alongside Google searches could affect a person's career prospects.