Artificial Intelligence has grown exponentially in the past few years, but its advances also bring the risk of abuse. Kate Crawford, a principal researcher at Microsoft Research, warned that AI, and the human biases subtly coded into it, could be exploited by authoritarian regimes. Speaking at SXSW in a session titled DARK DAYS: AI and the Rise of Fascism, Crawford cautioned that machine intelligence could be used to further fascism.
Crawford also highlighted the "nasty history" of people using facial features to "justify the unjustifiable", warning that AI could incorporate the core assumptions of phrenology and other forms of discrimination and mask them in a "black box" of algorithms.
"Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism," she said, the Guardian reported.
She also warned that AI could be used to create registries that governments could exploit to target certain groups of the population. She outlined historical cases of such abuse, including IBM's role in supplying Nazi Germany with Hollerith machines that were reportedly used to track Jewish and other ethnic groups.
President Donald Trump has raised the possibility of the US government building a Muslim registry. "We already have that. Facebook has become the default Muslim registry of the world," Crawford said, referring to research from Cambridge University showing that it is now possible to predict an individual's religious beliefs from what they "like" on social media.
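The Cambridge finding rests on ordinary supervised learning: treat each page "like" as a binary feature and fit a classifier against a known trait. The sketch below is a rough illustration of that idea only, not the study itself; the data, the number of pages, and the trait are all synthetic, and the model is a plain logistic regression fit by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 50
# Each row is one user; 1 means the user "liked" that (synthetic) page.
likes = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)

# Invented ground truth: only the first 5 pages are informative of the trait.
true_w = np.zeros(n_pages)
true_w[:5] = 2.0
logits = likes @ true_w - 5.0
trait = (rng.random(n_users) < 1 / (1 + np.exp(-logits))).astype(float)

# Fit logistic regression by plain gradient descent.
w = np.zeros(n_pages)
b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(likes @ w + b)))
    w -= 0.5 * (likes.T @ (p - trait) / n_users)
    b -= 0.5 * (p - trait).mean()

pred = (1 / (1 + np.exp(-(likes @ w + b)))) > 0.5
accuracy = (pred == trait).mean()
print(f"in-sample accuracy: {accuracy:.2f}")
```

Even with only a handful of informative pages among many irrelevant ones, the model recovers the trait well above chance, which is the mechanism behind Crawford's warning: a platform's "like" data doubles as a de facto registry.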
Crawford voiced concerns over the possible use of machine intelligence to manipulate political beliefs, pointing out that this is something Facebook and Cambridge Analytica claim they can already accomplish. Although she was sceptical of such claims, she acknowledged that manipulating the public could become possible "in the next few years".
"This is a fascist's dream," she said. "Power without accountability."
Crawford warned that such "black box" systems are already being incorporated into the US government. For example, Peter Thiel's Palantir is reportedly creating a system to enable the Trump administration's plan to deport thousands of immigrants. "It's the most powerful engine of mass deportation this country has ever seen," Crawford said.
She also called for making AI systems more transparent to ensure accountability. "The ocean of data is so big. We have to map their complex subterranean and unintended effects."
"We should always be suspicious when machine learning systems are described as free from bias if they have been trained on human-generated data," Crawford said. "Our biases are built into that training data."
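Crawford's point about training data can be shown with a toy example. Everything below is invented for illustration: a classifier is trained on historically biased hiring labels, with the protected group attribute deliberately excluded from the features, yet it still reproduces the bias because a correlated proxy feature (think of a postcode) stands in for the group.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                  # protected attribute (0/1)
skill = rng.normal(size=n)                     # genuinely job-relevant
proxy = group + rng.normal(scale=0.3, size=n)  # e.g. postcode, correlated with group

# Historical labels are biased: group 0 was held to a much higher bar.
hired = ((skill - 1.5 * (group == 0)) > 0).astype(float)

# Train on (skill, proxy) only -- the protected attribute is "removed".
X = np.column_stack([skill, proxy])
w = np.zeros(2)
b = 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - hired) / n)
    b -= 0.5 * (p - hired).mean()

pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"predicted hire rate, group 0: {rate0:.2f}; group 1: {rate1:.2f}")
```

The model never sees the group label, yet its predicted hire rates diverge sharply between groups: the bias in the human-generated labels survives inside the "black box", exactly the opacity Crawford wants mapped and made accountable.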