Expert warns that AI brains are not infallible and have even been found to “make bad decisions” that can harm humans

We’re learning more every day about the price to be paid for all the conveniences modern technology brings us, and while some of the potential pitfalls of artificial intelligence (AI) are rather obvious, others are a bit more insidious.

New York University Research Professor Kate Crawford and a group of colleagues are so concerned about the social implications of AI that they’ve established The AI Now Institute to study it.

In a recent piece for the Wall Street Journal, Crawford expressed her concerns about the way that AI systems base their learning on social data reflecting human history, which is full of prejudices and biases. Making matters worse is the fact that algorithms can unwittingly boost such biases, which is something that has already been demonstrated in studies.

In some of its applications, the ramifications could be significant. She wrote: “It’s a minor issue when it comes to targeted Instagram advertising but a far more serious one if AI is deciding who gets a job, what political news you read or who gets out of jail.”

For example, last year, ProPublica reported that a widely used criminal risk-assessment algorithm was skewed against African Americans. Racial disparities in a formula used to determine a person’s risk of re-offending made the system more likely to flag African American defendants as potential future criminals while incorrectly identifying white defendants as being at lower risk.

When the AI was tasked with analyzing a group of 7,000 people who were arrested in Florida during 2013 and 2014 and determining who was likely to go on to re-offend within two years, its record was shockingly poor; only one in five of those it predicted would commit violent crimes again actually did so.
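The “one in five” figure is a statement about the algorithm’s precision: of everyone it flagged as likely to commit a violent crime, only about 20 percent actually did. A minimal sketch of that arithmetic, using hypothetical counts (the real ProPublica totals are not reproduced here):

```python
# Hypothetical counts illustrating the "one in five" figure:
# of everyone flagged as likely to commit a violent crime,
# only about 20% actually re-offended violently within two years.
flagged_violent = 1000        # hypothetical number of people flagged
actually_reoffended = 200     # roughly 1 in 5 of those flagged

precision = actually_reoffended / flagged_violent
print(f"precision: {precision:.0%}")  # precision: 20%
```

In other words, four out of every five people the system labeled as future violent offenders were false positives.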

This has prompted worries that, unless action is taken now, we could face a “toxic” future in which machines make poor decisions in place of humans.

Are AI systems only as good as the humans programming them?

AI systems use neural networks, which attempt to simulate the way the human brain learns, and can be trained to find patterns in speech, text and images. When the data they learn from contains human flaws and biases, those prejudices can become exaggerated, because the patterns they encode are given undue weight in decision making.
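The amplification effect described above can be shown with a deliberately simple toy model (a made-up example, not any real system): a classifier that predicts the most common historical label for each group will turn a modest statistical gap in biased training data into an absolute rule.

```python
from collections import Counter

# Hypothetical historical data: outcome rates differ between two groups
# because of past human bias, not any true underlying difference.
# Group "A" gets a positive label 60% of the time, group "B" only 40%.
training_data = ([("A", 1)] * 60 + [("A", 0)] * 40 +
                 [("B", 1)] * 40 + [("B", 0)] * 60)

def train_majority_model(data):
    """A naive model: predict the majority label seen for each group."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train_majority_model(training_data)

# The 20-point gap in the data becomes a 100-point gap in the output:
# every "A" is now predicted positive, every "B" negative.
print(model)  # {'A': 1, 'B': 0}
```

Real neural networks are far more sophisticated than this majority vote, but the underlying failure mode is the same: a skew in the training data becomes a sharper skew in the model’s decisions.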

It’s a legitimate concern at a time when Google’s machine-learning systems have begun designing other machine-learning systems, moving machines closer to creating complex AI with little input from humans. Google’s AutoML, an AI designed to help the company create new AIs, has reportedly outdone human engineers by producing machine-learning software more powerful and efficient than comparable human-designed systems.

Helpful today, harmful tomorrow?

Google’s AI agents have already learned aggressive behavior in competitive game simulations. How far off could we be from AI technology that uses its power for evil rather than good? A group of experts at the International Joint Conference on Artificial Intelligence in Argentina expressed concern that if AI technology continues developing unabated, autonomous weapons that operate without human input could eventually carry out ethnic cleansing campaigns, mass genocide, and other atrocities. They said such weapons were feasible within years, not decades.

AI helps us in many ways – it’s an important part of many fraud detection and security measures, for example – but it’s entirely possible that something that was designed to help humans could end up causing us a great degree of harm.
