A doctoral student at Oregon State University’s College of Engineering has developed a new approach to making artificial intelligence less systematically biased.
To train AI models, computer scientists use publicly available data from the web. To cut computing costs and reduce the amount of redundant data, that information often goes through a process called deduplication.
If a system is fed 100 photos, deduplication might filter that set down to just 50.
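In practice, deduplication is usually done by comparing learned embedding vectors of the items rather than the raw files. The snippet below is a minimal, generic sketch of that idea, not the specific pipeline the researchers studied: it greedily keeps an image only when it is not too similar to anything already kept, and the similarity threshold is illustrative.

```python
import numpy as np

def deduplicate(embeddings: np.ndarray, threshold: float = 0.95) -> list[int]:
    """Greedy near-duplicate pruning: keep an item only if its cosine
    similarity to every item already kept is below `threshold`."""
    # Normalize rows so dot products are cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept: list[int] = []
    for i, vec in enumerate(normed):
        if all(vec @ normed[j] < threshold for j in kept):
            kept.append(i)
    return kept

# Toy usage mirroring the article's example: 50 distinct "images", each
# appearing twice with slight noise, prunes from 100 down to roughly 50.
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 16))
emb = np.vstack([base, base + 0.01 * rng.normal(size=(50, 16))])
print(len(deduplicate(emb)))
```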
According to OSU doctoral student Eric Slyman, this procedure has led some AI models to filter out images based on factors such as race, age, and gender, skewing the data the AI ultimately learns from.
That skew has led to inaccuracies: an AI might generate only images of white men when prompted to depict a doctor, for example.
“If we’re basing the decisions that these systems are making based on precedent of our past decisions as a society, then we know our society hasn’t always done things in a way that’s fair,” Slyman told KLCC.
Along with researchers at Adobe, Slyman developed a technique to reduce bias at the stage of deduplication.
The new method, called FairDeDup, aims to build fairness considerations directly into how AI models are trained.
“Instead of looking at how we change the context in what an AI says, we’re changing the context of what an AI learns,” said Slyman.
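One way to picture that idea, purely as an illustrative sketch and not the published FairDeDup algorithm, is to make the pruning step itself fairness-aware: when several images are interchangeable near-duplicates, keep the one that improves coverage of whichever group is underrepresented among the data kept so far. The `attributes` labels below are a hypothetical stand-in for whatever fairness signal a real pipeline would use.

```python
import numpy as np
from collections import Counter

def fair_deduplicate(embeddings, attributes, threshold=0.95):
    """Near-duplicate pruning that, when several items are interchangeable,
    keeps the one whose (hypothetical) group label is currently least
    represented among the items kept so far."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    n = len(normed)
    unassigned = set(range(n))
    kept, counts = [], Counter()
    for i in range(n):
        if i not in unassigned:
            continue
        # Gather i plus all remaining items that are near-duplicates of it.
        group = [j for j in range(n)
                 if j in unassigned and normed[i] @ normed[j] >= threshold]
        # Keep the group member whose label is rarest among kept items so far.
        pick = min(group, key=lambda j: counts[attributes[j]])
        kept.append(pick)
        counts[attributes[pick]] += 1
        unassigned -= set(group)
    return kept

# Toy usage: four pairs of near-duplicate "images" with hypothetical labels.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 16))
emb = np.vstack([base, base + 0.01 * rng.normal(size=(4, 16))])
attrs = ["A", "A", "A", "B", "A", "B", "A", "A"]
print(fair_deduplicate(emb, attrs))  # keeps a more balanced A/B mix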
Developed and tested with Stefan Lee, an assistant professor in the OSU College of Engineering, the technique has gained recognition as an innovative way to reduce discrimination in AI.
Slyman said they hope their work will help mitigate bias while also giving tech companies an economic incentive to adopt it.
The full study on the FairDeDup method can be accessed here.