One of the chief issues with machine learning and artificial intelligence systems is that they, like the data scientists who create them, often have their own built-in biases.
Whether that’s favoring one portion of the population over another when deciding who deserves a loan, or misidentifying people of different ethnicities via facial recognition algorithms, machine learning programs have generated problematic outcomes and shaken trust in the technology.
Microsoft (MSFT) is attempting to address the issue of bias in machine learning with its new Fairlearn toolkit. The kit, which the tech giant announced will be made available via its Azure Machine Learning platform in June, will let companies developing machine learning models in Azure test their systems for biases that could dramatically impact people's lives.
The announcement came during Microsoft's annual Build developers conference this week. Rather than holding its usual live event in Seattle, the company hosted a virtual version of the show.
The Fairlearn toolkit debuted at Microsoft's Ignite event in November, but is being made generally available next month.
In explaining the importance of such tools, Microsoft used the example of EY, which tested Fairlearn on a machine learning model designed to automate loan decisions.
When the firm ran Fairlearn against the model, the toolkit revealed a significant gender bias in the loan algorithm: in the test, men were approved for loans at a rate 15.3 percentage points higher than women.
The algorithm was built using loan approval data from banks, which included information such as transaction, payment, and credit history.
But Microsoft says that historical data can introduce biases against applicants from certain demographics. And if those biases bleed into loan approvals, they can have a dramatic impact on individuals' lives.
According to Microsoft, when EY used the Fairlearn toolkit to train new machine learning models, it was able to cut the difference in loan approvals to 0.43 percentage points.
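The disparity EY measured is essentially a gap in approval rates between groups. As a rough sketch of that kind of check, the snippet below computes each group's approval (selection) rate and the gap between them, in the style of the "demographic parity difference" metric Fairlearn reports. The applicant data here is hypothetical, invented purely for illustration.

```python
def selection_rate(decisions):
    """Fraction of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def approval_rate_gap(decisions_by_group):
    """Largest gap in approval rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs: 1 = loan approved, 0 = denied.
approvals = {
    "men":   [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],   # 8 of 10 approved
    "women": [1, 0, 1, 0, 0, 1, 0, 1, 0, 0],   # 4 of 10 approved
}

gap = approval_rate_gap(approvals)
print(f"Approval-rate gap: {gap:.1%}")  # → Approval-rate gap: 40.0%
```

Fairlearn itself ships ready-made versions of this idea, including a `demographic_parity_difference` metric and mitigation algorithms that retrain models under fairness constraints, which appears to be how EY narrowed its gap.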
“Increasingly we’re seeing regulators looking closely at these models,” Eric Boyd, Microsoft CVP of Azure AI, said in a statement. “Being able to document and demonstrate that they followed the leading practices and have worked very hard to improve the fairness of the datasets are essential to being able to continue to operate.”
With machine learning algorithms being used across an increasingly wide range of applications, from facial recognition for law enforcement agencies to loan decisions at banks, ensuring bias isn't part of the equation will only become more important moving forward.
Got a tip? Email Daniel Howley at firstname.lastname@example.org or email@example.com, and follow him on Twitter at @DanielHowley.