
Microsoft bringing anti-bias tool to its Azure AI platform

Daniel Howley · Technology Editor · 3 min read

One of the chief issues with machine learning and artificial intelligence systems is that they, like the data scientists who create them, often have their own built-in biases.

Whether that’s favoring one portion of the population over another when deciding who deserves a loan, or misidentifying people of different ethnicities via facial recognition algorithms, machine learning programs have generated problematic outcomes and shaken trust in the technology.

Microsoft (MSFT) is attempting to address the issue of bias in machine learning with its new Fairlearn toolkit. The toolkit, which the tech giant announced will be available through its Azure Machine Learning platform in June, will let companies developing machine learning models in Azure test their systems for biases that could dramatically impact people’s lives.

The announcement came during Microsoft’s annual Build developers conference this week. Rather than its normal live event held in Seattle, the company hosted a virtual version of the show.

NEW YORK, NY - APRIL 30: The Microsoft store is seen on April 30, 2020 in New York City.  The company said the effects of the coronavirus may not be fully understood until future periods but it has seen an increase in the Cloud business as more people work from home. (Photo by Eduardo MunozAlvarez/VIEWpress via Getty Images)

The Fairlearn toolkit first debuted at Microsoft’s Ignite event in November, but is being made generally available next month.

In explaining the importance of such tools, Microsoft used the example of EY, which tested Fairlearn on a machine learning model designed to automate loan decisions.

When the firm ran Fairlearn on the model, it revealed a significant gender bias in the loan algorithm: in the test, the approval rate for men was 15.3 percentage points higher than the approval rate for women.
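The gap described above is what Fairlearn calls the "demographic parity difference": the spread between group-level approval (selection) rates. A minimal sketch of that measurement in plain Python, using invented decision data rather than EY's, looks like this:

```python
# Sketch of the disparity measure described above: the gap between
# group-level approval rates. The decision data here is invented
# purely for illustration.

def selection_rate(decisions):
    """Fraction of positive (approved) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in approval rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved, 0 = denied) per group.
decisions = {
    "men":   [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],   # 80% approved
    "women": [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],   # 50% approved
}

gap = demographic_parity_difference(decisions)
print(f"approval-rate gap: {gap:.1%}")
```

A gap of zero means every group is approved at the same rate; EY's initial 15.3-point result corresponds to a value of roughly 0.153 on this scale.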

The algorithm was built using loan approval data from banks, which included information like transaction, payment, and credit history.

But Microsoft says that can introduce biases against applicants from certain demographics. And if that bleeds into loan approvals, it can have a dramatic impact on individuals’ lives.

According to Microsoft, when EY used the Fairlearn toolkit to train new machine learning models, it was able to cut the difference in loan approvals to 0.43 percentage points.

“Increasingly we’re seeing regulators looking closely at these models,” Eric Boyd, Microsoft CVP of Azure AI, said in a statement. “Being able to document and demonstrate that they followed the leading practices and have worked very hard to improve the fairness of the datasets are essential to being able to continue to operate.”

With machine learning algorithms being used across an increasingly wide range of applications, from facial recognition for law enforcement agencies to loan decisions at banks, ensuring bias isn’t part of the equation will only become more important moving forward.

Got a tip? Email Daniel Howley at danielphowley@protonmail.com or dhowley@yahoofinance.com, and follow him on Twitter at @DanielHowley.
