In recent years, researchers and journalists have highlighted how artificial intelligence technology sometimes stumbles when it comes to minorities and women. Facial recognition technology, to name just one example, is more likely to misidentify dark-skinned women than light-skinned men.
Last week, AI Now, a research group at New York University, released a study about A.I.’s diversity crisis. It said that a lack of diversity among the people who create artificial intelligence and in the data they use to train it has created huge shortcomings in the technology.
For example, 80% of university professors who specialize in A.I. are men, the report said. Meanwhile, at leading A.I. companies, women comprise only 15% of the A.I. research staff at Facebook and just 10% at Google.
Furthermore, Timnit Gebru, an A.I. researcher at Google, is cited in the report as saying she was one of only six black attendees, out of 8,500, at a leading A.I. conference in 2016.
The report’s authors believe that the problem of A.I. discriminating against certain groups could be fixed if a more diverse set of people were involved in the technology’s development. And while tech companies say they are aware of the problem, they haven’t done much to fix it, the report said.
One possible solution is for tech companies to examine and repair workplace cultures that are off-putting to women and people of color. Most women, for instance, wouldn’t want to work at a company if they knew it tolerated bigotry and unequal pay between the genders.
Another way to improve workplace diversity is for companies to be more transparent, which signals to prospective employees that they are serious about being more accommodating. This could include publishing employee compensation figures broken down by race and gender, releasing harassment and discrimination reports that reveal the number of such incidents, and ensuring that executive salaries “are tied to increases in hiring and retention of under-represented groups.”
It’s these types of public steps that could lead to more people of diverse backgrounds working on A.I., ensuring that the next big A.I. breakthrough benefits everyone.
On a related note, Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the MIT Media Lab who did not work on the report discussed here, has done remarkable work chronicling A.I. bias problems in facial recognition systems. That work earned her a spot on Fortune’s World’s Greatest Leaders list, published last week. A number of other techies are also on the list.
A.I. IN THE NEWS
Gilead’s big bet on A.I. Gilead Sciences will pay healthcare startup Insitro $15 million to help the pharmaceutical giant use A.I. to develop new liver disease drugs, reported medical news service Stat News. Insitro, whose CEO, deep-learning expert Daphne Koller, helped create online education company Coursera, could receive up to $1 billion if it meets certain milestones.
Microsoft shuns facial-recognition contract. Microsoft president Brad Smith said the technology giant turned down a contract that would have let California law enforcement agencies incorporate the company’s facial-recognition tech in officers’ body cameras and vehicles, Reuters reported. The report said that Microsoft “concluded it would lead to innocent women and minorities being disproportionately held for questioning because the artificial intelligence has been trained on mostly white and male pictures.”
Giving Facebook a voice. Facebook’s augmented reality and virtual reality unit is developing a digital voice assistant akin to Amazon’s Alexa and Apple’s Siri, CNBC reported. The digital assistant could let users give voice commands to the company’s Oculus Rift virtual reality headsets or its Portal video conferencing device, which already comes with Alexa installed.
Intel’s chip spending. Intel said it bought specialized computer chip company Omnitek for an undisclosed amount. The semiconductor giant said that Omnitek’s technology would be used to create programmable computer chips that power A.I.-related computer-vision tasks like analyzing video streams.
THE BLACKBOX PROBLEM
Don’t expect unexplainable machine-learning technologies to be used in high-stakes scenarios such as helping nuclear plant operators manage their facilities. Dinkar Jain, the machine learning head of Facebook’s ad unit, said at a recent BootstrapLabs A.I. conference, “There’s absolutely no way I see society accepting machine learning as long as it is seen as a blackbox.”
EYE ON A.I. TALENT
The Democratic National Committee has chosen Nellwyn Thomas as chief technology officer. Thomas previously led data science teams at Facebook and Etsy and was the deputy chief analytics officer for Hillary Clinton’s 2016 presidential campaign.
Machine learning startup Algorithmia named Hernan Alvarez as vice president of product. Alvarez previously held leadership positions at companies including IBM and Hewlett Packard Enterprise.
Transaction Data Systems, a maker of pharmacy management software, picked Adam Wallace as the company’s chief technology officer. Wallace previously held leadership positions at companies like Oracle and MRI Software.
EYE ON A.I. RESEARCH
Crowdsourcing A.I. to tackle tumors. Researchers from Harvard Medical School, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, and other organizations published a paper about using crowdsourced methods to create A.I. systems that can scan and segment lung tumors. The researchers held a contest in which 564 participants created machine-learning systems, such as neural networks, that were as effective as, and even faster than, humans at segmenting tumors.
A.I. to screen inappropriate YouTube cartoons. Researchers from the Institute of Computing at the University of Campinas, Brazil, published a paper about using deep learning to detect controversial Elsagate videos, which resemble popular cartoons but contain characters doing disturbing or inappropriate things. The researchers also debuted a publicly available data set containing hundreds of hours of Elsagate videos so other A.I. researchers can create better content-filtering tools.
FORTUNE ON A.I.
A.I. Bias Isn’t the Problem. Our Society Is – By Alex Salkever and Vivek Wadhwa
Stripe Backs $40 Million Investment in A.I. Accounting Service Pilot – By Jeff John Roberts
Beware the A.I. arms race. Paul Scharre, a senior fellow and director of the technology and national security program at the Center for a New American Security, explains in Foreign Affairs the dangers of countries racing to beat each other in A.I. Scharre is concerned that in their haste to introduce A.I., countries will ignore the technology’s risks and occasional nuttiness. He cited a recent paper showing how an A.I. system learning to walk in a digital environment “discovered it could move fastest by repeatedly falling over” and a Tetris-playing A.I. bot that “learned to pause the game before the last brick fell, so that it would never lose.”