Hypergiant Chief Ethics Officer Will Griffin joins Yahoo Finance’s On The Move panel to discuss overcoming racial bias in AI systems and weigh in on the outlook for the tech industry.
JULIE HYMAN: The debate over diversity when it comes to AI has been around for a while, and of course, it's only heated up with the resurgence of the Black Lives Matter movement. And we want to talk more about this with Will Griffin. He is Chief Ethics Officer at Hypergiant, which is an AI company, and he's joining us from Austin, Texas. Will, thank you for joining us. This has been something that studies have talked about. That AI, even though you wouldn't think that it would show bias, has tended to show bias. So is that something that you're focused on? And if so, how do you program out bias, so to speak?
WILL GRIFFIN: Well, first of all, thank you for having me. I'm honored to be here. Bias is something that we focus on because all of the algorithms are actually created by human beings. And within human beings there is implicit bias. And so you have to be aware of that. And for us, we try to create a process.
So there are three elements to what we do. The first is more on the values side. We use what we call top of mind ethics. And that's to make sure that everybody who's involved in the project or in the workflow has ethics at the top of mind. And that has three pieces to it. The first is goodwill. Is there goodwill in this use case? That's the first question we ask. The second is the categorical imperative, which is: if this use case were applied by everyone in our company, every company in our industry, and every industry in the world, what would the world look like?
And then step three is the law of humanity. Are we using people as a means to an end? Or is the goal of the use case to benefit people? We found that this helps keep our team mindful of ethics at every step of the workflow. And we look at diversity as an ethical issue. Is it right or wrong? We don't see it as a separate issue in our organization that we [AUDIO OUT] later. It's part of our ethics and who we are and kind of what we're about. So it's a huge issue.
I think Black Lives Matter really brought facial recognition to a head. Because by focusing on the police, you then focus on the tools that they use to execute their powers. And facial recognition was a big part of it. And as you know and I'm sure you've covered, the bias of facial recognition came from the data sets in the facial libraries that were used to create it, which were primarily white data sets, so it was very inaccurate when it came to African-Americans and minorities. And the academic community, the ethics community, and the tech community have been pointing that out for years.
But then the kids went out with Black Lives Matter and started saying, defund the police. And when they started putting pressure on institutional investors, many of whom have ESG principles, those investors then put pressure on the companies. And they were the ones who actually got facial recognition taken off the market, at IBM first, then Amazon and Microsoft. So it all obviously weaves together, and we think it's related to ethics, which is really the question of right or wrong.
DAN HOWLEY: So, Will, when you look at something like diversity in AI, you talk about how the facial recognition was geared more towards white faces, right? How do you ensure that that doesn't happen? Do you have to ensure that the people who you hire, who use these data sets are of different ethnic groups to ensure that the data they're putting in is a full representation of people regardless of what type of data it is?
WILL GRIFFIN: Right. So it works two ways. The first way is in lieu of hiring, because there are a lot of impediments to that. They should hire, and I'm going to get to that in a second. But that doesn't mean that you're off the hook for ethics in the meantime. And the example I always use is that 31 years ago last month was the Tiananmen Square massacre.
When I looked at the Tiananmen Square massacre, that offended my values as an American and what I thought was right or wrong. I don't need to know any Chinese. I don't need to speak Mandarin. I don't need to know Chinese history. I know that offends my views. And given that, I should be able to act. So companies, even if they're not diverse, should be able to vet their use cases in the projects that they create against diversity and all other ethical metrics.
The second part is that if a tech company in the world wants to really make a difference, there are three ways. The first way is you need African-Americans in the C-suite. That's number one. Number two is, you need African-Americans on the board with equity. That's two. And then three, you need to hire African-Americans into positions of influence within your company.
Because McKinsey and Harvard Business Review have both put out studies that talk about how diversity actually increases innovation. So first, it's the right thing to do. And second, it will actually help you create more robust solutions. So at Hypergiant, over the last year, Ben Lamm, who's our founder, his vision, our vision, is delivering on the future we were promised.
And there can't be a future that we were promised if black lives don't matter. Because the future we were promised is a more fair, equitable, and more just world, and our solutions need to be a part of that. So I think in our company, Ben, our CEO, has been a great champion of it. But within our company, our head of UI and UX design, his name is Chris Klee.
ADAM SHAPIRO: Will.
WILL GRIFFIN: He's been working on this over the last year. He hired five African-American designers, which is 20% of the AI designers in our company. And they're three men and two women. And I hold him up as an example within our company all the time, and within our industry, because if you are committed to it, you will be able to find the talent.
ADAM SHAPIRO: Will, you know, you've got an impressive list of clients who work with your company. And I'm looking at the list online right now. Nvidia, Booz Allen Hamilton, Shell, Bosch, I mean, the list goes on and on. Who's incorporating what you're talking about among the clients? Are they doing what needs to be done?
WILL GRIFFIN: Wow. So I can't speak to specific clients, but I can tell you that 100% of our client work is vetted ethically. So this is how it works in our workflow. A client comes to us with a business or tech problem that they want to solve. Then our team gets together, whether it's data scientists, engineers, designers, or strategists, they all get together and come up with a menu of business and tech solutions.
Once we get that set of business and tech solutions, we vet it internally based on our ethical framework. So what the client gets back is a menu of options to pick from, options that we have already vetted. So we don't rely on our clients to adopt our frameworks in order to be ethical [AUDIO OUT] Because we vet our [AUDIO OUT] before we even deliver them back to the client.
Now that being said, ethics in AI is a big selling point for us. Because everybody is looking at all the value that's being lost in the market from companies that are having their AI solutions blow up on them in public. Imagine all the billions of dollars that IBM spent over the years on computer vision and facial recognition.
If you worked at Microsoft in facial recognition, you now face a moratorium on the work you've been doing for the last 10 years. It kills innovation when you have solutions on the market that negatively impact society and that you have to stop doing. So the first step is, we vet it on our end. So all of our solutions go back to our clients ready to go.
The second is, once we introduce our workflow with ethics embedded into it to our clients, I would say the majority of our clients also want us to come in and teach our top of mind ethics process, primarily to the CTO or the CIO within their engineering or information teams.
JULIE HYMAN: Will, we're glad that you guys are making this progress and working on this. Will Griffin is Hypergiant Chief Ethics Officer. Thank you so much, really appreciate it. We'll be right back.
WILL GRIFFIN: Thank you very much for the opportunity. Thank you.
JULIE HYMAN: Thanks.