Spectrum Labs CEO Justin Davis joined Yahoo Finance Live to discuss how AI can help recognize and respond to revenue-reducing toxic behavior.
ADAM SHAPIRO: Let's move on to our next topic. You may not be aware of this, but today is safer internet day. It's about online content moderation and harassment. And our next guest knows a lot about this. Justin Davis is the CEO of Spectrum Labs. Justin, thank you for joining us.
The company actually prides itself on being able to help, quote, "consumer brands recognize and respond" to what you guys call revenue-reducing toxic behaviors like harassment, hate speech, and radicalization. I thought that kind of stuff was directed mostly at individuals. How does the company get involved with this?
JUSTIN DAVIS: So, from a platform perspective, every social platform, whether it's a social network, dating app, gaming company, marketplace, anywhere children can aggregate, has a set of community guidelines. They have to come up with some set of principles and guidelines that users agree to when they sign up for the platform, so users understand how to actually act and what's allowed and what's not allowed.
From there, those policies can be converted into a set of algorithms or models or rules that moderators can then use to determine what behaviors are allowed and what's not allowed, and how to go about enforcing those in a consistent and scalable manner.
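[Editor's note: the policy-to-rules pipeline Davis describes can be sketched in code. This is a minimal, hypothetical illustration of turning written guidelines into enforceable rules; the rule names, keyword lists, and actions are invented for the example, and real systems like Spectrum Labs' use ML models rather than keyword matching.]

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One enforceable rule derived from a written community guideline."""
    name: str            # which guideline this rule enforces
    banned_terms: list   # illustrative keyword list; production systems use ML models
    action: str          # e.g. "flag", "remove", "escalate"

def moderate(message: str, rules: list) -> list:
    """Apply every rule to a message and return the actions triggered."""
    text = message.lower()
    actions = []
    for rule in rules:
        if any(term in text for term in rule.banned_terms):
            actions.append((rule.name, rule.action))
    return actions

# Hypothetical policy converted into two rules.
rules = [
    Rule(name="harassment", banned_terms=["insult_a", "insult_b"], action="flag"),
    Rule(name="spam", banned_terms=["buy now!!!"], action="remove"),
]

print(moderate("Hey, buy now!!! limited offer", rules))  # [('spam', 'remove')]
```

Because every message passes through the same rule set, enforcement is consistent and scales with traffic, which is the point Davis makes about applying policy "on a consistent and scalable manner."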
SEANA SMITH: Hey, Justin, what's your reaction to how Facebook and how Twitter have handled misinformation on their sites? And if you were to give them advice and just in terms of one or two things that they could do to better address this, what would that be?
JUSTIN DAVIS: Yeah, consistency and transparency. Those are the two key themes that we see across any of the behaviors that we address here at Spectrum Labs, whether it's hate speech, cyberbullying, or sexual harassment, across text and voice and that sort of thing. But usually, it comes down to just transparency and communication and consistency in how you go about addressing those issues.
I think what's been frustrating for some users is that there's this tension or friction that exists between what users will allow and what they're willing to accept on any given social platform, not even just Twitter or Facebook, but anywhere across the internet. And that tension or friction may or may not be aligned with the rules and the policies that social platforms set in place.
And so, if there are any inconsistencies, or expectations that aren't met between users and the platform providers themselves, that's what causes a lot of this friction. Users don't necessarily know how to identify, react to, or flag this type of content, whether it's misinformation or hate speech. And so that becomes a problem for the platform and for the users, who don't have a true understanding of what's expected of them.
ADAM SHAPIRO: What about the Facebook move to remove even more anti-vax kind of COVID falsehoods? How do you weed out legitimate questions about a vaccine versus anti-vax kind of fake stuff? I was about to use a vulgar expression.
JUSTIN DAVIS: It's a great question. It's a data problem at the heart of it. And that's the way that we view it here at Spectrum Labs. But, you know, ultimately, context matters. You know, it's impossible for a company that sees billions or even trillions of data points-- search queries, comments, posts, forums, messages, usernames, voice content, images, memes, emojis. All these things are incredibly complex and may contain some element of, you know, hate speech or misinformation, whether it's anti-vax content or not.
And so in order to really get down to the heart of what that content is, first off, you've just got to set the policy. You've got to set the policy that says, hey, we don't allow this type of thing on our platform, and then come up with a set of examples so users can really identify what falls in or out of that category.
And then, look, as you create new features on the platform, or you think about creating a social platform, you really need to think about safety by design. And that isn't just the technology that you use to identify or remove that content. It's really a fundamental design philosophy that goes into how you think about educating users on what's real, what's not real, and what your responsibility is as a social platform.
And I would take it one step further. If you don't have the budget for these types of tools or these types of features or education to your users, then you might actually question if you have the budget at all for a social platform.
ADAM SHAPIRO: Justin, we appreciate your being here. Justin Davis is the CEO of Spectrum Labs.