How AI plays a role in finding hate speech online

Spectrum Labs CEO Justin Davis joined Yahoo Finance Live to discuss how AI can help recognize and respond to revenue-reducing toxic behavior.

Video Transcript

ADAM SHAPIRO: Let's move on to our next topic. You may not be aware of this, but today is Safer Internet Day. It's about online content moderation and harassment. And our next guest knows a lot about this. Justin Davis is the CEO of Spectrum Labs. Justin, thank you for joining us.

The company actually prides itself on being able to help, quote, "consumer brands recognize and respond" to what you guys call revenue-reducing toxic behaviors like harassment, hate speech, and radicalization. I thought that kind of stuff was directed mostly at individuals. How does the company get involved with this?

JUSTIN DAVIS: So, from a platform perspective, every social platform, whether it's a social network, dating app, gaming company, or marketplace, anywhere children can aggregate, has to come up with a set of community guidelines: principles that users agree to when they sign up for the platform and that spell out how to act, what's allowed, and what's not allowed.

From there, those policies can be converted into a set of algorithms, models, or rules that moderators can use to determine which behaviors are allowed and which aren't, and to enforce the guidelines in a consistent, scalable manner.
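
For readers curious what that policy-to-enforcement conversion can look like, here is a minimal sketch. The policy labels, keyword patterns, and actions below are hypothetical stand-ins for illustration; they are not Spectrum Labs' actual rules or models.

```python
# Hypothetical sketch of a policy-to-enforcement pipeline. A written community
# guideline becomes a machine-checkable rule, and each rule maps to an action.
# The labels, patterns, and actions here are illustrative, not a real system.
from dataclasses import dataclass
import re

@dataclass
class PolicyRule:
    label: str    # policy category, e.g. "harassment"
    pattern: str  # toy regex standing in for a trained model
    action: str   # what moderation does on a match

RULES = [
    PolicyRule("harassment", r"\b(idiot|loser)\b", "warn_user"),
    PolicyRule("hate_speech", r"\b(slur_placeholder)\b", "remove_and_ban"),
]

def moderate(message: str) -> list[tuple[str, str]]:
    """Return (policy_label, action) for every rule the message violates."""
    hits = []
    for rule in RULES:
        if re.search(rule.pattern, message, flags=re.IGNORECASE):
            hits.append((rule.label, rule.action))
    return hits

print(moderate("you are such a loser"))  # [('harassment', 'warn_user')]
```

In practice, as Davis notes below, the pattern-matching step would be replaced by models trained on labeled examples, but the mapping from written policy to enforceable check is the same.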

SEANA SMITH: Hey, Justin, what's your reaction to how Facebook and Twitter have handled misinformation on their sites? And if you were to give them advice, just one or two things that they could do to better address this, what would that be?

JUSTIN DAVIS: Yeah, consistency and transparency. Those are the two key themes that we see across all of the behaviors we address here at Spectrum Labs, whether it's hate speech, cyberbullying, or sexual harassment, across text, voice, and so on. But usually, it comes down to transparency and communication, and consistency in how you go about addressing those issues.

I think what's been frustrating for some users is that there's this tension or friction between what users will allow and what they're willing to accept on any given social platform, not just Twitter or Facebook, but anywhere across the internet. And that tension may or may not be aligned with the rules and policies that the platform sets in place.

And so, if there are inconsistencies, or expectations that aren't met, whether on the users' side or the platform's, that's what causes a lot of this friction. Users don't necessarily know how to identify, react to, or flag this type of content, whether it's misinformation or hate speech. And that becomes a problem for the platform and for the users, who don't have a true understanding of what's expected of them.

ADAM SHAPIRO: What about the Facebook move to remove even more anti-vax kind of COVID falsehoods? How do you weed out legitimate questions about a vaccine versus anti-vax-- kind of fake stuff? I was about to use a vulgar expression.

JUSTIN DAVIS: It's a great question. At heart, it's a data problem, and that's the way we view it here at Spectrum Labs. But ultimately, context matters. It's impossible for a company that sees billions or even trillions of data points, search queries, comments, posts, forums, messages, usernames, voice content, images, memes, emojis, to review all of that manually. All these things are incredibly complex, and any of them can contain some element of hate speech or misinformation, whether it's from anti-vaxxers or not.

And so in order to really get to the heart of what that content is, first off, you've got to set the policy. You've got to set a policy that says, hey, we don't allow this type of thing on our platform, and then come up with a set of examples that clearly fall in or out of that category, so users can recognize them.
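
To make that step concrete, here is a hypothetical sketch of how a handful of in-category and out-of-category policy examples can be turned into a simple text classifier. The tiny dataset, the category (vaccine misinformation), and the model choice are all illustrative assumptions, not Spectrum Labs' actual data or approach.

```python
# Hypothetical sketch: labeled policy examples become a trained classifier.
# The four-example dataset and simple model are for illustration only; a real
# platform would use far more data and far larger models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = falls in the category (violates policy), 0 = falls out (allowed).
examples = [
    ("the vaccine contains microchips", 1),
    ("vaccines cause the disease they prevent", 1),
    ("what are the side effects of this vaccine?", 0),
    ("where can I get vaccinated?", 0),
]
texts, labels = zip(*examples)

# TF-IDF features plus logistic regression: a deliberately simple stand-in.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A legitimate question should score as allowed (0) rather than a violation.
print(model.predict(["is this vaccine safe for children?"]))
```

The hard part Davis describes, weeding out legitimate questions from falsehoods, lives in the quality and coverage of those labeled examples, not in the model code itself.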

And then, look, as you create new features on the platform, or as you think about creating a social platform, you really need to think about safety by design. And that isn't just the technology you use to identify and remove that content. It's a fundamental design philosophy that goes into how you educate users on what's real, what's not real, and what your responsibility is as a social platform.

And I would take it one step further. If you don't have the budget for these types of tools, features, and user education, then you might question whether you have the budget for a social platform at all.

ADAM SHAPIRO: Justin, we appreciate your being here. Justin Davis is the CEO of Spectrum Labs.