Before Facebook's data scandal, the biggest threat to big tech was the concern over companies unwittingly hosting illicit content, and even running ads alongside it.
Google's YouTube announced late last year that it had stepped up its efforts to remove controversial content. While some have raised concerns about censorship (which, frankly, is Google's prerogative regardless), the only time YouTube's new policies made the news was in the wake of the YouTube attacker, when we learned that YouTube had begun taking down videos she'd posted that violated current community standards.
So, what's happened with the policy changes?
According to Google, in the fourth quarter it took down more than 8 million videos for violations of the new community standards. That's an average of nearly 90,000 per day, if you're keeping score... So how are these videos flagged? 6.7 million were identified by automated flagging systems, with the balance being removed after reports from people.
Google said that thanks to the automated flagging, 76% of the videos had no views at the time they were taken down. While censorship concerns along political lines will still likely continue, it's notable that the videos taken down included:
- Nazi propaganda
- Pro-pedophilia material
- Terrorist messaging
- How-tos for weapon/bomb making
In addition to what Google's already done, they've announced that they're adding 10,000 employees by year end specifically to police YouTube content. They've also announced a new tool you can use to check on the status of videos you report for potential content violations: a "Reporting History" option in your YouTube account that lets you review what's happened with the videos you've flagged to the company.
I still think Google is potentially the next shoe to drop with a data scandal, and anytime you're removing content there'll be some degree of controversy. But it's clear that Google has taken this initiative to "clean up" YouTube seriously, and at least in theory the platform is a bit safer than it used to be.