YouTube Seeks to Hire More Moderators to Protect Children

Last month, a rather strange conspiracy theory began to dominate discussion in niche communities on Reddit and other social media forums. Dubbed “ElsaGate,” the phenomenon saw many users take note of and begin to investigate a bizarre genre of YouTube videos marketed as child- or family-friendly. The videos, some animated and others featuring masked and costumed live actors, made use of recognizable characters like Frozen’s Elsa and Marvel Comics hero Spider-Man, but the content often bordered on the offensive, relying on gross-out humor and other material unsuitable for young viewers.

While many users speculated about the purpose of the videos, they were most likely designed to grab the attention of child viewers and rack up easy ad revenue. More alarmingly, as users dug into these channels, they came across a more disturbing genre of videos depicting real children in seemingly exploitative and inappropriate scenarios. While not technically illegal, the videos contained disturbing situations like mock abductions and footage of bound or restrained children. Some of these videos appeared on YouTube Kids, an app specifically designed for children to browse and view YouTube content. Advertisers were quick to respond to the scandal, with both candy maker Mars (the company behind M&M’s) and drinks maker Diageo immediately pulling their ads from the site.

A large part of the problem is that, to date, YouTube has mostly depended on automated monitoring to flag inappropriate content. The video-sharing site is now changing its policy and has announced plans to hire human content moderators to actively browse and flag offending content. Google, YouTube’s parent company, announced it plans to hire 10,000 such moderators by 2018, a 25 percent increase over the number of moderators the company currently employs.

Susan Wojcicki, YouTube’s chief executive, said the site’s goal is to “stay one step ahead of bad actors” in order to combat the offending content. It’s an admirable move by the site, which has little legal obligation to police content. While websites like YouTube are required to remove outright illegal content such as child pornography, the Communications Decency Act shields them from liability for most other types of user-posted content. YouTube’s aggressive policy is being mirrored by Facebook, which is seeking its own pool of 10,000 new moderators to help police content across its platform.