Bloomberg Technology reports that Google is going to get much stricter when vetting YouTube channels in its Preferred premium advertising program. The new push is meant to address major concerns about big brand messages being shown alongside inappropriate content on YouTube.
Multiple anonymous sources report to Bloomberg that Google will be using both human moderators and machine learning to find and vet videos that don’t belong in Preferred bundles.
Google’s push for stricter vetting has intensified after advertisers expressed concerns over inappropriate YouTube videos targeting children, and after recent highly inappropriate behavior from YouTube stars, including Logan Paul, who landed in very deep water for tastelessly uploading a video of a dead body he found in Japan’s Aokigahara forest. The video was so offensive that it got him kicked off the Preferred platform.
For a clearer picture of which channels are in Preferred, Google describes it as a collection of “the most popular YouTube channels among US 18- to 34-year-olds” and “the most engaging and brand safe content on YouTube,” organized into categories like pop culture, recipes, and fashion.
Google announced last month that it was expanding its staff of moderators to 10,000 people to strengthen its human moderation capabilities.
A spokesperson for YouTube told The Verge and Polygon that “we built Google Preferred to help our customers easily reach YouTube’s most passionate audiences and we’ve seen strong traction in the last year with a record number of brands. As we said recently, we are discussing and seeking feedback from our brand partners on ways to offer them even more assurances for what they buy in the Upfronts.” For context, the Upfronts are an annual sales process in which brands commit to buying certain marketing spots ahead of time.
These developments point to YouTube's continued struggle to keep creators happy while also appealing to brands.