YouTube adopts new policies to move more decisively against rogue creators
Google’s YouTube adopted new penalties against creators who post disturbing or violent videos, detailed in a blog post published the same day the company temporarily pulled ads from vlogger Logan Paul’s channels. The measures include removing a channel from Google Preferred and YouTube Originals, suspending advertising on creators’ channels and removing a video’s eligibility to be recommended.
"In the past, we felt our responses to some of these situations were slow and didn’t always address our broader community’s concerns," wrote Ariel Bardin, VP of product management at YouTube, in the blog post. "Our ultimate goal here is to streamline our response so we can make better, faster decisions and communicate them clearly."
While YouTube has been ramping up efforts to monitor and block objectionable content over the past year — efforts that have had some success in winning back advertisers concerned about brand safety — the platform is acknowledging that it needs to move more quickly when objectionable content appears. The move is likely an attempt to build confidence among viewers and advertisers that it can keep creators in check. P&G Chief Brand Officer Marc Pritchard recently said the marketer has not yet returned to YouTube following last year's brand safety issues, but that he is hopeful it will do so eventually.
YouTube suspended all ads from Paul’s channels because of a pattern of behavior that made his content “unsuitable for brands,” per Variety. The actor and vlogger fired a Taser at a dead rat in a Feb. 5 video, joked about taking the “Tide Pod challenge” of eating detergent capsules and posted a video from a trip to Japan showing the body of an apparent suicide victim.
YouTube has had a tumultuous year since advertisers started pulling ads from the video-sharing platform over concerns that their commercials were appearing alongside extremist hate-speech videos. PepsiCo, Walmart, Dish Network, Starbucks, General Motors and FX Networks were among the more than 250 brands that suspended advertising on YouTube or Google. At the time, racist videos with the “n” word in the title were shown with ads from Coca-Cola, Starbucks, Toyota, Dish Network and Geico, per The Wall Street Journal.
While YouTube, Facebook and others have been taking steps to ensure the quality of content on their platforms, the brand safety issue does not appear to be over. Unilever is expected to warn today that it will pull back from advertising on popular tech platforms, including YouTube and Facebook, if they don’t do more to combat the spread of fake news, hate speech and divisive content, per a separate report in The Wall Street Journal. Unilever Chief Marketing Officer Keith Weed is expected to deliver the warning during the Interactive Advertising Bureau’s annual leadership meeting. The company spent more than $9 billion last year to market brands such as Lipton, Dove and Knorr.
The number of brands participating in Google Preferred grew 30% in the U.S. between the second quarters of 2016 and 2017, per MediaRadar data cited by Adweek. By June, the number of brands advertising in Google Preferred had more than doubled from the prior January — before the brand safety concerns escalated — reaching 508.
- YouTube: Preventing Harm to the Broader YouTube Community
- The Wall Street Journal: Google’s YouTube Has Continued Showing Brands’ Ads With Racist and Other Objectionable Videos
- Variety: YouTube Suspends All Ads on Logan Paul’s Channels
- The Wall Street Journal: Unilever Threatens to Reduce Ad Spending on Tech Platforms That Don’t Combat Divisive Content