The difference between the protections YouTube offers its advertisers and those it provides consumers is stark. A read of the advertiser-friendly content guidelines for videos uploaded to the platform, last updated in June 2019, shows the company has rigorous standards to protect advertisers from harmful content, and the algorithmic ability to enforce them. Yet YouTube does not consistently flag these same videos as problematic to viewers, despite ongoing criticism that the platform allows conspiracy theories, the alt-right, and extremism to flourish.
“I’m extremely disturbed,” said Nandini Jammi, co-founder of Sleeping Giants, which campaigns to stop companies from advertising on unethical websites. “They have multiple standards. It’s like they’re two-timing their users. Content not safe for brands is likely not safe for users either.”
Historically, in both print and digital publishing, advertisers have chosen the content alongside which they don’t want to appear. YouTube goes a step further, informing all content partners when certain videos are deemed unsuitable for ads. This means advertisers needn’t worry about appearing in connection with such videos.
Hate speech for advertisers versus the community
YouTube provides video makers and uploaders with a non-exhaustive list of topics that aren’t considered “advertiser friendly,” including hateful content, incendiary and demeaning videos, violence, and harmful or dangerous acts. For example, the list of “harmful content” includes:
When it comes to prohibiting hate speech on YouTube in general, the company has a looser set of community guidelines, with a slightly narrower definition of hate speech than its advertising guidelines. For example, while the community guidelines prohibit explicit statements that groups are “physically or mentally inferior” or “subhuman,” the advertiser-friendly guidance is broader, saying ads will be restricted next to “content that encourages others to believe that a person or group is inhuman, inferior, or worthy of being hated.” The advertising guidelines also say ads will be limited or won’t appear next to content promoting hate groups, whereas the community guidelines only prohibit videos that explicitly contain “hateful supremacist propaganda.”
In practice, the community guidelines are often less strictly enforced; YouTube has a “bias toward free expression as far as what we allow on YouTube,” spokesperson Alex Joseph wrote in an email to Quartz. The gap in guidelines plays out in videos that seem to encourage discrimination being allowed to remain on the platform, only without ads. Channels run by alt-right figures including Steven Crowder, Nick Fuentes, and Stefan Molyneux have been deemed unsuitable for ads even as they’ve racked up huge followings.
A gap in policies for harmful health information
A similar difference between advertising and community guidelines is evident in YouTube’s policies on misleading health information. Its advertiser-friendly guidelines read, in part:
