More policing tools for both sides of the aisle.

Last week, the UK government halted taxpayer-funded advertising on YouTube and Google after some of its ads appeared alongside extremist content. Today, as more companies pulled advertising from its platforms, Google responded. Chief Business Officer Philipp Schindler explained in a blog post how the company will revamp its advertising policies to give companies more control over where their ads appear on YouTube and the Google Display Network. The post also signals a new era for Google and YouTube, one in which the company will put more effort into curbing hate speech on its video platform.

Schindler outlines three ways Google will be tweaking its ad policy. First, the company will remove ads from "hateful, offensive and derogatory content," or any content that's geared toward "attacking or harassing people based on their race, religion, gender or similar categories." This takes effect immediately, and it means Google will effectively demonetize any extremist content it can find on its platform.

Second, Google will do more to ensure ads show up on content made by creators in its YouTube Partner Program, not on channels that impersonate or exploit those legitimate members.

Third, the YouTube team will be closely monitoring the content that actually makes it to YouTube while reconsidering "community guidelines to determine what content is allowed on the platform—not just what content can be monetized."

For YouTube creators, the second and third changes make for a bittersweet combination: creators who make a living from AdSense money will likely benefit from YouTube policing fraudulent accounts, but more controversial creators may have to adjust their content depending on how YouTube revises its community guidelines.

Schindler's blog post also outlines new tools that will help advertisers control where their ads appear. Arguably the most interesting of these is a new default setting that excludes "potentially objectionable" content advertisers may not want to be associated with. In other words, there will be content that advertisers won't even have the option to place their ads against by default, though they can opt in to advertise on "broader types of content." The blog post doesn't further define "potentially objectionable" content, but we'll likely get a better picture as YouTube refines its guidelines and decides what it considers hate speech.

This is an even stronger attempt to put out the fire that started last week when the UK government pulled its ads from YouTube. Google provided a statement immediately after that incident, but it clearly wasn't enough to calm advertisers' nerves. Automated digital ad sales have made it easier for companies to push ads across various platforms, but they have also made it easier for those ads to appear over content the advertiser doesn't want to be aligned with. Google's solution is twofold: give companies more tools to essentially police their own advertising, and police the content that appears across Google-owned platforms, particularly YouTube, more thoroughly.

Until these new tools and rules are in effect, we won't know how this move will affect YouTube creators specifically, but in general, we will likely start to hear about videos being demonetized more often than ever before. YouTube is not known for being very open with its creators about changes: over the past year, many top creators have expressed frustration about old videos being demonetized for seemingly no reason, an apparent change in the YouTube algorithm, and glitches that caused mysterious and significant drops in subscriber counts. While changes to the YouTube community guidelines will likely be disclosed (they're essentially the rulebook all YouTube creators have to follow), there will inevitably be confusion over what content is allowed to thrive (make money) under the new ad guidelines and what content will be deemed unfavorable.