The decade-plus evolution of YouTube from repository of cat videos and pirated content to potential TV replacement hit a bump in the road last year when marketers discovered their advertisements were showing up next to extremist videos and other unsavory content.

Ever since, YouTube has been scrambling to come up with new policies that give advertisers more control over where their ads go and provide additional assurance that the videos on the online service are better screened.

On Tuesday in a blog post, YouTube said it had changed the threshold that determines which videos can carry advertisements and pledged more human oversight of its top-tier videos. If that sounds familiar, that’s because YouTube has made similar promises in the past.

“There’s no denying 2017 was a difficult year, with several issues affecting our community and our advertising partners,” Paul Muret, a Google vice president, wrote in the blog post. “We are passionate about protecting our users, advertisers and creators and making sure YouTube is not a place that can be co-opted by bad actors.”

The recent uproar over a video of a dead body hanging from a tree in a Japanese forest, posted by a YouTube star, demonstrated that policing the platform remains as challenging as ever.

The company said the YouTube Partner Program would now admit only video creators whose videos have garnered 4,000 “watch hours” over the last 12 months and who have at least 1,000 subscribers. YouTube said a four-minute video watched by more than 60,000 people would most likely surpass that watch hours threshold.

The new standard is an update to a rule, announced in April, that said only creators with more than 10,000 lifetime views on their videos would be able to collect advertising money. YouTube said a standard based solely on views did not filter out enough “bad actors” and that coupling the watch hours with subscribers would make it harder to game the system.
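The new standard amounts to a simple two-condition check. A minimal sketch in Python (the function name and inputs are illustrative, not YouTube's actual implementation):

```python
def eligible_for_ads(watch_hours_last_12_months: float, subscribers: int) -> bool:
    """Hypothetical version of the new YouTube Partner Program test:
    4,000 watch hours in the last 12 months AND at least 1,000 subscribers."""
    return watch_hours_last_12_months >= 4_000 and subscribers >= 1_000

# Sanity check on the article's example: a four-minute video watched by
# 60,000 people accrues 60,000 * 4 / 60 = 4,000 watch hours.
watch_hours = 60_000 * 4 / 60
print(watch_hours)                            # 4000.0
print(eligible_for_ads(watch_hours, 1_000))   # True

# By contrast, the earlier April rule keyed only on lifetime views,
# which is why a single viral video could clear it.
print(eligible_for_ads(watch_hours, 500))     # False: subscriber count matters too
```

Requiring both conditions is what makes the system harder to game: inflating raw views alone no longer suffices.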

The challenge for YouTube is to make sure advertisements don’t show up next to troublesome content without cutting off the revenue stream of smaller video makers whose niche content helps make YouTube different from mainstream television.

YouTube also pledged that humans would screen all videos from creators who are part of Google Preferred, which the company says is limited to the top 5 percent of all content on YouTube when measured by popularity and engagement. YouTube created the Google Preferred tier as a way to assure advertisers that they could place advertisements on the best YouTube content, while allowing creators to generate guaranteed revenue from videos.

However, the content on Google Preferred has not always been very palatable to advertisers. Over the last year, two of YouTube’s biggest stars, Logan Paul and PewDiePie, were dropped from Google Preferred for posting inappropriate videos.

In the blog post, YouTube said all of the Google Preferred videos in the United States would be vetted by humans by mid-February. The new monitoring system will extend globally by the end of March.

While Google has long argued that the volume of video content on YouTube makes it hard to rely solely on human screeners, computers have not mastered the nuances required to distinguish between appropriate and inappropriate content.

In December, YouTube said it planned to hire thousands of reviewers to screen videos on the site and remove inappropriate content.