Extreme Repercussions for Google as Big Names Pull Ads Due to Extremist Content

No one wants to be confronted by hate speech when they sit down to watch a video on YouTube. That bad taste is even more bitter for marketers when Google’s algorithms mistakenly pair an expensive advertising piece with content promoting ideas well outside a brand’s established image.

In February, the news broke that one of YouTube advertising’s greatest fears had become a reality: family-friendly brands were being paired, algorithmically, with not-so-family-friendly messages from extremist groups.

The Current Problem with YouTube Advertisements

Normally, Google’s algorithms work fairly well at weeding out content deemed “advertising inappropriate,” such as videos featuring violence, overt sexuality or hate speech.

However, it’s not a perfect system. With over 400 hours of video uploaded to YouTube every minute, the manpower required to review it all would be immense, which is why the search giant has turned to computers for the job.

The problem is that computers still can’t easily distinguish between a video from Al-Qaeda and one teasing an upcoming action movie. Inevitably, mistakes happen, like the ones uncovered earlier this year. For many brands, having even a fraction of a percent of their digital marketing matched with the wrong kinds of videos is too large a margin of error.

Google and Inappropriate Content 

In the past, Google has asked viewers to flag content they feel is inappropriate, even after the algorithms have attempted to make that determination themselves.

Issues arise when a video makes it through computerized scrutiny because it appears on a channel already deemed “advertising friendly”; that approval doesn’t mean the content actually hits the mark. Content creators with small followings, such as Britain First, which promotes an aggressive nationalist stance, may never be flagged by a human, because the people viewing the videos want to see them and the content rarely reaches anyone outside that mindset.

While no one wants their brand associated with these kinds of fringe ideas, it’s even worse when brands realize that a hefty portion of the money they paid for an advertising placement went toward funding hate speech on YouTube. So, as you might expect, many major companies, including AT&T and Johnson & Johnson, withdrew their ads in a collective YouTube boycott.

YouTube’s Machine Learning Solution

Since YouTube can’t exist without its advertisers, it was very quick to respond to this crisis.

The systems already trained to separate worthwhile content from the grossly inappropriate are getting smarter with the addition of machine-learning algorithms similar to those Google’s self-driving cars use to tell the difference between a person about to cross the street and one simply standing at a streetlight waiting for the light to change.

In essence, Google has given YouTube’s computers a new way to understand nuance: they work alongside a human team that reviews and corrects bad results, then feeds them back into the system to teach the algorithm about the inappropriate content it has been missing. YouTube marketing got a very sudden upgrade in response to an extreme situation, but it’s not all bad for advertisers.

Google promises that ads on YouTube will be paired with better content and that advertisers will gain more precise control over the sites and channels where their marketing appears. Default protections will also be tightened, according to a recent report in The Guardian.

Google’s Big Mistake 

New reporting by The New York Times seems to indicate that Google realizes just how big of a mistake they’ve made by allowing anyone with a computer to upload content to YouTube. “We take this as seriously as we’ve ever taken a problem,” Google’s chief business officer Philipp Schindler said in an interview with the publication. “We’ve been in emergency mode.”

Even with so much concern and these new safeguards in place, it will be difficult to know whether the problem at YouTube is truly solved. Unilever has chosen to stay, along with a few other advertisers, so it shouldn’t take long to see the effects of Google’s “emergency mode” efforts.

Although many marketers remain wary of the video platform, others indicate they will return now that Google can weed out advertising-inappropriate videos at a much faster rate: The New York Times reports that the machine learning-enhanced system has already flagged five times as many inappropriate videos as the prior algorithm.

 
