Google Systems Learn to Distinguish Inappropriate Videos | Koeppel Direct



Following a massive glitch in Google’s YouTube advertising system, brands like AT&T and Johnson & Johnson have threatened to pull their spots instead of risking being placed alongside radical or inflammatory video content.

Google’s response to the debacle has been swift and interesting. Instead of asking workers to dig through thousands of hours of footage to cherry-pick the problem children, it’s employing computers to do this time-consuming job.

Google’s Computers are Learning Nuance

Using the same type of sophisticated machine learning employed in its self-driving cars, Google aims to teach its team of computerized content monitors to distinguish videos featuring extremist, sexual or violent footage from others that contain similar but benign imagery.

For example, the systems will be able to readily distinguish between a woman in a sports bra doing aerobics and one in lingerie. The nuance in these contexts is very important, but it’s also something that computers have failed at time and time again.

Google’s Plan is Different

Instead of simply feeding the computer systems a few images to start with, Google is employing a team of humans to work alongside the computers, teaching them exactly which videos to kick out and which are perfectly fine within their own context. The computers will check YouTube video imagery as well as the actual audio, the description and other indicators, learning to spot the patterns inherent to video content that advertisers find unacceptable.
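To make the idea concrete, here is a toy sketch of that multi-signal approach. Everything in it is invented for illustration: the signal names, the flagged terms, and the weights are assumptions, not Google's actual system, which relies on far more sophisticated machine-learned models.

```python
# Toy multi-signal scorer, invented for illustration only.
# Real systems use learned models, not keyword lists.

FLAGGED_TERMS = {"extremist", "graphic violence"}  # hypothetical label set


def score_video(frame_labels, transcript, description):
    """Combine evidence from imagery, audio transcript, and metadata.

    Each signal contributes a score in [0, 1]; the combined score is a
    weighted average, mirroring the idea that no single signal decides.
    """
    image_score = sum(
        1 for label in frame_labels if label in FLAGGED_TERMS
    ) / max(len(frame_labels), 1)
    text = f"{transcript} {description}".lower()
    text_score = sum(1 for term in FLAGGED_TERMS if term in text) / len(FLAGGED_TERMS)
    # Weight imagery more heavily than text metadata (arbitrary choice).
    return 0.6 * image_score + 0.4 * text_score


def is_advertiser_friendly(frame_labels, transcript, description, threshold=0.3):
    """Pass videos whose combined risk score stays under the threshold."""
    return score_video(frame_labels, transcript, description) < threshold
```

The point of the sketch is the structure, not the scoring: because imagery, audio and description each contribute evidence, a workout video and a lingerie video with similar frames can still be told apart by their other signals.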

Google has tried a similar trick before, when it taught a bank of computers to rate video content much the way movies are rated. Using lessons from that experiment, as well as an incredible amount of human feedback, new systems are being pushed to solve a different, but related, problem.

Currently, the computer's job is to flag videos it believes will be inappropriate for digital advertising spots. A human then reviews these selections, and the computer learns even more. For the few video spots that get through the screeners, users can manually flag them as inappropriate, and that data is fed back to the video-checking computer systems.
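That review loop can be sketched in a few lines. Again, this is a hypothetical illustration of the human-in-the-loop idea described above, not Google's implementation: the class, its weights and its update rule are all invented.

```python
# Toy human-in-the-loop classifier, invented for illustration.
# Reviewer verdicts nudge per-term weights toward the correct answer.

class FeedbackClassifier:
    def __init__(self):
        self.weights = {}  # term -> learned weight in [0, 1]

    def predict(self, terms):
        """Return True to flag the video as inappropriate."""
        return sum(self.weights.get(t, 0.0) for t in terms) > 0.5

    def learn(self, terms, reviewer_says_inappropriate):
        """Move each term's weight halfway toward the reviewer's verdict."""
        target = 1.0 if reviewer_says_inappropriate else 0.0
        for t in terms:
            w = self.weights.get(t, 0.0)
            self.weights[t] = w + 0.5 * (target - w)


clf = FeedbackClassifier()
clf.learn(["graphic", "violence"], True)   # reviewer confirms a flag
clf.learn(["aerobics"], False)             # reviewer overrides a false positive
```

Each confirmation or override, whether from a paid reviewer or a user's manual flag, becomes a training signal, which is why the system is described as learning "even more" with every pass.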

These new Google "employees" have gone a long way toward reassuring big brands like Unilever that placing ads only next to appropriate videos is YouTube's number-one goal.

