YouTube is in the hot seat again for child safety shortcomings.

Last week, The Washington Post revealed that YouTube was in the late stages of a Federal Trade Commission investigation, launched after consumer groups complained that the site mishandled children’s privacy. The issue refuses to go away, thanks to repeated instances of kids being shown harmful content (suicide instructions, comments from child predators, violent parodies of popular cartoons) even on YouTube’s supposedly kid-friendly offshoot, YouTube Kids.

Google’s CEO has said that YouTube’s problem is too big to fix, but this isn’t just a YouTube problem. It’s an Internet problem. The prominence of social media and streaming services, and the blurring line between the two, calls for a reevaluation of children’s internet and TV regulation.

Fixing YouTube

YouTube, owned by Google’s parent Alphabet Inc., reportedly has several potential fixes on the table, but they each come with their own set of issues. 

One is to completely separate YouTube Kids from YouTube. This would be a major change for YouTube, and not just because of the exorbitant infrastructure costs that would come with creating a completely different product. 

Kid-centric channels are some of the biggest money makers on the platform. Eliminating them from the second-most popular site in the world would be a big blow to YouTube’s bottom line. 

Take Cocomelon, for example. The channel, which features animated nursery rhymes, is the fourth-largest on all of YouTube, with around 52 million subscribers. It makes an estimated $614,000 to $9.8 million in ad revenue a month, depending on its CPM (cost per mille, the ad price per thousand monetized views), which is likely on the higher end given its popularity. YouTube takes a 45% cut of that revenue.
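To see where a range that wide comes from, here is a rough back-of-envelope sketch of the CPM math in Python. The view count and CPM figures are hypothetical illustrations, not Cocomelon’s actual numbers; the only fact carried over from above is YouTube’s 45% revenue share.

```python
# Back-of-envelope CPM arithmetic. All inputs are hypothetical.
def monthly_ad_revenue(monthly_views: int, cpm_usd: float) -> dict:
    """Estimate gross monthly ad revenue and the creator/YouTube split.

    CPM (cost per mille) is the ad price per 1,000 monetized views.
    YouTube keeps 45% of ad revenue; the creator keeps the remaining 55%.
    """
    gross = monthly_views / 1000 * cpm_usd
    return {
        "gross": gross,
        "youtube_cut": gross * 0.45,    # platform's share
        "creator_share": gross * 0.55,  # channel's share
    }

# Hypothetical: 2.5 billion monthly views at a $0.25 vs. $4.00 effective CPM
# spans roughly the $600K-$10M range quoted above.
for cpm in (0.25, 4.00):
    est = monthly_ad_revenue(2_500_000_000, cpm)
    print(f"CPM ${cpm:.2f}: ${est['gross']:,.0f} gross, "
          f"${est['creator_share']:,.0f} to the creator")
```

The order-of-magnitude spread in the estimate is driven almost entirely by the CPM assumption, which is why public revenue estimates for a single channel vary so wildly.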

The other potential solution is turning off the autoplay feature on children’s videos. YouTube recommends videos that align with a user’s activity and automatically plays them. A casual YouTube browse so often leads viewers down bizarre paths that YouTuber Jenna Marbles turned “deep dives” on the site into a popular video series.

Turning off this feature seems like a good idea at first. But aside from the fact that it’s nearly impossible to determine which user-generated material counts as “children’s content,” this too would cut into the ad revenue that autoplay attracts. Popular music video platform Vevo, for example, told Variety in April that 75% of its YouTube views come from algorithmic recommendations; only 6% come from search.

The costs associated with either solution present YouTube with a prickly choice: add safety features and sacrifice revenue, or do nothing and wait for regulators to act.

This Isn’t Just YouTube’s Problem

YouTube might be the biggest streaming company, but it’s hardly the only one with these issues. Social media companies and streaming services designed for high engagement inevitably breed hazards for young users.

In some ways, you can’t really blame the companies because their platforms weren’t created with kids in mind. Snapchat started as a discreet way for people to send potentially lewd or profane images, Facebook was originally a site to rank the attractiveness of female classmates on Harvard’s campus, and one of YouTube’s co-founders has admitted that he came up with the idea for the site after struggling to find clips online of Janet Jackson’s infamous Super Bowl wardrobe malfunction.

They can, however, be blamed for feigning ignorance while children and preteens actively use their sites despite age restrictions.

None of the five most popular social media platforms among U.S. teens — YouTube, Instagram, Snapchat, Facebook and Twitter — technically allows users under the age of 13. This is how they get around the Children’s Online Privacy Protection Act (COPPA), which prohibits gathering data from children under that age. But a child can easily enter a made-up birthdate that makes them “legal” and log in.
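The weakness is easy to see in code. Below is a minimal sketch of the kind of self-reported age gate these sign-up flows rely on; it is a hypothetical illustration, not any platform’s actual implementation.

```python
from datetime import date

COPPA_MIN_AGE = 13  # COPPA restricts data collection from children under 13

def passes_age_gate(claimed_birthdate: date) -> bool:
    """Return True if the self-reported birthdate makes the user 13 or older."""
    today = date.today()
    age = today.year - claimed_birthdate.year - (
        (today.month, today.day) < (claimed_birthdate.month, claimed_birthdate.day)
    )
    return age >= COPPA_MIN_AGE

# Nothing verifies the claim: a 10-year-old who types a 1990 birthdate sails through.
print(passes_age_gate(date(1990, 1, 1)))  # True
```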

Streaming services aren’t immune either. They were built to be entertainment destinations, and they began investing in kids’ content once they realized it was good business. While most offer parental controls and separate kid-friendly profiles, it’s not hard for today’s iPhone-toting kiddos to navigate their way into an adult’s profile. On top of that, streaming services aren’t subject to the same regulations as linear broadcast or cable TV: they have no educational-programming requirements, they aren’t bound by the same advertising rules, and they aren’t required to disclose age ratings.

The Internet’s Kid Protections Are Stuck in the Past

Company policies and federal laws have not kept up with the evolving Internet. The two main pieces of U.S. legislation that regulate how children interact with media — COPPA and the Children’s Television Act — are wildly outdated.

COPPA was passed in 1998, nearly a decade before the iPhone, YouTube, Twitter, or Facebook were created. The Children’s Television Act became law in 1990, when Netflix was still 17 years away from streaming content.

COPPA regulates only how children’s data is managed, not the type of content they encounter online. And the Children’s Television Act pertains only to broadcast and cable television, primarily by mandating an amount of educational programming that must be carried to maintain a broadcast license. Between the two sits a blind spot: no law governs the content kids actually encounter online.

Some in Washington are beginning to take the issue more seriously. Tech regulation has proved to be a key issue in the 2020 election race. Massachusetts Senator Ed Markey, who authored the original Children’s Television Act in 1990, is now taking on COPPA. He has proposed a bipartisan update that would extend data-tracking protections to children ages 15 and under, let parents erase their child’s personal information, and set up a Youth Privacy Marketing Division at the FTC to oversee minors’ privacy and marketing targeted at kids.

Regulating content is more difficult than regulating privacy; past attempts have been struck down on free-speech grounds. Still, there are steps that could be taken to ensure kids encounter age-appropriate content without limiting free speech.

Streaming services could face educational-programming requirements that mirror those for broadcast and cable TV, and video-driven social media sites could bulk up their parental controls to help parents manage the content their tech-savvy toddlers can get their hands on. And even as YouTube clings to its creator-centric ethos, human curation of content targeted at little ones (97% of whom have watched at least one YouTube video) should be considered as an option.

At the end of the day, Internet giants like YouTube need to decide whether a fat bottom line is worth jeopardizing the emotional and mental well-being of the next generation. Whether that will be a voluntary corporate decision or a move forced by regulation remains to be seen.