
TikTok Addresses Content Moderation Concerns Amid Steady Growth

March 10, 2020 | Video Moderation

 

We’ve discussed how social media giants Facebook and Twitter, platforms that date back to the mid-2000s, have made recent policy changes to address the concerns of the U.S. government, as well as those of anyone pushing for a safer online community.

TikTok, an incredibly popular app among teens that lets users create short videos often paired with music, joined the club at the beginning of the year with its own new set of rules. The social media app serves as a particularly interesting case study in that it is owned by a Chinese company, raising questions about who makes and, ultimately, enforces these rules.

A Meteoric Rise

A relative latecomer among its social media competitors, TikTok arrived on the scene in September 2017; soon after, its parent company, ByteDance, bought Musical.ly. The acquisition gave TikTok a head start with the teenage market in the U.S., and by February 2019 the app had passed the one-billion-downloads milestone. By the end of its third year, TikTok had reached an active-user count that took Facebook four years and Instagram six to achieve.

Although many predicted that TikTok’s growth would slow significantly this year as a result of ongoing investigations, the app hit 104.7 million downloads in January, a 46% increase over January 2019, making it the most downloaded non-gaming app in the world. Its presence on the world stage should not be underestimated.

India leads the world with 34.4% of TikTok’s January downloads (even though the Indian government briefly banned the app last year), followed by China and then the U.S. Just last month, Brazil alone accounted for 10.4% of downloads.

This growth is arguably what prompted the following updates to TikTok’s guidelines.

The New Rules

In January of this year, TikTok tightened its rules on which videos are permitted and which are prohibited. The following is an abridged list of the categories of videos that will no longer be allowed, including content that:

  • glorifies terrorism
  • shows illegal drug use
  • shows violent, graphic or dangerous content
  • spreads misinformation intended to deceive voters
  • shows “drinking liquids or eating substances not meant for consumption”
  • depicts sexual arousal or nudity (potential exceptions for educational or artistic purposes)
  • includes slurs (potential exceptions when used self-referentially or in lyrics)
  • promotes “hateful ideologies”

The update would appear to address the content moderation concerns of users wanting a safer, friendlier platform, but others are worried about what else is being censored.

A Question of Transparency and Cybersecurity

While the company has claimed that “its U.S. operation doesn’t censor political content or take instructions from its parent company” and that no moderators for the U.S. platform are based in China, former American employees have said otherwise.

Last November, former TikTok employees told the Washington Post that Beijing-based moderators were in fact the ones making the final call on flagged content. The sources also claimed that Chinese teams ignored them when they argued against removals, including removals of videos typically considered safe in the U.S. but deemed subversive by the Chinese government, such as heavy kissing or heated political debate.

About a month later, TikTok’s U.S. General Manager, Vanessa Pappas, released a statement in an effort to allay those fears, pledging that the company would:

  • Create a committee of outside experts to advise on and review content moderation policies covering a wide range of topics, including child safety, hate speech, misinformation, bullying, and other potential issues.
  • Further increase transparency around our content moderation policies and the practices we employ to protect our community.
  • Build out an even deeper bench of internal leaders so that we are well prepared to tackle the challenges that our continued rapid expansion will bring.

In spite of the plan to give American managers more control, it should come as no surprise that the United States government is displeased, to say the least, about a Chinese company controlling the content of the most downloaded non-gaming app in the world. In fact, the Army, the Navy, and most recently the TSA have banned the app from government-issued phones, calling it a cybersecurity threat.

TikTok released its first “transparency report” early this year. The report listed legal requests for user information and government requests to take down content, but it did not include reports made by users, an omission that may help explain the U.S. government’s lingering concerns.

Much Remains Unanswered

TikTok’s updated guidelines suggest that the company is receptive to change, even though many feel there is still a lot of work to be done and questions to be answered: Is TikTok as independent of the Chinese government as it claims to be? Will the company keep up the pace as it continues to address concerns from its users? And what processes will it put in place, if any, to achieve transparency in a way that satisfies the U.S. government?
