
Published on October 31, 2019

This Twitter Rules enforcement report provides an overview of how and when we enforce our content policies.

Twitter's purpose is to serve the public conversation. We welcome everyone to share their unique point of view on Twitter, but there are some behaviors that discourage others from expressing themselves or place people at risk of offline harm. The Twitter Rules exist to help ensure that all people can participate in the public conversation freely and safely, and include specific policies that explain the types of content and behavior that are prohibited. We are deeply committed to improving the health of the public conversation and strive to enforce our Rules consistently.

The Twitter Rules (along with all incorporated policies), Privacy Policy, and Terms of Service (TOS) collectively make up the "Twitter User Agreement" that governs a user's access to and use of Twitter's services.

All individuals accessing or using Twitter’s services must adhere to the policies set forth in the Twitter Rules. Failure to do so may result in Twitter taking one or more enforcement actions, such as:

  • temporarily limiting your ability to create posts or interact with other Twitter accounts;

  • requiring you to remove prohibited content before you can create new posts and interact with other Twitter accounts;

  • asking you to verify account ownership by providing a phone number or email address; or

  • permanently suspending your account(s).

The Twitter Rules enforcement section includes information about the enforcement of the following Twitter Rules categories: abuse, child sexual exploitation (CSE), hateful conduct, private information, sensitive media, violent threats, and impersonation. This is the first time that information about our impersonation policy enforcement has been included in the report.

We support the spirit of the Santa Clara Principles on Transparency and Accountability in Content Moderation, and are committed to sharing more detailed information about how we enforce the Twitter Rules in future reports.

Unique Accounts Reported

About the numbers

Content on Twitter is generally flagged for review for possible Twitter Rules violations through our Help Center or in-app reporting. We have a global team that manages enforcement of our Rules with 24/7 coverage in every supported language on Twitter. Our goal is to apply the Twitter Rules objectively and consistently. 

Across the seven Twitter Rules policy categories included in this report, 15,638,349 unique accounts were reported for possible violations of the Twitter Rules, a 42% increase compared to the prior reporting period. 7,760 of these accounts were reported by known government entities, compared to 6,388 during the last reporting period, an increase of 21%.
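The period-over-period percentages quoted throughout this report follow the standard percent-change calculation. A minimal sketch of that arithmetic in Python, using the government-entity figures above (the function name is ours, for illustration only):

```python
def percent_change(current: int, prior: int) -> float:
    """Percent change from the prior reporting period to the current one."""
    return (current - prior) / prior * 100

# Accounts reported by known government entities: 7,760 now vs. 6,388 before.
print(round(percent_change(7_760, 6_388)))  # 21, i.e., a 21% increase
```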

During our review process, we may consider whether reported content violates aspects of the Twitter Rules beyond what was initially reported. For example, content reported as a violation of our private information policy may be in violation of our policies on hateful conduct. If reported content is determined to violate any Twitter Rule during the review process, it is actioned accordingly. 

We may also determine that reported content does not violate the Rules at all. As a result, the Unique Accounts Reported figures per policy category above do not necessarily fall within the Unique Accounts Actioned dataset below.

Unique Accounts Actioned

About the numbers

We use the term ‘action’ to refer to our range of enforcement actions, which includes possible account suspension. During this reporting period, we actioned 1,254,226 unique accounts for violations of the included Twitter Rules categories, a 105% increase since the last reporting period. This increase may be attributable to a number of factors, including our increased focus on proactively surfacing potentially violating content for human review, the 42% increase in the number of reports received, and the inclusion of impersonation data for the first time.

1,791 of the unique reported accounts found to be in violation of the Twitter Rules were reported by known government entities, compared to 1,601 reported and actioned during the last reporting period, a 12% increase.

Across the seven Twitter Rules categories included in this report, we actioned 395,917 accounts under abuse policies, 584,429 accounts under hateful conduct policies, 43,536 under sensitive media policies, 30,107 under CSE policies, 124,339 under impersonation policies, 19,679 under private information policies, and 56,219 under violent threats policies.
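As a consistency check, the seven per-category figures above sum to the 1,254,226 total quoted earlier. A short illustrative snippet (the dictionary labels are our own):

```python
accounts_actioned = {
    "abuse": 395_917,
    "hateful conduct": 584_429,
    "sensitive media": 43_536,
    "CSE": 30_107,
    "impersonation": 124_339,
    "private information": 19_679,
    "violent threats": 56_219,
}
assert sum(accounts_actioned.values()) == 1_254_226  # matches the reported total
```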

During the review process, we will take action on any identified violation of the Twitter Rules, regardless of the original reporting reason. For example, content that was reported for abuse could ultimately be actioned under our violent threats policy. We may also determine that reported content does not violate the Rules at all. As a result, the Unique Accounts Actioned do not necessarily fall within the Unique Accounts Reported dataset above.

More information on our approach to policy development and enforcement can be found here, and information about enforcement of each of these Twitter Rules categories is detailed below.

Abuse policies enforcement

We define abusive behavior as an attempt to harass, intimidate, or silence someone else’s voice. Some examples of abusive behavior include wishing or hoping serious harm on a person or group of people, encouraging someone to engage in self-harm, threats to expose someone’s private information or intimate media, unwanted sexual advances, and aggressive insults or slurs.

Context matters when evaluating reports of abusive behavior and determining appropriate enforcement actions. Some Tweets may seem to be abusive when viewed in isolation, but when viewed in the context of a larger conversation do not have the same meaning. Sometimes it’s unclear if content is intended to harass an individual or if it is part of a consensual conversation. When evaluating reported content in context, we consider factors such as whether:

  • the behavior is targeted at an individual or a group of people;

  • the report has been filed by the target of the abuse or by a bystander; and

  • the behavior is newsworthy and in the public interest.

During this reporting period, we saw a 22% increase in accounts reported for potential violations of our abuse policies and actioned 395,917 unique accounts for abuse violations.

Hateful conduct policies enforcement

Under our hateful conduct policy, you may not promote violence against or directly attack or threaten other people on the basis of their perceived inclusion in a protected category (i.e., race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease). The Twitter Rules also prohibit accounts with the primary purpose of inciting harm against others on the basis of these categories. Examples of hateful conduct may also include:

  • targeting someone with references to types of violence or violent events where people were targeted on the basis of their membership in a protected category;

  • targeting someone with content that incites fear about others based on their membership in a protected category; and

  • sending someone unsolicited hateful imagery.

During this reporting period, we saw a 48% increase in accounts reported for potential violations of our hateful conduct policies and actioned 584,429 unique accounts for hateful conduct violations.

Sensitive media policies enforcement

This section of our report reflects accounts actioned for violations of the sensitive media policy. People use Twitter to show what’s happening in the world, often sharing images and videos as part of the conversation. Sometimes, this media can depict sensitive topics. We recognize that some people may not want to be exposed to sensitive content, which is why we balance allowing people to share this type of media with giving people who want to avoid it the ability to do so. For this reason, you can’t include violent, hateful, or adult content within areas that are highly visible on Twitter, including live video, and profile or header images. If you share this content within Tweets, you need to mark your account as sensitive (which places your images and videos behind an interstitial or warning message). Under this policy, we don’t allow any media related to violent sexual conduct or gratuitous gore, because such media has the potential to normalize violence and cause distress to those who view it.

Examples of content covered under these policies include:

  • graphic violence (e.g., media that depicts death or serious injury);

  • adult content (e.g., media that is pornographic or intended to cause sexual arousal);

  • violent sexual conduct (e.g., media that depicts violence, whether real or simulated, in association with sexual acts);

  • gratuitous gore (e.g., media that depicts excessively graphic or gruesome content related to death, violence or severe physical harm, or violent content that is shared for sadistic purposes); and

  • hateful imagery (e.g., logos, symbols, or images whose purpose is to promote hostility and malice against others on the basis of a protected category).

During this reporting period, we saw a 37% increase in accounts reported for potential violations of our sensitive media policies and actioned 43,536 unique accounts for sensitive media violations.

Child sexual exploitation (CSE) policy enforcement

We do not tolerate child sexual exploitation on Twitter. When we are made aware of child sexual exploitation media, including links to images of or content promoting child exploitation, the material will be removed from the site without further notice and reported to The National Center for Missing & Exploited Children ("NCMEC"). People can report content that appears to violate the Twitter Rules regarding Child Sexual Exploitation via our web form or through in-app reporting.

During this reporting period, we suspended a total of 244,188 unique accounts for violations related to child sexual exploitation. Of those unique accounts suspended, 91% were proactively flagged by technology, including PhotoDNA and internal, proprietary tools.
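Hash-matching technology of this kind generally works by comparing a fingerprint of uploaded media against a database of known-violating hashes. The sketch below is a heavily simplified illustration using an exact cryptographic hash; PhotoDNA itself is a proprietary perceptual-hashing system, and the names here are ours:

```python
import hashlib

# Illustrative stand-in for a database of known-violating media fingerprints;
# in practice this would be populated from trusted sources such as NCMEC.
KNOWN_VIOLATING_HASHES: set[str] = set()

def should_flag_for_review(media_bytes: bytes) -> bool:
    """Surface media for review if its fingerprint matches a known hash."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest in KNOWN_VIOLATING_HASHES
```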

Impersonation policies enforcement

For the first time, we’re reporting metrics pertaining to our impersonation policy. Impersonation, which is prohibited by the Twitter Rules, occurs when an account poses as another person, brand, or organization in a confusing or deceptive manner. During this reporting period, we actioned 124,339 accounts for violating our impersonation policy.

Private information policies enforcement

This section provides information about accounts actioned under our private information and non-consensual nudity policies. Under these policies, you cannot share people’s private information or their intimate photos or videos without their express authorization and permission. Examples of content covered by these policies include:

  • private identifiers or financial information, such as credit card information, social security or other national identity numbers;

  • locations of private residences or other places that are considered private; 

  • non-public personal contact information, such as phone numbers and email addresses; and

  • non-consensual nudity (e.g., explicit sexual images or videos of someone produced or distributed without their consent). 

Context matters, and not all postings of such information may be a violation of this policy. We consider the nature and public availability of the information posted, local privacy laws, and other case-specific facts. For example, if the information was previously posted or shared elsewhere on the internet (e.g., someone lists their personal phone number on their public blog), reposting it on Twitter may not be a violation of this policy.

During this period, we saw a 48% increase in accounts reported for potential violations of our private information policies and actioned 19,679 unique accounts for private information violations. This increase is likely related to updates to our private information reporting flow and internal enforcement processes, which now permit bystanders to report more potential private information violations for review.

Violent threats policies enforcement

The Twitter Rules prohibit violent threats and the promotion of terrorism and violent extremism. Specifically, we do not allow users to make specific threats of violence against an individual or group of people, or threaten or promote violent extremism or terrorism. Examples of content covered under this policy include:

  • explicit statements of intent to inflict violence on a specific person or group of people;

  • promoting terrorism;

  • soliciting or offering bounties in exchange for committing serious acts of violence; and

  • affiliating with and promoting organizations that use or promote violence against civilians to further their causes.

During this reporting period, we saw a 17% increase in accounts reported for potential violations of our violence and extremism policies and actioned 56,219 unique accounts for policy violations.

Twitter suspended 115,861 unique accounts for violations related to the promotion of terrorism. Our internal, proprietary tools surfaced 87% of those suspended accounts for review. While the total number of unique accounts suspended during the reporting period has decreased 30% since the previous reporting period, this likely reflects changing behavior patterns and is generally consistent with an overall downward trend we have observed over the past several years.

Footnotes

  • Each report may identify multiple pieces of content for Twitter to review. For example, a single report may ask us to review individual Tweets or an entire user account.

  • Reported content may be actioned for the reported reason or for other Rules violations. If we determine the reported content does not violate our Rules, no action will be taken.

  • "Unique Accounts Reported" reflects the total number of accounts which users reported as potentially violating the Twitter Rules.

    • To provide meaningful metrics, we de-duplicate accounts which were reported multiple times (whether multiple users reported an account for the same potential violation, or multiple users reported the same account for different potential violations). For the purposes of these metrics, we similarly de-duplicate reports of specific Tweets. This means that even if we received reports about multiple Tweets by a single user, we only counted these reports towards the "Unique Accounts Reported" metric once.

  • "Unique Accounts Actioned" reflects the total number of accounts that Twitter took some enforcement action on during this reporting period.

    • We use the term "action" to refer to our range of enforcement actions, which includes possible account suspension.

    • To provide meaningful metrics, we de-duplicate accounts which were actioned multiple times for the same policy violation. This means that if we took action on a Tweet or account under multiple policies, the account would be counted separately under each policy. However, if we took action on a Tweet or account multiple times under the same policy (for example, we may have placed an account in read-only mode temporarily and then later also required media or profile edits on the basis of the same violation), the account would be counted once under the relevant policy. Both de-duplication rules are sketched in code following these footnotes.

    • If a reported account is determined to be dedicated to violating the Twitter Rules (i.e., the vast majority of its content and account activity is in violation of the Rules), we may permanently suspend the account under our “majority abuse” policy. This data is reflected under the abuse section of Unique Accounts Actioned within this report.
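Taken together, these de-duplication rules amount to counting distinct account IDs for the reported metric and distinct account-policy pairs for the actioned metric. A minimal sketch of that counting logic, assuming hypothetical (account_id, policy) records of our own choosing:

```python
from collections import Counter
from typing import Iterable, Tuple

Record = Tuple[str, str]  # (account_id, policy)

def unique_accounts_reported(reports: Iterable[Record]) -> int:
    """Count each reported account once, no matter how many reports,
    Tweets, or reporting reasons were involved."""
    return len({account for account, _policy in reports})

def unique_accounts_actioned(actions: Iterable[Record]) -> Counter:
    """Count each actioned account once per policy; an account actioned
    under two different policies contributes to both categories."""
    return Counter(policy for _account, policy in set(actions))
```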