Rules Enforcement


 

Accounts Actioned

Published on January 25, 2022

 

02. Overview

Twitter's purpose is to serve the public conversation. We welcome people to share their unique point of view on Twitter, but there are some behaviors that discourage others from expressing themselves or place people at risk of harm. The Twitter Rules exist to help ensure that all people can participate in the public conversation freely and safely, and include specific policies that explain the types of content and behavior that are prohibited.

 

This section covers the latest data about instances where we've taken enforcement actions under the Twitter Rules to either require the removal of specific Tweets or to suspend accounts. These metrics are referred to as: accounts actioned, content removed, and accounts suspended. More details about our range of enforcement options are available in our Help Center.

 

Twitter's operations continued to be affected by the unprecedented COVID-19 pandemic.

Impressions

We continue to explore ways to share more context and details about how we enforce the Twitter Rules. As such, we are introducing a new metric – impressions – for enforcement actions where we required the removal of specific Tweets. Impressions capture the number of views a Tweet received prior to removal.

 

From January 1, 2021 through June 30, 2021, Twitter removed 4.7M Tweets that violated the Twitter Rules. Of the Tweets removed, 68% received fewer than 100 impressions prior to removal, with an additional 24% receiving between 100 and 1,000 impressions. Only 8% of removed Tweets had more than 1,000 impressions. In total, impressions on violative Tweets accounted for less than 0.1% of all impressions for all Tweets during that time period.
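The bucket percentages above can be turned back into approximate Tweet counts with simple arithmetic. The sketch below assumes the report's rounded figures (4.7M removed Tweets; 68% / 24% / 8% shares), so the resulting counts are approximations, not official numbers:

```python
# Approximate Tweet counts per impressions bucket, reconstructed from the
# rounded percentages in the report (4.7M removed Tweets total).
TOTAL_REMOVED = 4_700_000

bucket_shares = {
    "< 100 impressions": 0.68,
    "100 - 1,000 impressions": 0.24,
    "> 1,000 impressions": 0.08,
}

bucket_counts = {
    label: round(TOTAL_REMOVED * share) for label, share in bucket_shares.items()
}

for label, count in bucket_counts.items():
    print(f"{label}: ~{count:,} Tweets")
```

Because the published shares sum to 100%, the reconstructed bucket counts sum back to the 4.7M total.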

 

Some notable changes since our last report are described below.

 

Big picture

We have a global team that manages enforcement of the Twitter Rules with 24/7 coverage in every supported language on Twitter. Our goal is to apply the Twitter Rules objectively and consistently. Enforcement actions are taken on content that is determined to violate the Twitter Rules.

 

We are committed to providing due process and to ensuring that enforcement of the Twitter Rules is fair, unbiased, proportional, and respectful of human rights, guided by the spirit of the Santa Clara Principles on Transparency and Accountability in Content Moderation and other multi-stakeholder processes. We will continue to invest in expanding the information available about how we do so in future reports.

 
Safety

The "Safety" section of the Twitter Rules covers violence, terrorism/violent extremism, child sexual exploitation, abuse/harassment, hateful conduct, promoting suicide or self-harm, sensitive media (including graphic violence and adult content), and illegal or certain regulated goods or services. More information about each policy can be found in the Twitter Rules.

 

Notable changes since the last report and other select takeaways are described below.

 

Terrorism/violent extremism

The Twitter Rules prohibit the promotion of terrorism and violent extremism. We suspended 44,974 unique accounts for violations of the policy during this reporting period. Of those accounts, 93% were proactively identified and actioned. Our current methods of surfacing potentially violating content for review include leveraging the shared industry hash database supported by the Global Internet Forum to Counter Terrorism (GIFCT).
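The shared hash database mentioned above can be illustrated with a minimal sketch. In practice GIFCT-style systems use perceptual hashes of media that are robust to re-encoding; the simplified version below uses a plain cryptographic hash, and all names and data are illustrative, not Twitter's actual implementation:

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    # Illustrative only: real systems use perceptual hashing so that
    # re-encoded or slightly altered copies still match. SHA-256 over raw
    # bytes keeps the sketch simple but only matches exact copies.
    return hashlib.sha256(media_bytes).hexdigest()

# Stand-in for the shared industry hash database of known violative media.
shared_hash_db = {fingerprint(b"known-violative-media")}

def flag_for_review(media_bytes: bytes) -> bool:
    """Surface an upload for review when its hash matches the database."""
    return fingerprint(media_bytes) in shared_hash_db

print(flag_for_review(b"known-violative-media"))  # matches a known hash
print(flag_for_review(b"unrelated-media"))        # no match
```

The matching step reduces to a set-membership check, which is what makes hash sharing cheap to run at upload time even across very large databases.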

 

Child sexual exploitation

We do not tolerate child sexual exploitation on Twitter. When we are made aware of child sexual exploitation media, including links to images of or content promoting child exploitation, the material is removed from the site without further notice and reported to the National Center for Missing & Exploited Children ("NCMEC"). People can report content that appears to violate the Twitter Rules regarding child sexual exploitation via our web form.


We suspended 453,754 unique accounts during this reporting period for violating Twitter policies prohibiting child sexual exploitation, with 89% of them identified proactively through internal proprietary tools and industry hash-sharing initiatives. These tools and initiatives support our efforts in surfacing potentially violative content for further review and, if appropriate, removal.

Abuse/Harassment

Under our Abusive Behavior policy, we prohibit content that harasses or intimidates others, or is otherwise intended to shame or degrade them. We took action on 1,043,525 pieces of content during the reporting period. We also updated the policy, removing the targeting requirement for content that denies that a mass murder or other mass casualty event took place, where we can verify that the event occurred and where the content is shared with abusive intent.

Violence

Our policies prohibit sharing content that threatens violence against an individual or a group of people, as well as the glorification of violence. Driven by initiatives launched to bolster operational capacity, we saw a significant increase in the amount of content removed for violence, and we suspended 66,445 accounts.

 
Hateful conduct

We made some changes to our Hateful Conduct policy during the first half of 2021. The policy was updated in January 2021 to expand our enforcement approach towards content that incites others to discriminate by denying support to the economic enterprise of an individual or group because of their perceived membership in a protected category. In addition to the policy update, we also removed the targeting requirement for content aimed at individuals or groups that references forms of violence or violent events where a protected category was the primary target or victims and where the intent is to harass.

Promoting suicide or self-harm

We prohibit content that promotes or otherwise encourages suicide or self-harm. During this reporting period there was a significant increase in the volume of accounts actioned (up 83%), accounts suspended (up 101%), and content removed (up 82%). Initiatives launched to better detect and take action on content that violates our suicide and self-harm policy led to this spike in enforcement numbers.

 

Sensitive media, including graphic violence and adult content

We saw the largest increase in the number of accounts actioned and content removed under this policy during this reporting period. Initiatives launched to bolster operational capacity resulted in an increase in the actioning of content that violates our sensitive media policies.

 
Illegal or certain regulated goods or services

Since the launch of this policy in 2019, and particularly since the end of last year, we have continued to refine our enforcement guidelines. This refinement resulted in more accounts being actioned for violations of the policy, which in turn triggered an increase in the number of accounts attempting to circumvent a previous suspension or enforcement action, thereby violating Twitter's policy on ban evasion.

 
Privacy

The "Privacy" section of the Twitter Rules covers private information and non-consensual nudity. More information about each policy can be found in the Twitter Rules.

 

Notable changes since the last report and other select takeaways are described below.

 

Non-consensual Nudity

This reporting period saw the largest increase in the number of accounts suspended under this policy. Initiatives launched to better detect and take action on content increased the number of accounts suspended under our non-consensual nudity policy by 104%; in total, we suspended 7,519 accounts for violating this policy.

 
Authenticity

The "Authenticity" section of the Twitter Rules covers platform manipulation and spam, civic integrity, impersonation, synthetic and manipulated media, and copyright and trademark. We have standalone report pages for platform manipulation and spam, copyright, and trademark, and cover civic integrity and impersonation enforcement actions in this section.[1] More information about each policy can be found in the Twitter Rules.

 

Notable changes since the last report and other select takeaways are described below.

 

Civic Integrity

The end of the 2020 US election cycle led to a significant decrease in the number of accounts actioned under our civic integrity policy since the last report.

 

Impersonation

This reporting period saw more activity related to impersonation scams from accounts based in West Africa and Southeast Asia, which may account for the increase in accounts actioned under our impersonation policy.

 

COVID-19 misleading information

Since the introduction of our COVID-19 guidance last year, there has been an increased focus on scaling enforcement of the policy, in particular in areas related to vaccine misinformation. In instances where accounts repeatedly violate this policy, a strike system is now used to determine whether further enforcement actions should be applied. These actions include requiring Tweet deletion, temporary account locks, and permanent suspension. We believe this system further helps to reduce the spread of potentially harmful and misleading information on Twitter, particularly for high-severity violations of our rules.
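The strike system described above can be sketched as a simple mapping from an account's strike count to an enforcement action. The thresholds below are placeholders for illustration only; the report does not specify the actual strike thresholds Twitter used:

```python
# Illustrative strike-to-action mapping. The actual thresholds are not
# specified in this report; these cut-offs are placeholders.
def enforcement_action(strike_count: int) -> str:
    if strike_count <= 1:
        return "tweet deletion required"
    if strike_count <= 3:
        return "temporary account lock"
    return "permanent suspension"

for strikes in (1, 3, 5):
    print(strikes, "->", enforcement_action(strikes))
```

The design point is that repeat violations escalate deterministically, so enforcement severity tracks an account's history rather than any single Tweet in isolation.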

 

Accounts Reported

Published on January 25, 2022

 

02. Overview

Insights into accounts reported for violations of the Twitter Rules.

 

03. Analysis

Big picture

Reported content is reviewed to determine whether it violates any aspects of the Twitter Rules, independent of its initial report category. For example, content reported under our private information policy may be found to violate – and be actioned under – our hateful conduct policies. We may also determine that reported content does not violate the Rules at all. 


The policy categories in this section do not map cleanly to the ones in the Accounts Actioned section above. This is because people typically report content for possible Twitter Rules violations through our Help Center or in-app reporting flows, which group reports into their own set of categories.




Footnotes
 
Accounts Actioned

To provide meaningful metrics, we de-duplicate accounts which were actioned multiple times for the same policy violation. This means that if we took action on a Tweet or account under multiple policies, the account would be counted separately under each policy. However, if we took action on a Tweet or account multiple times under the same policy (for example, we may have placed an account in read-only mode temporarily and then later also required media or profile edits on the basis of the same violation), the account would be counted once under the relevant policy.
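The counting rule above amounts to de-duplicating enforcement records on (account, policy) pairs. The sketch below illustrates this with a hypothetical action log; the record format and field names are made up for the example:

```python
# Hypothetical enforcement log: (account_id, policy, action) records.
actions = [
    ("acct_1", "hateful_conduct", "read_only_lock"),
    ("acct_1", "hateful_conduct", "profile_edit_required"),  # same policy: counted once
    ("acct_1", "abuse", "tweet_removal"),                    # different policy: counted again
    ("acct_2", "abuse", "tweet_removal"),
]

# De-duplicate on (account, policy): each account counts once per policy,
# no matter how many times it was actioned under that policy.
accounts_actioned = len({(account, policy) for account, policy, _ in actions})
print(accounts_actioned)
```

Here `acct_1` contributes two counts (one per policy) despite three actions, and `acct_2` contributes one, for a metric of 3.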

 
Accounts Reported

To provide meaningful metrics, we de-duplicate accounts which were reported multiple times (whether multiple users reported an account for the same potential violation, or whether multiple users reported the same account for different potential violations). For the purposes of these metrics, we similarly de-duplicate reports of specific Tweets. This means that even if we received reports about multiple Tweets by a single account, we only counted these reports towards the "accounts reported" metric once.
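The accounts-reported rule is a stricter de-duplication: each reported account counts once, regardless of how many people reported it or how many of its Tweets were reported. A sketch with a hypothetical report log (field names are illustrative):

```python
# Hypothetical report log: (reporter_id, reported_account, tweet_id) records.
reports = [
    ("user_a", "acct_1", "tweet_10"),
    ("user_b", "acct_1", "tweet_10"),  # second reporter, same Tweet: deduped
    ("user_a", "acct_1", "tweet_11"),  # different Tweet, same account: still one account
    ("user_c", "acct_2", "tweet_20"),
]

# De-duplicate on account alone: four reports collapse to two accounts.
accounts_reported = len({account for _, account, _ in reports})
print(accounts_reported)
```

Keying on the account alone (rather than on (account, policy) as in the Accounts Actioned footnote) is what makes the two metrics non-comparable report-for-report.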