We use the term ‘action’ to refer to our range of enforcement actions, which includes possible account suspension. During this reporting period, we actioned 1,254,226 unique accounts for violations of the included Twitter Rules categories, a 105% increase since the last reporting period. This increase may be attributable to a number of factors, including our increased focus on proactively surfacing potentially violating content for human review, the 42% increase in the number of reports received, and the inclusion of impersonation data for the first time.
Of the unique accounts found to be in violation of the Twitter Rules, 1,791 were reported by known government entities, compared to 1,601 reported and actioned during the last reporting period, an 11% increase.
Across the seven Twitter Rules categories included in this report, we actioned 395,917 accounts under abuse policies, 584,429 accounts under hateful conduct policies, 43,536 under sensitive media policies, 30,107 under CSE policies, 124,339 under impersonation policies, 19,679 under private information policies, and 56,219 under violent threats policies.
During the review process, we will take action on any identified violation of the Twitter Rules, regardless of the original reporting reason. For example, content that was reported for abuse could ultimately be actioned under our violent threats policy. We may also determine that reported content does not violate the Rules at all. As a result, the Unique Accounts Actioned do not necessarily fall within the Unique Accounts Reported dataset above.
More information on our approach to policy development and enforcement can be found here, and information about enforcement of each of these Twitter Rules categories is detailed below.
Abuse policies enforcement
We define abusive behavior as an attempt to harass, intimidate, or silence someone else’s voice. Examples of abusive behavior include wishing or hoping serious harm on a person or group of people, encouraging someone to engage in self-harm, threatening to expose someone’s private information or intimate media, making unwanted sexual advances, and using aggressive insults or slurs.
Context matters when evaluating reports of abusive behavior and determining appropriate enforcement actions. Some Tweets may seem to be abusive when viewed in isolation, but when viewed in the context of a larger conversation do not have the same meaning. Sometimes it’s unclear if content is intended to harass an individual or if it is part of a consensual conversation. When evaluating reported content in context, we consider factors such as whether:
the behavior is targeted at an individual or a group of people;
the report has been filed by the target of the abuse or by a bystander; and
the behavior is newsworthy and in the public interest.
During this reporting period, we saw a 22% increase in accounts reported for potential violations of our abuse policies and actioned 395,917 unique accounts for abuse violations.
Hateful conduct policies enforcement
Under our hateful conduct policy, you may not promote violence against or directly attack or threaten other people on the basis of their perceived inclusion in a protected category (i.e., race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease). The Twitter Rules also prohibit accounts with the primary purpose of inciting harm against others on the basis of these categories. Examples of hateful conduct may also include:
targeting someone with references to types of violence or violent events where people were targeted on the basis of their membership in a protected category;
targeting someone with content that incites fear about others based on their membership in a protected category; and
sending someone unsolicited hateful imagery.
During this reporting period, we saw a 48% increase in accounts reported for potential violations of our hateful conduct policies and actioned 584,429 unique accounts for hateful conduct violations.
Sensitive media policies enforcement
This section of our report reflects accounts actioned for violations of the sensitive media policy. People use Twitter to show what’s happening in the world, often sharing images and videos as part of the conversation. Sometimes, this media can depict sensitive topics. We recognize that some people may not want to be exposed to sensitive content, which is why we balance allowing people to share this type of media with helping those who wish to avoid it do so.

For this reason, you can’t include violent, hateful, or adult content within areas that are highly visible on Twitter, including live video and profile or header images. If you share this content within Tweets, you need to mark your account as sensitive (which places your images and videos behind an interstitial or warning message). Under this policy, we don’t allow any media depicting violent sexual conduct or gratuitous gore, because such media has the potential to normalize violence and cause distress to those who view it.
Examples of content covered under these policies include:
graphic violence (e.g., media that depicts death or serious injury);
adult content (e.g., media that is pornographic or intended to cause sexual arousal);
violent sexual conduct (e.g., media that depicts violence, whether real or simulated, in association with sexual acts);
gratuitous gore (e.g., media that depicts excessively graphic or gruesome content related to death, violence or severe physical harm, or violent content that is shared for sadistic purposes); and
hateful imagery (e.g., logos, symbols, or images whose purpose is to promote hostility and malice against others on the basis of a protected category).
During this reporting period, we saw a 37% increase in accounts reported for potential violations of our sensitive media policies and actioned 43,536 unique accounts for sensitive media violations.
Child sexual exploitation (CSE) policy enforcement
We do not tolerate child sexual exploitation on Twitter. When we are made aware of child sexual exploitation media, including links to images of or content promoting child exploitation, the material will be removed from the site without further notice and reported to The National Center for Missing & Exploited Children ("NCMEC"). People can report content that appears to violate the Twitter Rules regarding Child Sexual Exploitation via our web form or through in-app reporting.
During this reporting period, we suspended a total of 244,188 unique accounts for violations related to child sexual exploitation. Of those unique accounts suspended, 91% were flagged by a combination of technology (including PhotoDNA and internal, proprietary tools).
Impersonation policies enforcement
For the first time, we’re reporting metrics pertaining to our impersonation policy. Impersonation, which is prohibited by the Twitter Rules, occurs when an account poses as another person, brand, or organization in a confusing or deceptive manner. During this reporting period, we actioned 124,339 accounts for violating our impersonation policy.
Private information policies enforcement
This section provides information about accounts actioned under our private information and non-consensual nudity policies. Under these policies, you cannot share people’s private information or their intimate photos or videos without their express authorization. Examples of content covered by these policies include:
private identifiers or financial information, such as credit card information, social security or other national identity numbers;
locations of private residences or other places that are considered private;
non-public personal contact information, such as phone numbers and email addresses; and
non-consensual nudity (e.g., explicit sexual images or videos of someone produced or distributed without their consent).
Context matters, and not all postings of such information may be a violation of this policy. We consider the nature and public availability of the information posted, local privacy laws, and other case-specific facts. For example, if the information was previously posted or shared elsewhere on the internet (e.g., someone lists their personal phone number on their public blog), reposting it on Twitter may not be a violation of this policy.
During this period, we saw a 48% increase in accounts reported for potential violations of our private information policies and actioned 19,679 unique accounts for private information violations. This increase is likely related to updates to our private information reporting flow and internal enforcement processes, which now permit bystanders to report more potential private information violations for review.
Violent threats policies enforcement
The Twitter Rules prohibit violent threats and the promotion of terrorism and violent extremism. Specifically, we do not allow users to make specific threats of violence against an individual or group of people, or threaten or promote violent extremism or terrorism. Examples of content covered under this policy include:
explicit statements of intent to inflict violence on a specific person or group of people;
soliciting or offering bounties in exchange for committing serious acts of violence; and
affiliating with and promoting organizations that use or promote violence against civilians to further their causes.
During this reporting period, we saw a 17% increase in accounts reported for potential violations of our violent threats policies and actioned 56,219 unique accounts for violations of these policies.
Twitter suspended 115,861 unique accounts for violations related to the promotion of terrorism. We surfaced 87% of these suspended accounts for review using our internal, proprietary tools. While the total number of unique accounts suspended during the reporting period has decreased 30% since the previous reporting period, this likely reflects changing behavior patterns and is generally consistent with an overall downward trend we have observed over the past several years.