Opening Remarks

X was founded on a commitment to transparency. We want people on X to feel able to express themselves freely, while we also ensure that conversations on X are safe, legal and unregretted. When you think about some of the world’s most powerful moments, movements, and memes, they prevailed because people had a place to express their ideas, challenge conventional norms, and demand better. That’s why free expression matters.

We also believe, and we are proving, that free expression and platform safety can coexist. X reflects real conversations happening in the world, and that sometimes includes perspectives that may be offensive, controversial, or narrow-minded to others. While we welcome everyone to express themselves on X, we will not tolerate behaviour that harasses, threatens, dehumanises or uses fear to silence the voices of others. Our TIUC Terms of Service and Rules - which are continually reviewed, and are informed by feedback from the people who use X - help ensure everyone feels safe expressing themselves.

We are committed to fair, informative, responsive, and accountable enforcement. In the past, we too often got caught in a binary paradigm of whether to leave content up or take it down.

To be clear, we continue to remove dangerous and illegal content and accounts. X also responds to reports of illegal content and takes action on content that violates local laws. But we have learned that, for other types of content, a range of reasonable, proportionate, and effective approaches, ones that also seek to balance fundamental rights, can be appropriate.

You can think about how we moderate on X in three buckets: content and accounts that remain, are restricted, and are removed.

  1. Remain: The overwhelming majority of content on X is healthy—meaning it does not violate our TIUC Terms of Service and Rules or our policies, such as Hateful Conduct and Abuse & Harassment, among others. Keep in mind: just because a post doesn’t violate a policy doesn’t mean everyone will like it.
  2. Restrict: This is where our new Freedom of Speech, Not Reach enforcement philosophy applies. For content that may be interpreted as potentially violating our policies—meaning it’s awful, but lawful—we restrict the reach of posts by making the content less discoverable, and we are making this action more transparent to everyone. When we decide to restrict a piece of content, a restricted reach label is applied, the ability to engage with the content is taken away, and its reach is restricted to views occurring directly on the author's profile. Restricted reach labels are not in use for all policies: they initially applied only to Hateful Conduct, but we have since expanded them to our Abuse & Harassment, Civic Integrity, and Violent Speech policies. That said, restricting content, or even a whole account, is something we have done for a long time, and we have a range of enforcement options for the variety of use cases we face every day. For example, we may also place an account in read-only mode, temporarily limiting its ability to post, Repost, or Like.
  3. Remove: If reported content is illegal, we withhold access to it in the respective jurisdictions. We also know that certain types of content, such as targeted violent threats, targeted harassment, or privacy violations, can be extremely harmful if not removed, so we either suspend the account outright or require that the content be deleted before the account can return to the platform.

We've made significant progress in improving the safeguards that protect our users and our platform, but we know that this critical work will never be done. X is committed to ensuring the safety and health of the platform, and to fulfilling its DSA compliance obligations, through continued investment in human and automated protections.

This report covers the content moderation activities of X’s international entity Twitter International Unlimited Company (TIUC) under the Digital Services Act (DSA), during the date range August 28, 2023 to October 20, 2023.

We refer to “notices” as defined in the DSA as “user reports” and “reports”.

Description of our Content Moderation Practices

X's purpose is to serve the public conversation. Violence, harassment, and other similar types of behaviour discourage people from expressing themselves, and ultimately diminish the value of global public conversation. Our rules are designed to ensure all people can participate in the public conversation freely and safely.

X has policies protecting user safety as well as platform and account integrity. The X Rules and Policies are publicly accessible on our Help Center, and we make sure they are written in an easily understandable way. We also update our Help Center whenever we modify our rules.

Additionally, you will find explanations in our Help Center of our policy development process and rules enforcement philosophy. Creating a new policy or making a policy change requires in-depth research around trends in online behaviour, developing clear external language that sets expectations around what’s allowed, and creating enforcement guidance for reviewers that can be scaled across millions of pieces of content and accounts. Our policies are dynamic, and we continually review them to ensure that they are up to date, necessary, and proportionate.

We consider diverse perspectives around the changing nature of online speech, including how our Rules are applied and interpreted in different cultural and social contexts. We then test the proposed rule with samples of potentially violative content to measure the policy effectiveness, and once we determine it meets our expectations, we build and operationalise product changes to support the update. Finally, we train our global review teams, update the X Rules, and start enforcing the relevant policy.

While we aim to enable open discussion of differing opinions and viewpoints, we are committed to the objective, timely, and consistent enforcement of our rules. This approach allows many forms of speech to exist on our platform and, in particular, promotes counterspeech: speech that presents facts to correct misstatements or misperceptions, points out hypocrisy or contradictions, warns of offline or online consequences, denounces hateful or dangerous speech, or helps change minds and disarm.

Thus, context matters. When determining whether to take enforcement action, we may consider a number of factors, including (but not limited to) whether:

When we take enforcement actions, we may do so either on a specific piece of content (e.g., an individual post or Direct Message) or on an account. We may employ a combination of these options. In most cases, this is because the behaviour violates the X Rules.

X strives to provide an environment where people can feel free to express themselves. If abusive behaviour happens, we want to make it easy for people to report it to us. EU users can also report any violation of our rules or their local laws, no matter where such violations appear, and we’ve recently improved our reporting flow to make it easier to use in several key ways. It now takes fewer steps to report most content, with extra steps only where they help us take the right action. We now have clearer choices that map directly to our policies and to how they are communicated externally. We’ve also included new options that were previously only available at help.x.com.

EXERCISE OF MODERATION

To enforce our rules, we use a combination of machine learning and human review. Our systems surface content to human moderators, who use important context to make decisions about potential rule violations. This work is led by an international, cross-functional team with 24-hour coverage and the ability to cover multiple languages. We also have a complaints process for any potential errors that may occur.

Examples of actions we may take:

To ensure that our human reviewers are prepared to perform their duties, we provide them with a robust support system. Each human reviewer goes through extensive training and refreshers, is provided with a suite of tools that enables them to do their job effectively, and has a range of wellness initiatives available to them. For further information on our human review resources, see the section titled “Human resources dedicated to Content Moderation”.

We always aim to exercise moderation with transparency. Where our systems or teams take action against content or an account as a result of violating our rules or in response to a valid and properly scoped request from an authorised entity in a given country, we strive to provide context to users. Our Help Center article explains notices that users may encounter following actions taken. We will also promptly notify affected users about legal requests to withhold content, including a copy of the original request, unless we are legally prohibited from doing so.

COOPERATION WITH PUBLIC AUTHORITIES

Cooperation with law enforcement authorities within the EU is crucial to X. We work closely with law enforcement, and we do our best to assist them in identifying users whose content may be in violation of local laws. Any law enforcement authority or agency can find guidelines on our Help Center specifically for law enforcement and can reach out to X using a dedicated form.

TIUC is headquartered in Dublin, Ireland, and processes law enforcement requests relating to users who live in the EU. We receive and respond to requests related to user data from EU law enforcement agencies and judicial authorities wherever there is a valid legal process. We have existing processes in place, including a dedicated online portal for law enforcement, and expert teams with global coverage across all timezones that review and respond to reports in diverse languages.

Law enforcement can use our dedicated portal to submit their legal demands and can request the following information:

Our Own Initiative Content Moderation Activities

AUTOMATED CONTENT MODERATION

X employs a combination of heuristics and machine learning algorithms to automatically detect content that violates the X Rules and policies enforced on our platform.

MACHINE LEARNING MODELS

We use combinations of natural language processing models, image processing models, and other sophisticated machine learning methods to detect potentially violative content. These models vary in complexity and in the outputs they produce. For example, the model used to detect abuse on the platform is trained on abuse violations detected in the past. Content flagged by these machine learning models is either reviewed by human content reviewers before an action is taken or, in some cases, automatically actioned based on model output.
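To illustrate the flow described above, here is a minimal sketch, assuming hypothetical thresholds and a placeholder scoring function (none of these names or values reflect X's actual systems), of routing model output either to automatic action or to human review:

```python
from dataclasses import dataclass

# Illustrative thresholds; real systems tune these per policy and per model.
AUTO_ACTION_THRESHOLD = 0.98   # act on model output alone above this score
HUMAN_REVIEW_THRESHOLD = 0.60  # queue for human review above this score

@dataclass
class Post:
    post_id: str
    text: str

def score_abuse(post: Post) -> float:
    """Stand-in for a trained model's violation probability (e.g. an NLP classifier)."""
    return 0.0  # placeholder score

def route(post: Post) -> str:
    """Send high-confidence detections to automated action, mid-confidence to review."""
    score = score_abuse(post)
    if score >= AUTO_ACTION_THRESHOLD:
        return "auto_action"   # automatically actioned based on model output
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # surfaced to a human moderator with context
    return "no_action"

print(route(Post("1", "example text")))  # "no_action" with the placeholder score
```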

HEURISTIC MODELS

Heuristics are typically used to enable X to react quickly to new forms of violations that emerge on the platform. Heuristics are common patterns of text or keywords that may be typical of a certain category of violations. Pieces of content detected by heuristics may also be reviewed by human content reviewers before an action is taken on the content. These heuristics are used to flag content for review by human agents and to prioritise the order in which such content is reviewed.
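As a rough illustration of this approach, the sketch below flags posts against a few hypothetical keyword patterns and orders them in a priority queue for human review; the patterns, categories, and priorities are invented for the example:

```python
import heapq
import re

# Invented patterns: (compiled regex, priority) per violation category.
HEURISTICS = {
    "scam": (re.compile(r"guaranteed returns|free crypto", re.I), 2),
    "violent_speech": (re.compile(r"\bi will hurt you\b", re.I), 1),
}

review_queue: list = []  # heap of (priority, sequence, post_id, category)
_seq = 0

def flag_for_review(post_id: str, text: str) -> None:
    """Flag content matching a heuristic and prioritise its human review."""
    global _seq
    for category, (pattern, priority) in HEURISTICS.items():
        if pattern.search(text):
            heapq.heappush(review_queue, (priority, _seq, post_id, category))
            _seq += 1
            break

flag_for_review("42", "Guaranteed returns, send funds now")
print(heapq.heappop(review_queue))  # (2, 0, '42', 'scam')
```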

TESTING, EVALUATION, AND ITERATION

Automated enforcements under the X Rules and policies undergo rigorous testing before being applied to the live product. Both machine learning and heuristic models are trained and/or validated on thousands of data points and labels (e.g., violative or non-violative) that are generated by trained human content reviewers. For example, inputs to content-related models can include the text within the post itself, the images attached to the post, and other characteristics. Training data for the models comes from cases reviewed by our content moderators, random samples, and various other samples of pieces of content from the platform.
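The following sketch shows the kind of validation this implies, computing precision and recall of model flags against human-generated labels; the example data is made up:

```python
def precision_recall(flags: list, labels: list) -> tuple:
    """Compare model flags against human labels (True = violative)."""
    tp = sum(f and l for f, l in zip(flags, labels))
    fp = sum(f and not l for f, l in zip(flags, labels))
    fn = sum(not f and l for f, l in zip(flags, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Held-out validation items labelled by trained human reviewers (made up):
flags  = [True, True, False, True, False]
labels = [True, False, False, True, True]
print(precision_recall(flags, labels))  # (0.666..., 0.666...)
```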

Once reviewers have confirmed that the detection meets an acceptable standard of accuracy, we consider the automation ready for launch. After launch, automations are monitored dynamically for ongoing performance and health. If we detect anomalies in performance (for instance, significant spikes or dips against the volume we established during sizing, or significant changes in user complaint/overturn rates), our Engineering (including Data Science) and Policy teams revisit the automation to diagnose any potential problems and adjust it as appropriate.
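A minimal sketch of this kind of monitoring, assuming a simple z-score test against a baseline volume established during sizing (real monitoring would track more signals, such as complaint and overturn rates):

```python
import statistics

def is_anomalous(todays_volume: int, baseline: list, z_threshold: float = 3.0) -> bool:
    """Flag significant spikes or dips against the volume established during sizing."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(todays_volume - mean) / stdev > z_threshold

# Daily action volumes observed during the sizing phase (assumed numbers):
baseline = [980, 1010, 995, 1005, 990]
print(is_anomalous(1600, baseline))  # True: a spike worth diagnosing
print(is_anomalous(1002, baseline))  # False: within the expected band
```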

USE OF HUMAN MODERATION

Before any given algorithm is launched to the platform, we verify its detection of policy-violating content or behaviour by drawing a statistically significant test sample and performing item-by-item human review. Reviewers have expertise in the applicable policies and are trained by our Policy teams to ensure the reliability of their decisions. During this testing phase, we also calculate the expected volume of moderation actions a given automation is likely to perform, in order to set a baseline against which we can monitor for anomalies in the future (called “sizing”). Human review helps us confirm that these automations achieve an acceptable level of precision, and sizing helps us understand what to expect once the automations are launched.
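For instance, precision estimated from an item-by-item review sample carries statistical uncertainty that shrinks as the sample grows. A minimal sketch, using the standard Wilson score interval and assumed review counts:

```python
import math

def wilson_interval(correct: int, reviewed: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for precision measured on a review sample."""
    p = correct / reviewed
    denom = 1 + z ** 2 / reviewed
    centre = (p + z ** 2 / (2 * reviewed)) / denom
    margin = z * math.sqrt(p * (1 - p) / reviewed + z ** 2 / (4 * reviewed ** 2)) / denom
    return centre - margin, centre + margin

# Reviewers confirmed 460 of 500 sampled detections as true violations (assumed):
low, high = wilson_interval(460, 500)
print(f"precision ~0.92, 95% CI [{low:.3f}, {high:.3f}]")
```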

In addition, humans proactively conduct manual content reviews for potential policy violations. We conduct proactive sweeps for certain high-priority categories of potentially violative content both periodically and during major events, such as elections. Agents also proactively review content flagged by heuristic and machine learning models for potential violations of other policies, including our sensitive media, child sexual exploitation (CSE) and violent and hateful entities policies.

AUTOMATED MODERATION ACTIVITY EXAMPLES

The vast majority of accounts that are suspended for the promotion of terrorism and CSE are proactively flagged by a combination of technology and other purpose-built internal proprietary tools.

When we remove CSE content, we immediately report it to the National Center for Missing and Exploited Children (NCMEC). NCMEC makes reports available to the appropriate law enforcement agencies around the world to facilitate investigations and prosecutions.

Our current methods for surfacing potentially violative terrorist content for review include leveraging the shared industry hash database supported by the Global Internet Forum to Counter Terrorism (GIFCT), deploying a range of internal tools, and utilising industry hash-sharing technology (e.g., PhotoDNA) before any reports are filed. We commit to continuing to invest in technology that improves our capability to detect and remove, for instance, terrorist and violent extremist content online, including the extension or development of digital fingerprinting and AI-based technology solutions. Our participation in multi-stakeholder communities, such as the Christchurch Call to Action, the Global Internet Forum to Counter Terrorism, and the EU Internet Forum (EUIF), helps to identify emerging trends in how terrorists and violent extremists use the Internet to promote their content and exploit online platforms.
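Hash matching of this kind works by comparing a digest of uploaded media against a shared database of digests of known violative content. The sketch below uses a plain SHA-256 digest for simplicity; systems like PhotoDNA use perceptual hashes that, unlike cryptographic hashes, remain stable across re-encoding and minor edits:

```python
import hashlib

# Stand-in for a shared industry hash database (this entry is sha256(b"test")):
KNOWN_VIOLATIVE_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_hash_database(media_bytes: bytes) -> bool:
    """Check uploaded media against digests of known violative content."""
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_VIOLATIVE_DIGESTS

print(matches_hash_database(b"test"))         # True
print(matches_hash_database(b"other media"))  # False
```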

You can learn more about our commitment to eradicating CSE and terrorist content, and the actions we’ve taken here. Our continued investment in proprietary technology is steadily reducing the burden on people to report this content to us.

SCALED INVESTIGATIONS

These moderation activities are supplemented by scaled human investigations into the tactics, techniques and procedures that bad actors use to circumvent our rules and policies. These investigations may leverage signals and behaviours identifiable on our platform, as well as off-platform information, to identify large-scale and/or technically sophisticated evasions of our detection and enforcement activities. For example, through these investigations, we are able to detect coordinated activity intended to manipulate our platform and artificially amplify the reach of certain accounts or their content.  
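One simple signal of such coordination is many distinct accounts posting identical content within a narrow time window. A minimal sketch of that heuristic (thresholds and data shapes are assumptions, not a description of X's investigative tooling):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_groups(events, min_accounts=5, window=timedelta(minutes=10)):
    """Find sets of accounts posting identical content inside a short window,
    a crude signal of artificial amplification."""
    by_content = defaultdict(list)  # content -> [(timestamp, account), ...]
    for account, content, ts in events:
        by_content[content].append((ts, account))
    groups = []
    for posts in by_content.values():
        posts.sort()
        for start_ts, _ in posts:
            accounts = {a for ts, a in posts if start_ts <= ts <= start_ts + window}
            if len(accounts) >= min_accounts:
                groups.append(accounts)
                break
    return groups

base = datetime(2023, 9, 1, 12, 0)
events = [(f"acct_{i}", "same spam text", base + timedelta(minutes=i)) for i in range(6)]
print(coordinated_groups(events))  # one group of 6 accounts
```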

CLOSING STATEMENT ON CONTENT MODERATION ACTIVITIES

Our content moderation systems are designed and tailored to mitigate systemic risks without unnecessarily restricting the use of our service or fundamental rights, especially freedom of expression. Content moderation activities are anchored in principled policies and leverage a diverse set of interventions to ensure that our actions are reasonable, proportionate and effective. Our content moderation systems blend automated and human review, paired with a robust appeals system that enables our users to quickly raise potential moderation anomalies or mistakes.

Enforcement Activity Summary Data

RESTRICTED REACH LABELS DATA: FREEDOM OF SPEECH, NOT REACH

Our mission at X is to promote and protect the public conversation. We believe X users have the right to express their opinion and ideas without fear of censorship. We also believe it is our responsibility to keep users on our platform safe from content that violates our rules.

These beliefs are the foundation of Freedom of Speech, Not Reach - our freedom-of-expression-based enforcement philosophy, under which we, where appropriate, restrict the reach of posts that are classified as potentially meeting our threshold for enforcement under our Hateful Conduct, Abuse & Harassment, Civic Integrity, and Violent Speech policies. Please note that these policies carry a range of enforcement actions, such as removal, suspension, and restricted reach.

Restricting the reach of posts, also known as visibility filtering, is one of our existing enforcement actions that allows us to move beyond the binary “leave up versus take down” approach to content moderation. Posts with these labels will be made less discoverable on the platform. This can include:

Additionally, these labels bring transparency to this enforcement action by displaying which policy the post potentially violates to both the author and other users on X, and communicating that the post’s visibility is limited. Authors can submit a complaint on the label if they think we incorrectly limited their post’s visibility.
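A minimal sketch of the state such a label might carry, with hypothetical field names chosen for the example rather than taken from X's systems:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RestrictedReachLabel:
    policy: str                       # e.g. "Hateful Conduct"
    engagement_disabled: bool = True  # ability to engage is removed
    profile_only: bool = True         # reach limited to the author's profile
    complaint_filed: bool = False     # author has contested the label

@dataclass
class Post:
    post_id: str
    label: Optional[RestrictedReachLabel] = None

def apply_restricted_reach(post: Post, policy: str) -> None:
    """Label the post, which also disables engagement and limits distribution."""
    post.label = RestrictedReachLabel(policy=policy)

def file_complaint(post: Post) -> None:
    """Authors can contest a label they believe was applied in error."""
    if post.label is not None:
        post.label.complaint_filed = True

p = Post("123")
apply_restricted_reach(p, "Hateful Conduct")
file_complaint(p)
print(p.label)
```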

RESTRICTED REACH LABELS DATA

Restricted Reach Labels - Aug 28 to Oct 20

Column key (Detection / Enforcement / Policy):
  (1) Own Initiative / Automated Means / Hateful Conduct
  (2) Own Initiative / Manual Review / Abuse & Harassment
  (3) Own Initiative / Manual Review / Hateful Conduct
  (4) Own Initiative / Manual Review / Violent Speech
  (5) User Report / Manual Review / Abuse & Harassment
  (6) User Report / Manual Review / Hateful Conduct
  (7) User Report / Manual Review / Violent Speech

Country        (1)      (2)   (3)    (4)   (5)    (6)    (7)    Grand Total
Austria        1,118    1     31     0     87     95     13     1,345
Belgium        2,137    1     10     0     244    251    35     2,678
Bulgaria       653      0     11     1     73     52     12     802
Croatia        759      1     17     0     42     65     9      893
Cyprus         251      0     14     0     24     28     6      323
Czechia        1,333    0     15     0     159    144    22     1,673
Denmark        1,311    0     61     0     99     90     23     1,584
Estonia        281      0     1      0     38     17     0      337
Finland        1,389    0     22     0     75     139    19     1,644
France         11,279   1     56     1     2,093  2,313  321    16,064
Germany        9,913    10    127    2     1,069  1,046  261    12,428
Greece         1,015    0     14     0     194    97     22     1,342
Hungary        707      0     4      1     87     115    8      922
Ireland        3,760    0     62     1     223    248    29     4,323
Italy          2,631    0     26     0     827    526    79     4,089
Latvia         320      0     2      0     53     54     4      433
Lithuania      461      0     2      1     40     85     15     604
Luxembourg     190      0     2      0     16     18     1      227
Malta          121      0     0      0     1      9      3      134
Netherlands    6,711    2     169    2     672    727    173    8,456
Poland         5,263    6     407    4     866    803    79     7,428
Portugal       1,528    0     12     0     299    300    56     2,195
Romania        1,944    1     30     0     221    145    22     2,363
Slovakia       417      0     2      0     29     32     2      482
Slovenia       423      1     10     0     19     38     9      500
Spain          9,706    1     39     0     1,868  1,429  131    13,174
Sweden         3,827    2     82     2     204    301    69     4,487
Grand Total    69,448   27    1,228  15    9,622  9,167  1,423  90,930

Important Note: The table lists visibility filtering actions taken on content potentially violative of our rules, in accordance with our Freedom of Speech, Not Reach enforcement philosophy. We did not apply any visibility filtering based on illegal content.
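For context on how figures like these are tallied, the sketch below aggregates hypothetical per-action records into per-country and per-policy counts and a grand total, mirroring the table's structure:

```python
from collections import Counter

# Hypothetical per-action records: (country, detection method, policy).
actions = [
    ("France", "Own Initiative", "Hateful Conduct"),
    ("France", "User Report", "Violent Speech"),
    ("Germany", "User Report", "Hateful Conduct"),
]

by_country = Counter(country for country, _, _ in actions)
by_policy = Counter(policy for _, _, policy in actions)

print(by_country)                # per-country column totals
print(by_policy)                 # per-policy row totals
print(sum(by_country.values()))  # the grand total
```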

ACTIONS TAKEN ON CONTENT FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS

TIUC Terms of Service and Rules Content Removal Actions - Aug 28 to Oct 20*

Detection Method

Enforcement Process

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

User Report

Manual Review

Abuse & Harassment

78

197

83

54

27

123

72

17

85

4,291

1,088

142

49

89

560

348

197

32

4

1,077

976

226

730

17

21

1,214

148

11,945

Child Sexual Exploitation

1

3

1

1

0

0

0

1

0

10

16

0

0

0

2

1

1

0

1

5

15

0

4

0

0

7

0

69

Counterfeit

0

0

0

0

0

0

2

0

0

15

1

0

0

0

1

0

0

0

0

9

0

0

0

0

0

15

0

43

Deceased Individuals

1

4

0

0

0

0

0

0

0

13

9

1

0

1

6

0

0

0

0

2

3

0

2

0

0

6

2

50

Hateful Conduct

3

15

0

1

0

1

0

0

4

130

34

5

2

9

14

0

0

0

0

11

33

3

11

2

0

22

6

306

Illegal or Certain Regulated Goods and Services

2

6

50

22

1

20

1

2

17

1,420

225

13

3

7

94

101

139

0

0

342

214

8

59

8

0

184

2

2,940

Misleading & Deceptive Identities

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Non-Consensual Nudity

0

16

31

1

0

10

2

2

5

132

134

49

7

10

34

6

3

1

0

232

128

1

93

0

1

144

30

1,072

Perpetrators of Violent Attacks

1

0

0

0

0

0

0

0

0

4

0

0

0

1

0

0

0

0

0

4

0

1

0

0

0

2

0

13

Private Information & Media

2

6

1

0

0

3

1

0

1

90

39

0

0

7

8

10

0

0

8

25

26

6

1

0

1

36

2

273

Sensitive Media

26

44

7

5

5

19

6

7

18

360

301

23

16

121

119

8

5

3

0

123

89

38

17

9

4

256

58

1,687

Suicide & Self Harm

12

28

8

5

2

12

22

4

17

177

189

20

10

20

124

5

10

7

0

122

171

117

21

3

5

215

99

1,425

Synthetic & Manipulated Media

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

1

Violent Speech

73

170

25

31

11

72

70

9

51

1,669

1,143

65

55

93

455

26

29

9

2

582

487

329

80

21

20

529

202

6,308

Own Initiative

Automated Means

Abuse & Harassment

3

5

0

0

2

1

3

0

0

19

47

1

1

5

5

0

0

0

0

7

2

3

0

0

0

6

3

113

Hateful Conduct

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

9

0

0

0

2

0

11

Non-Consensual Nudity

2

1

0

0

0

108

19

0

2

201

26

0

44

103

30

26

0

1

0

7

3

4

7

0

0

0

27

611

Other

1

1

0

0

0

0

0

0

0

3

3

0

0

1

0

0

0

0

0

1

0

0

0

0

0

0

1

11

Perpetrators of Violent Attacks

0

0

0

0

0

0

0

0

0

0

1

1

0

0

2

0

0

0

0

2

0

0

1

0

0

0

1

8

Private Information & Media

0

1

0

0

0

1

0

0

0

2

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

6

Sensitive Media

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Violent Speech

224

564

167

109

44

202

260

51

204

6,041

1,986

216

155

592

575

54

100

48

27

1,160

825

346

371

70

61

4,118

597

19,167

Manual Review

Abuse & Harassment

3

2

3

1

0

0

2

0

0

3

7

1

0

0

0

0

3

0

0

0

15

1

7

0

0

1

1

50

Hateful Conduct

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Illegal or Certain Regulated Goods and Services

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

3

0

0

0

0

5

Non-Consensual Nudity

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

1

1

0

0

0

0

0

0

3

Private Information & Media

3

1

0

2

0

4

8

0

0

9

14

0

6

5

2

16

0

0

1

4

3

1

2

0

0

1

1

83

Sensitive Media

140

336

42

41

41

182

106

13

112

1,683

1,866

63

24

96

330

3

69

30

2

1,107

291

70

127

15

51

977

421

8,238

Suicide & Self Harm

5

0

0

1

0

4

2

0

3

8

18

0

1

2

5

2

2

0

0

9

20

4

2

1

0

7

4

100

Violent Speech

3

3

1

0

0

0

0

0

2

7

11

1

1

2

2

0

4

0

0

17

12

2

2

0

0

1

3

74

Grand Total

583

1403

420

274

133

762

576

106

521

16,288

7160

601

374

1,164

2,369

606

562

131

45

4,850

3,315

1,169

1,540

146

164

7,743

1,609

54,614

ACTIONS TAKEN ON ACCOUNTS FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS

TIUC Terms of Service and Rules Account Suspensions - Aug 28 to Oct 20

Detection Method

Enforcement Process

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

User Report

Manual Review

Abuse & Harassment

86

70

47

15

32

73

81

8

36

2,475

726

38

27

63

199

312

266

12

3

887

669

68

162

8

3

486

43

6,895

Ban Evasion

1

0

1

0

0

1

2

0

12

26

14

2

0

1

1

3

2

0

1

6

3

1

0

0

1

5

4

87

Child Sexual Exploitation

194

237

372

122

55

661

138

36

121

4,157

2,150

164

270

1,823

695

557

581

84

63

2,387

2,543

634

1,768

258

32

875

1,566

22,543

Copyright Repeated Infringer 

2

7

2

1

1

6

5

0

4

116

50

3

2

9

41

5

3

1

0

22

41

20

5

1

1

91

8

447

Counterfeit

4

1

3

0

0

5

1

0

3

60

21

1

0

0

5

8

7

1

1

30

13

0

13

0

0

25

2

204

Deceased Individuals

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

1

0

0

0

0

3

Distribution of Hacked Materials

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

1

Financial Scam

3

0

7

1

2

2

0

0

2

48

24

3

0

1

13

1

0

1

0

4

20

0

5

0

0

9

4

150

Hateful Conduct

1

12

1

4

0

5

4

2

7

79

59

4

3

5

14

8

2

0

0

22

41

10

8

1

2

38

11

343

Illegal or Certain Regulated Goods and Services

27

24

54

21

9

56

34

7

24

1,241

416

27

26

43

95

167

253

6

2

341

479

21

89

6

6

187

9

3,670

Misleading & Deceptive Identities

13

29

24

11

9

38

14

4

7

274

213

28

16

112

106

12

16

9

5

164

180

38

70

10

3

208

47

1,660

Non-Consensual Nudity

5

13

9

7

2

8

5

1

4

73

108

16

8

16

26

3

11

0

0

70

76

6

29

2

3

57

13

571

Other

30

27

12

5

5

29

20

2

8

4,744

634

14

17

31

88

67

55

7

3

160

263

23

65

2

0

98

36

6,445

Perpetrators of Violent Attacks

0

1

0

1

0

3

0

3

6

13

12

2

1

6

8

0

0

1

1

6

16

2

1

0

0

18

4

105

Platform Manipulation & Spam

1,861

2,299

2,423

1,216

430

3,623

1,233

194

729

139,102

28,537

1,806

2,671

5,080

209,019

2,494

6,304

923

319

7,201

16,176

1,954

2,592

675

263

7,521

2,483

449,128

Private Information & Media

0

0

1

1

0

0

1

0

0

3

7

1

0

2

2

1

0

0

0

4

3

1

2

0

0

5

0

34

Sensitive Media

2

0

2

0

0

2

1

1

0

15

27

2

3

1

5

1

0

0

0

8

6

1

3

0

0

4

1

85

Suicide & Self Harm

2

4

0

0

0

2

5

0

4

18

17

2

1

3

10

0

1

0

0

8

12

3

4

1

0

13

2

112

Trademark

0

0

0

0

1

0

0

0

0

3

4

1

1

0

2

0

1

0

0

0

2

0

2

0

0

3

0

20

Username Squatting

0

1

0

0

0

2

2

0

0

5

4

1

1

0

3

0

0

0

0

3

0

1

2

1

0

4

2

32

Violent & Hateful Entities

29

38

8

3

18

17

39

4

38

350

537

108

12

12

402

4

5

42

2

325

166

20

42

1

0

110

170

2,502

Violent Speech

228

441

104

100

23

222

312

36

204

4,375

2,859

192

151

360

1,109

58

114

45

20

1,403

1,678

655

258

56

65

1,605

592

17,265

Own Initiative

Automated Means

Child Sexual Exploitation

351

559

372

135

140

698

253

58

265

5,708

8,985

205

432

793

1,327

365

542

591

102

5,989

5,108

627

1,118

256

61

1,711

1,083

37,834

Financial Scam

1

3

7

1

0

6

1

0

0

41

31

0

2

9

7

4

9

3

0

104

30

8

27

1

0

8

2

305

Illegal or Certain Regulated Goods and Services

0

2

1

0

1

3

4

0

0

53

30

0

0

4

3

1

1

5

0

43

41

2

18

0

0

2

0

214

Other

3

7

6

4

0

3

1

0

1

51

26

2

6

0

18

9

4

1

1

31

40

2

10

0

1

32

2

261

Perpetrators of Violent Attacks

0

0

0

1

0

1

6

0

4

8

5

0

0

1

3

0

0

0

0

4

10

7

3

1

0

14

0

68

Platform Manipulation & Spam

11,954

26,623

39,771

15,529

2,465

33,141

33,991

3,790

11,395

261,976

186,543

24,423

34,708

18,735

228,091

20,273

30,171

3,583

1,665

76,471

138,144

32,099

53,887

8,391

5,737

121,340

23,487

1,448,383

Violent & Hateful Entities

8

9

1

1

0

3

5

0

5

73

96

2

1

1

11

1

2

1

0

70

33

3

17

0

1

21

11

376

Manual Review

Abuse & Harassment

1

0

0

1

0

2

0

0

0

17

9

2

0

3

1

0

0

1

0

1

2

0

1

0

0

8

0

49

Grand Total

14,806

30,407

43,228

17,180

3,193

38,612

36,159

4,146

12,879

425,104

232,144

27,049

38,359

27,114

441,304

24,354

38,350

5,317

2,188

95,764

165,797

36,206

60,202

9,671

6,179

134,498

29,582

1,999,792

Important Notes about Action based on TIUC Terms of Service and Rules Violations:

  1. The category “Other” refers to cases of workflow exceptions and tooling inconsistencies that prevent further clarification of which TIUC Terms of Service and Rules policy was violated.
  2. User reports of illegal content which have been actioned under TIUC Terms of Service and Rules are displayed in the table "Actions Taken on Illegal Content".

*A data extraction limitation affects the availability of data from Aug 28 to Sept 23. See the table “TIUC Terms of Service and Rules Content Removal Actions - Sep 5 to Sep 23” in the Appendix.

Orders received from Member States’ authorities including orders issued in accordance with Articles 9 (Removal Orders) and 10 (Information Requests)

REMOVAL ORDERS, Art. 9 DSA

Removal Orders Received - Aug 28 to Oct 20

Illegal Content Category         France   Italy   Spain   Grand Total
Unsafe and/or Illegal Products   1        0       0       1
Illegal or Harmful Speech        0        4       1       5
Grand Total                      1        4       1       6

Removal Orders Median Handle Time (Hours) - Aug 28 to Oct 20

Illegal Content Category

France

Italy

Spain

Unsafe and/or Illegal Products

32

124

Illegal or Harmful Speech

73

Removal Orders Median Time to Acknowledge Receipt - Aug 28 to Oct 20

X provides an automated acknowledgement of receipt of removal orders submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time is zero.

Important Notes about Removal Orders:

  1. To improve clarity, we've omitted countries and violation types with no legal requests from the tables above.
  2. The table “Removal Orders Median Handle Time” shows the category which we considered to fit best and under which we handled the order. This category might deviate from the information provided by the authority when submitting the order via the X online submission platform.
  3. In the cases from France and Spain, we asked the submitting authority to fulfil Article 9 information requirements but did not receive responses in the reporting period.

INFORMATION REQUESTS, Art. 10 DSA

Information Requests Received - Aug 28 to Oct 20

Content Category

Austria

Belgium

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Malta

Netherlands

Poland

Portugal

Spain

Grand Total

Data Protection and Privacy Violations

2

1

1

1

5

Illegal or Harmful Speech

4

3

1

43

623

4

1

6

9

6

700

Intellectual Property Infringements

2

1

3

Negative Effects on Civic Discourse or Elections

2

1

3

Non-Consensual Behaviour

3

23

1

1

28

Not Specified

1

4

7

1

3

4

20

Other

2

8

4

1

1

16

Pornography or Sexualized Content

13

13

Protection of Minors

7

1

31

2

1

1

43

Risk for Public Security

19

654

16

1

17

1

1

7

716

Scams and/or Fraud

1

1

2

7

1

1

1

1

15

Self-Harm

1

1

Unsafe and/or Illegal Products

4

2

6

Violence

1

71

61

1

2

6

8

7

1

1

159

Grand Total

6

32

2

787

795

9

1

4

33

1

9

22

3

24

1,728

Information Request Median Time to Acknowledge Receipt - Aug 28 to Oct 20

X provides an automated acknowledgement of receipt of information requests submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time is zero.

Information Request Median Handle Time (Hours) - Aug 28 to Oct 20

Content Category

Austria

Belgium

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Malta

Netherlands

Poland

Portugal

Spain

Data Protection and Privacy Violations

152

141

146

173

Illegal or Harmful Speech

146

42

146

138

127

64

73

164

78

129

Intellectual Property Infringements

114

175

Negative Effects on Civic Discourse or Elections

183

20

Non-Consensual Behaviour

21

117

124

170

Not Specified

219

35

24

5

1

2

Other

170

152

149

2

165

Pornography or Sexualized Content

56

Protection of Minors

2

49

4

2

26

209

Risk for Public Security

5

8

47

18

149

43

126

74

Scams and/or Fraud

30

172

169

120

120

20

124

73

Self-Harm

194

Unsafe and/or Illegal Products

146

126

Violence

154

132

119

51

19

147

19

48

241

190

Important Notes about Information Requests:

  1. The content category for each request is determined by the information law enforcement provides while submitting such requests through the X online submission platform.
  2. The median handling time is the time between receiving the request and either (i) disclosing information to law enforcement if the request is valid, or (ii) pushing back due to legal issues; a minimal computation sketch follows these notes. The median handling time does not include the extra time where X pushes back due to legal issues, later receives a valid request, and disclosure is eventually made.
  3. To improve clarity, we've omitted countries and violation types with zero legal requests from the tables above.
  4. The “Not Specified” category shows cases where the illegal content category could not be determined based on the information law enforcement provided during the submission process.
  5. The “Other” category here shows cases where law enforcement selects “Cybercrime” as the content category during the case submission process without providing more details to determine a more specific content category.
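A minimal sketch of the median handle time computation defined in note 2 above, using assumed timestamps:

```python
import statistics
from datetime import datetime

def median_handle_time_hours(handled) -> float:
    """Median hours from receipt to resolution (disclosure or push-back)."""
    hours = [(resolved - received).total_seconds() / 3600
             for received, resolved in handled]
    return statistics.median(hours)

# Assumed (received, resolved) timestamp pairs:
handled = [
    (datetime(2023, 9, 1, 9, 0), datetime(2023, 9, 3, 9, 0)),   # 48 h
    (datetime(2023, 9, 2, 9, 0), datetime(2023, 9, 2, 21, 0)),  # 12 h
    (datetime(2023, 9, 4, 9, 0), datetime(2023, 9, 7, 9, 0)),   # 72 h
]
print(median_handle_time_hours(handled))  # 48.0
```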

Reports submitted in accordance with Article 16 (Illegal Content)

ACTIONS TAKEN ON ILLEGAL CONTENT:

ACTIONS TAKEN ON ACCOUNTS FOR POSTING ILLEGAL CONTENT: We suspended accounts in response to 855 reports of Intellectual Property Infringements. This was the only type of violation of local law that resulted in account suspension, as many types of illegal behaviour are already addressed by our policies (for example, accounts are suspended for posting CSE). On our own initiative, we withheld 1 account for breaching local laws connected to unsafe and/or illegal products.

We also withheld 15 accounts, each in a single Member State, for the provision of illegal content.

REPORTS OF ILLEGAL CONTENT

Illegal Content Reports Received - Aug 28 to Oct 20

Content Category

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

EU

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

Animal Welfare

14

4

1

2

4

2

4

1

88

4

96

58

3

1

11

10

1

1

1

1

15

16

3

2

2

2

56

3

392

Data Protection & Privacy Violations

18

50

7

5

6

16

21

6

500

13

727

592

45

9

90

98

5

0

1

0

164

94

60

17

3

10

703

27

3,269

Illegal or Harmful Speech

397

448

46

34

32

205

175

46

5,258

133

9,499

11,265

198

32

335

1203

60

41

35

7

995

893

626

96

27

26

3088

203

35,006

Intellectual Property Infringements

16

19

4

14

17

8

14

2

0

35

737

872

19

7

64

185

7

29

5

4

701

601

262

52

0

0

835

21

4,531

Negative Effects on Civic Discourse or Elections

27

33

6

1

3

25

12

7

475

14

314

934

15

7

26

132

3

5

2

1

219

514

24

16

14

3

127

8

2,940

Non-Consensual Behaviour

15

16

2

4

4

2

11

4

179

9

196

143

7

15

35

36

0

2

0

0

34

17

16

2

1

0

186

22

943

Pornography or Sexualized Content

38

50

9

3

4

25

23

1

468

10

865

641

44

109

55

145

5

3

2

2

113

107

67

48

8

1

324

26

3,158

Protection of Minors

43

49

11

7

3

20

24

3

462

22

672

564

24

7

65

57

12

0

1

0

107

78

17

8

2

13

305

17

2,550

Risk for Public Security

39

105

8

4

4

46

13

8

414

24

981

950

17

8

22

59

9

5

1

0

120

111

35

9

3

4

181

22

3,163

Scams and/or Fraud

96

140

12

23

33

83

90

20

833

48

1292

749

46

42

233

356

8

65

34

1

520

300

177

79

7

9

743

70

6,013

Scope of Platform Service

3

2

1

0

0

0

0

0

53

1

31

35

0

0

3

9

4

0

0

0

10

4

8

0

0

0

28

0

189

Self-Harm

1

4

0

1

2

4

5

0

74

2

41

72

0

0

4

8

1

0

1

0

7

11

4

2

0

0

56

6

305

Unsafe and Illegal Products

5

20

0

0

2

6

2

2

126

5

600

179

4

0

21

18

8

3

0

1

55

21

19

5

1

4

105

17

1224

Violence

57

177

12

4

4

45

41

7

1095

47

2274

1448

37

10

78

219

8

3

5

11

182

135

94

16

5

6

743

64

6,770

Grand Total

769

1117

119

102

118

487

435

107

10,025

367

18,325

18,502

459

247

1042

2,536

131

157

88

28

3,242

2,902

1412

352

73

78

7,480

506

71,206

REPORTS RESOLVED BY ACTIONS TAKEN ON ILLEGAL CONTENT

Actions Taken on Illegal Content - Aug 28 to Oct 20

Enforcement Process

Action Type

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

EU

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

Automated Means

Global content deletion based on a violation of TIUC Terms of Service and Rules

Illegal or Harmful Speech

1

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

3

Non-Consensual Behaviour

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

8

0

8

Self-Harm

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

0

2

Violence

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Country withheld Content

Data Protection & Privacy Violations

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

1

Illegal or Harmful Speech

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

1

No Violation Found

Animal Welfare

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

0

0

0

0

2

0

0

1

0

0

7

0

14

Data Protection & Privacy Violations

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

1

0

0

0

0

1

1

1

0

0

0

3

0

8

Illegal or Harmful Speech

5

2

0

0

0

0

0

0

202

0

0

0

0

0

3

3

0

0

0

0

1

0

1

0

0

0

13

0

230

Non-Consensual Behaviour

0

0

1

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

3

Pornography or Sexualized Content

0

0

0

0

0

0

1

0

2

0

0

0

1

5

0

2

0

0

0

0

0

0

0

0

0

0

7

1

19

Protection of Minors

1

0

0

0

0

0

0

0

9

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

8

0

18

Risk for Public Security

0

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

Scams and Fraud

8

5

1

0

2

0

4

1

33

3

0

0

0

1

5

4

0

0

3

0

35

3

1

12

0

0

15

5

141

Scope of Platform Service

0

0

0

0

0

0

0

0

7

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

1

0

9

Self-Harm

0

1

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

3

Unsafe and Illegal Products

0

0

0

0

1

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

3

0

0

0

0

0

1

0

6

Violence

0

1

0

0

0

0

2

0

9

0

0

0

0

0

1

1

0

0

0

0

1

3

0

0

0

0

10

0

28

Manual Closure

Global content deletion based on TIUC Terms of Service and Rules

Animal Welfare

0

0

0

0

0

0

0

0

14

0

11

8

0

0

0

0

0

0

0

0

3

1

0

0

1

0

0

0

38

Data Protection & Privacy Violations

1

2

0

0

0

3

3

0

15

1

15

33

4

0

3

4

0

0

0

0

5

7

7

1

0

1

15

3

123

Illegal or Harmful Speech

26

14

2

4

3

10

5

1

231

7

440

1,270

9

0

10

37

2

5

0

0

40

62

41

6

0

10

73

29

2,337

Negative Effects on Civic Discourse or Elections

0

2

0

0

0

0

0

0

2

0

3

8

0

1

2

0

0

0

0

0

0

1

0

0

0

0

0

1

20

Non-Consensual Behaviour

1

1

0

0

0

0

0

0

35

0

13

14

1

0

1

0

0

0

0

0

1

1

0

0

0

0

13

0

81

Pornography or Sexualized Content

16

1

0

1

0

3

8

1

80

4

55

108

4

0

6

11

1

0

0

0

19

13

4

9

1

0

30

6

381

Protection of Minors

5

8

3

1

0

3

5

1

211

12

152

308

6

2

17

5

1

0

0

0

51

23

5

3

0

1

98

7

928

Risk for Public Security

0

3

0

1

0

1

0

0

28

0

56

87

2

0

3

2

0

0

0

0

5

5

6

0

0

0

33

5

237

Scams and Fraud

2

0

0

0

0

1

0

0

12

0

3

1

0

0

0

0

0

0

0

0

27

1

0

0

0

0

2

0

49

Scope of Platform Service

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

4

Self-Harm

0

0

0

0

0

0

1

0

8

0

2

5

0

0

1

0

1

0

0

0

0

3

1

0

0

0

12

1

35

Unsafe and Illegal Products

1

0

0

0

0

0

1

0

3

1

69

19

0

0

7

0

0

0

0

0

3

0

0

1

0

0

0

2

107

Violence

11

7

0

0

1

4

3

0

120

9

215

192

4

1

8

26

0

1

0

0

34

25

12

2

0

1

48

17

741

Temporary suspension and global content deletion based on TIUC Terms of Service and Rules

Data Protection & Privacy Violations

0

0

0

0

0

0

1

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

Illegal or Harmful Speech

0

0

0

0

0

0

0

0

0

0

16

2

0

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

20

Pornography or Sexualized Content

0

0

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

4

Protection of Minors

0

0

0

0

0

0

0

0

0

0

1

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

Risk for Public Security

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Scams and Fraud

0

0

0

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

Violence

0

0

0

0

0

0

0

0

0

0

28

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

29

Offer of help in case of self-harm and suicide concern based on TIUC Terms of Service and Rules

Protection of minors

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

1

Self-Harm

0

0

0

0

0

0

0

0

5

0

1

3

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

0

11

Content removed globally

Animal Welfare

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Data Protection & Privacy Violations

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Illegal or Harmful Speech

0

2

0

0

3

0

2

0

0

0

1

5

1

0

0

1

0

0

0

0

0

0

0

0

0

0

3

0

18

Non-Consensual Behaviour

0

3

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

3

Pornography or Sexualized Content

0

1

0

0

0

0

0

0

2

0

5

3

0

1

0

2

0

0

0

0

5

0

0

6

0

0

1

0

26

Protection of Minors

0

5

0

0

0

2

0

0

7

0

0

8

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

0

23

Risk for Public Security

0

0

0

0

0

0

0

0

6

0

3

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

9

Scams and Fraud

0

0

0

0

0

0

0

0

1

0

7

0

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

12

Self-Harm

0

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Unsafe and Illegal Products

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

5

0

0

0

0

0

5

Country withheld Account

Illegal or Harmful Speech

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

0

2

Pornography or Sexualized Content

0

0

0

0

0

0

0

0

0

0

0

8

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

8

Scams and fraud

0

0

0

0

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

1

Violence

0

0

0

0

0

0

0

0

0

0

4

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

4

Country withheld Content

Animal Welfare

1

2

0

0

0

0

0

0

5

0

2

6

0

0

0

1

0

0

0

0

0

2

0

0

0

0

8

0

27

Data Protection & Privacy Violations

0

3

1

0

1

3

2

0

30

1

61

91

2

5

24

12

0

0

0

0

24

7

13

5

0

0

46

2

333

Illegal or Harmful Speech

84

92

2

9

2

39

32

4

1,131

21

2,433

2,962

32

5

90

315

9

3

6

0

209

204

138

16

12

2

698

49

8,599

Negative Effects on Civic Discourse or Elections

5

2

0

0

0

1

0

0

26

0

8

88

1

0

2

7

0

0

1

0

6

30

6

0

0

0

7

0

190

Non-Consensual Behaviour

0

2

0

0

0

0

0

2

36

1

29

32

1

0

4

2

0

1

0

0

7

3

0

1

0

0

14

0

135

Pornography or Sexualized Content

6

9

6

2

1

3

4

0

104

3

116

230

3

4

10

23

3

2

1

0

14

24

15

8

1

0

100

6

698

Protection of Minors

1

6

0

2

0

1

2

0

19

2

21

50

2

0

10

5

0

0

1

0

3

5

5

3

0

0

15

3

156

Risk for Public Security

2

8

0

0

1

2

0

0

19

0

46

89

3

0

0

4

1

0

0

0

3

12

2

0

0

0

2

3

197

Scams and Fraud

6

2

0

0

0

6

5

1

29

2

34

54

1

0

3

5

0

2

0

0

16

24

2

3

0

2

16

3

216

Scope of Platform Service

1

0

0

0

0

0

0

0

0

0

0

4

0

0

0

2

0

0

0

0

0

0

1

0

0

0

0

0

8

Self-Harm

0

0

1

0

1

0

0

0

6

0

5

2

0

0

0

0

0

0

0

0

0

0

0

0

0

0

2

2

19

Unsafe and Illegal Products

0

2

0

0

0

3

0

0

16

1

277

33

0

0

2

0

1

0

0

0

13

6

0

1

0

0

8

2

365

Violence

5

6

3

0

0

8

8

0

81

15

518

212

5

1

11

27

3

0

1

0

19

19

18

3

1

0

68

23

1,055

Globally withheld content

Intellectual Property Infringements

4

10

0

0

0

6

5

0

0

31

561

167

17

2

17

84

2

17

0

3

101

450

38

22

0

0

283

9

1,829

Account Suspension

Intellectual Property Infringements

1

1

0

6

5

0

1

0

0

1

28

498

0

0

7

22

1

7

0

0

27

39

88

19

0

0

104

0

855

No Violation Found

Animal Welfare

13

3

1

2

4

2

4

1

61

4

56

45

4

1

9

9

1

1

1

1

10

15

3

1

1

2

42

3

300

Data Protection & Privacy Violations

17

47

5

5

4

10

15

6

452

11

470

443

39

2

63

82

5

0

3

0

135

78

49

12

3

9

632

22

2,619

Illegal or Harmful Speech

280

340

44

20

25

165

133

40

3,729

109

5,460

6,887

154

27

234

872

52

35

33

7

721

645

456

80

16

16

2,356

125

23,061

Negative Effects on Civic Discourse or Elections

21

30

8

1

3

25

13

7

462

15

259

833

14

5

23

129

3

5

1

1

211

496

17

17

15

3

121

9

2,747

Non-Consensual Behaviour

13

10

1

5

2

2

12

2

94

8

102

87

5

15

27

34

0

1

0

0

27

13

15

1

1

0

147

21

645

Pornography or Sexualized Content

16

40

3

0

3

19

10

0

283

3

321

293

41

93

41

103

1

1

1

2

75

69

48

24

6

2

176

12

1,686

Protection of Minors

36

25

8

4

3

15

17

2

229

9

293

198

16

5

39

46

12

0

0

0

55

45

6

2

2

12

185

7

1,271

Risk for Public Security

37

90

8

3

3

42

14

9

363

24

729

756

12

6

19

53

9

5

1

0

115

97

28

9

3

4

148

12

2,599

Scams and Fraud

65

130

11

19

31

76

79

19

730

42

488

646

45

37

184

311

8

63

30

1

365

262

141

61

7

7

660

61

4,579

Scope of Platform Service

2

2

1

0

0

0

0

0

38

1

9

30

0

0

3

10

4

0

0

0

10

3

5

0

0

0

27

0

145

Self-Harm

1

3

0

1

1

4

4

0

57

2

28

61

0

0

3

12

0

0

3

0

7

8

4

2

0

0

37

3

241

Unsafe and Illegal Products

3

16

0

0

1

3

1

2

106

4

179

126

5

1

12

18

7

3

0

1

38

15

11

3

1

4

98

13

671

Violence

39

156

7

4

3

33

27

7

901

25

1,225

1,034

28

8

59

170

5

2

7

11

125

92

63

10

4

5

588

24

4,662

Grand Total

737

1,095

117

90

105

495

424

106

10,066

372

14,866

18,043

462

228

964

2,459

132

154

93

27

2,579

2,813

1,257

344

75

81

6,997

491

65,672

REPORTS OF ILLEGAL CONTENT MEDIAN HANDLE TIME

Reports of Illegal Content Median Handle Time (Hours) - Aug 28 to Oct 20

Enforcement Process

Action Type

Reason Code

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

EU

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Automated Means

Global content deletion based on TIUC Terms of Service and Rules

Illegal or Harmful Speech

34.3

29.5

Non-Consensual Behaviour

92.8

Self-Harm

92.7

Violence

21.4

Country withheld Content

Data Protection & Privacy Violations

117.0

Illegal or Harmful Speech

106.3

No violation found

Animal Welfare

45.7

82.9

26.5

47.9

Data Protection & Privacy Violations

30.1

32.9

50.3

33.7

104.0

39.4

Illegal or Harmful Speech

34.4

33.1

36.6

24.3

27.7

106.3

79.0

43.0

Non-Consensual Behaviour

44.5

29.5

30.5

Pornography or Sexualized Content

25.7

22.6

69.5

48.7

89.2

32.1

22.3

Protection of Minors

22.1

27.0

24.0

Risk for Public Security

96.1

Scams and Fraud

75.6

73.5

102.2

202.8

26.1

91.4

44.3

28.7

73.4

57.8

35.1

223.8

45.2

34.1

52.3

75.4

44.0

27.2

Scope of Platform Service

42.5

85.2

36.0

Self-Harm

52.2

21.3

49.1

Unsafe and Illegal Products

35.4

40.8

29.8

19.2

Violence

70.6

31.0

115.2

47.8

24.1

20.5

28.2

42.8

Manual Closure

Global content deletion based on TIUC Terms of Service and Rules

Animal Welfare

3.0

8.6

8.4

91.0

13.2

Data Protection & Privacy Violations

10.7

0.5

17.2

13.7

12.3

37.7

6.4

3.2

42.3

11.1

14.8

16.0

158.7

3.1

1.8

0.9

50.0

2.2

Illegal or Harmful Speech

13.3

10.5

7.6

15.2

12.6

2.1

2.7

0.2

8.0

9.6

4.8

3.1

11.2

12.6

9.6

45.0

5.4

13.0

12.8

6.3

9.0

22.8

10.4

14.6

Negative Effects on Civic Discourse or Elections

37.3

178.3

13.0

6.3

0.1

5.8

22.5

11.2

Non-Consensual Behaviour

3.1

53.4

33.2

12.0

4.1

10.6

26.9

88.3

15.7

1.3

11.5

Pornography or Sexualized Content

10.8

10.4

11.2

15.3

9.6

3.0

10.8

3.3

7.9

5.1

8.3

11.0

2.8

1.2

19.4

2.4

10.3

14.3

13.6

13.3

11.3

Protection of Minors

9.6

12.4

15.9

75.5

13.8

15.4

20.5

7.7

4.8

3.9

3.8

3.3

28.0

9.7

11.0

8.6

2.1

10.0

14.8

16.5

9.9

12.0

Risk for Public Security

5.6

2.3

0.8

11.8

4.1

1.5

14.5

10.1

0.7

1.4

6.0

4.3

0.3

11.8

Scams and Fraud

2.8

1.2

62.2

1.3

1.4

63.0

180.9

36.7

Scope of Platform Service

62.4

Self-Harm

1.5

3.0

13.5

8.9

10.8

16.9

17.7

0.9

36.8

0.2

Unsafe and Illegal Products

1.2

21.5

13.1

5.2

4.1

1.3

5.3

12.6

1.5

31.7

Violence

11.7

6.3

0.0

22.8

0.5

5.2

12.3

4.3

3.1

2.8

6.8

51.1

14.6

15.7

16.2

11.8

11.3

6.1

0.3

17.6

11.3

1.7

Temporary suspension and global content deletion based on TIUC Terms of Service and Rules

Data Protection & Privacy Violations

4.5

1.1

Illegal or Harmful Speech

1.4

2.7

18.7

Pornography or Sexualized Content

4.6

Protection of Minors

14.4

17.1

Risk for Public Security

26.5

Scams and Fraud

14.1

Violence

4.5

1.3

Offer of help in case of self-harm and suicide concern based on TIUC Terms of Service and Rules

Protection of minors

22.2

Self-harm

12.9

4.5

3.9

8.4

Content removed globally

Animal Welfare

0.0

Data Protection & Privacy Violations

0.0

Illegal or Harmful Speech

74.8

0.4

11.7

34.0

0.1

0.0

24.4

155.3

Non-Consensual Behaviour

102.9

Pornography or Sexualized Content

52.2

88.1

4.0

16.2

0.4

160.4

146.5

23.8

13.3

Protection of Minors

2.0

8.9

23.9

17.2

27.6

Risk for Public Security

32.1

26.5

Scams and Fraud

502.2

19.6

101.8

Self-Harm

1.2

Unsafe and Illegal Products

18.6

Country withheld Account

Illegal or Harmful Speech

51.5

Pornography or Sexualized Content

31.0

Scams and Fraud

33.0

Violence

6.9

Country withheld Content

Animal Welfare

0.0

10.1

18.7

13.0

9.1

6.6

7.5

10.8

Data Protection & Privacy Violations

17.2

157.6

0.4

139.4

0.3

90.6

53.3

6.6

2.2

104.2

33.9

125.9

52.8

10.2

123.5

9.7

26.3

55.4

108.2

Illegal or Harmful Speech

12.9

3.6

4.1

12.4

22.9

14.3

10.3

72.7

126.2

13.3

8.2

3.0

2.7

1.2

11.2

5.8

11.2

21.3

0.3

17.6

11.8

9.9

10.2

1.5

7.4

49.9

11.4

Negative Effects on Civic Discourse or Elections

8.4

17.1

1.8

157.0

15.1

2.8

1.8

35.4

4.7

11.7

9.6

12.7

3.1

47.4

Non-Consensual Behaviour

12.7

110.7

128.0

5.0

10.9

3.0

134.9

38.2

43.0

16.2

5.0

121.6

12.5

46.8

Pornography or Sexualized Content

12.3

17.0

18.8

9.9

47.3

1.8

35.2

88.0

3.8

12.4

5.2

7.9

49.0

10.1

4.1

12.5

51.5

4.3

13.5

19.5

1.5

1.9

77.4

13.8

31.1

Protection of Minors

8.1

12.1

0.6

162.2

28.9

44.2

138.5

12.1

10.2

4.8

9.0

93.0

0.2

11.2

4.1

11.9

4.6

17.2

19.3

Risk for Public Security

96.3

2.8

167.6

73.5

141.0

3.1

6.8

2.1

143.2

7.2

13.0

146.6

27.0

6.6

19.2

2.4

Scams and Fraud

109.1

124.0

161.3

72.8

137.1

141.1

37.3

11.3

128.6

6.0

185.6

150.2

161.2

130.1

71.8

80.3

170.9

79.4

135.4

138.5

Scope of Platform Service

6.3

0.9

23.9

31.2

Self-Harm

1.8

67.2

9.6

0.3

27.8

14.8

Unsafe and Illegal Products

165.7

49.0

111.2

2.0

12.4

0.5

66.9

87.8

18.8

2.5

97.1

90.0

Violence

59.1

1.2

13.3

12.0

51.1

122.0

4.9

2.4

4.7

3.0

0.2

1.6

11.5

18.5

3.1

15.8

16.4

8.9

0.3

20.9

18.8

14.6

Globally withheld Content

Intellectual Property Infringements

5.6

3.1

5.6

6.3

2.8

0.6

2.6

3.6

0.4

2.8

1.5

53.2

0.4

7.6

2.6

0.5

3.8

1.8

2.5

1.6

Account Suspension

Intellectual Property Infringements

81.2

14.7

31.8

58.8

77.7

63.3

81.7

50.7

29.5

66.7

68.6

86.3

94.6

56.5

32.8

43.1

77.4

No violation found

Animal Welfare

53.6

16.2

20.5

14.4

18.6

10.3

9.3

20.5

18.3

16.3

17.2

1.0

40.3

20.4

0.0

20.4

20.4

20.4

20.4

20.4

5.1

13.5

10.4

20.4

20.3

16.5

16.4

20.5

Data Protection & Privacy Violations

3.1

11.1

73.6

8.7

3.3

5.1

24.2

7.6

13.0

9.7

13.5

2.4

1.9

59.8

47.1

15.2

2.1

31.3

12.9

17.8

13.5

12.1

0.4

13.3

16.2

8.3

Illegal or Harmful Speech

9.6

7.6

13.1

12.2

16.7

9.2

4.4

12.8

11.1

14.3

11.0

2.7

8.4

16.4

11.0

8.4

11.8

10.9

2.4

119.7

14.8

12.7

8.0

11.6

2.0

13.9

9.9

4.4

Intellectual property infringements

28.7

4.5

2.1

31.7

101.8

47.2

52.8

6.1

4.5

37.2

33.9

1.1

9.5

25.6

10.4

48.6

36.1

38.4

30.2

53.0

35.4

25.9

43.2

53.5

56.8

Negative Effects on Civic Discourse or Elections

9.2

8.8

14.4

4.4

0.2

2.2

2.0

10.1

7.7

14.3

19.2

2.1

4.3

14.4

10.4

2.8

6.1

12.2

259.9

63.1

10.9

5.9

10.2

27.5

2.3

2.7

11.1

0.7

Non-Consensual Behaviour

4.7

8.1

16.9

63.0

6.5

5.7

17.1

5.9

16.0

45.5

11.8

4.9

12.3

84.0

20.7

14.2

143.8

11.8

17.3

19.1

0.1

0.4

11.4

9.6

Pornography or Sexualized Content

5.4

13.3

13.8

68.7

15.0

4.4

12.9

4.1

11.0

10.6

21.7

15.6

11.8

17.1

12.3

15.5

49.4

2.4

16.3

11.0

14.2

11.2

54.1

11.7

13.7

11.3

Protection of Minors

11.2

8.6

8.6

14.6

8.6

9.0

13.5

8.8

13.7

6.3

12.5

3.7

6.6

11.3

10.2

11.6

11.1

14.0

13.1

13.4

2.2

9.8

17.7

14.2

12.1

Risk for Public Security

5.0

9.0

1.5

15.4

3.6

3.1

2.3

8.2

8.7

3.5

10.9

4.2

16.0

13.2

7.7

6.3

10.3

9.9

4.1

13.0

11.6

11.7

10.6

0.3

5.4

9.9

5.2

Scams and Fraud

17.5

18.7

3.6

3.4

12.3

12.6

14.8

1.5

18.7

88.6

16.0

5.4

20.6

16.8

12.7

20.2

21.4

108.5

3.6

14.0

17.2

13.2

24.2

4.7

16.6

48.6

13.7

75.0

Scope of Platform Service

114.5

73.2

7.5

9.4

4.6

5.9

7.3

15.2

18.5

15.6

4.5

15.8

27.0

23.6

Self-Harm

2.1

13.6

0.4

1.0

41.4

47.4

11.2

1.5

11.6

9.5

15.8

27.8

1.8

23.5

12.7

2.3

23.9

11.1

2.9

Unsafe and Illegal Products

12.5

13.4

77.1

10.7

7.8

6.8

13.4

12.6

18.7

7.4

15.5

12.5

142.9

7.0

13.9

1.7

4.2

4.2

22.5

1.0

10.9

17.1

10.7

20.2

Violence

11.9

9.8

13.1

32.8

89.8

10.3

49.9

83.4

10.8

8.5

10.4

3.7

13.2

7.7

10.4

10.7

2.5

0.9

1.6

2.1

11.0

12.9

10.6

10.3

12.5

1.7

10.9

9.1

Important Notes about Actions taken on illegal content:

  1. Any disparity between reports received and reports handled is caused by cases still pending at the end of the reporting period.
  2. We only use automated means to close user reports of illegal content where: (i) the reported content is no longer accessible to the reporter following other means/workflows; or (ii) the reporter displays bad-actor patterns (a minimal sketch of this decision rule follows these notes).
  3. The numbers for “Intellectual property infringements” reflect reports rather than individual items of content and accounts. Actions taken against intellectual property infringements are applied globally, meaning that media that infringes copyright and accounts that infringe trademarks are disabled globally.
  4. Action Types: actions that do not reference TIUC Terms of Service and Rules have been taken based on illegality.
  5. To improve clarity, we've omitted countries and violation types with zero reports from the tables above.
  6. The tables REPORTS RESOLVED BY ACTIONS TAKEN ON ILLEGAL CONTENT and REPORTS OF ILLEGAL CONTENT MEDIAN HANDLE TIME were updated on 13 November 2023 to replace an undefined description "reported content" with the relevant enforcement method "manual closure".
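For illustration, the two closure criteria in note 2 can be read as a simple decision rule. A minimal sketch, assuming hypothetical helper predicates (`is_still_accessible_to` and `has_bad_actor_patterns` are illustrative names, not X's actual systems):

```python
# Hypothetical sketch of the two automated-closure criteria in note 2 above.
from dataclasses import dataclass

@dataclass
class Report:
    reporter_id: str
    content_id: str

def can_auto_close(report: Report, is_still_accessible_to, has_bad_actor_patterns) -> bool:
    # (i) the reported content is no longer accessible to the reporter,
    # e.g. because it was already removed via another workflow
    if not is_still_accessible_to(report.content_id, report.reporter_id):
        return True
    # (ii) the reporter displays bad-actor patterns
    if has_bad_actor_patterns(report.reporter_id):
        return True
    # otherwise the report goes to human review
    return False
```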

Complaints received through our internal complaint-handling system.

COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT RECEIVED

Illegal Content Complaints Received - Aug 28 to Oct 20

| Country | Complaints |
|---|---|
| Austria | 3 |
| Belgium | 8 |
| Bulgaria | 1 |
| Croatia | 1 |
| Cyprus | 1 |
| Czechia | 1 |
| Denmark | 5 |
| Estonia | 3 |
| EU | 33 |
| Finland | 1 |
| France | 52 |
| Germany | 33 |
| Greece | 1 |
| Ireland | 10 |
| Italy | 15 |
| Latvia | 2 |
| Luxembourg | 5 |
| Netherlands | 5 |
| Poland | 5 |
| Portugal | 6 |
| Slovenia | 1 |
| Spain | 14 |
| Sweden | 2 |
| Grand Total | 208 |

COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT DECISIONS

Illegal Content Complaints Actioned - Aug 28 to Oct 20

| Country | Overturned Appeal | Rejected Appeal |
|---|---|---|
| Austria | 1 | 2 |
| Belgium | 3 | 5 |
| Bulgaria | – | 1 |
| Croatia | – | 1 |
| Cyprus | – | 1 |
| Czechia | – | 1 |
| Denmark | – | 5 |
| Estonia | – | 3 |
| EU | 2 | 31 |
| Finland | – | 1 |
| France | 3 | 49 |
| Germany | 13 | 20 |
| Greece | – | 1 |
| Ireland | 1 | 9 |
| Italy | – | 15 |
| Latvia | 1 | 1 |
| Luxembourg | 2 | 3 |
| Netherlands | – | 5 |
| Poland | 1 | 4 |
| Portugal | 3 | 3 |
| Slovenia | – | 1 |
| Spain | 4 | 10 |
| Sweden | 1 | 1 |
| Grand Total | 35 | 173 |

(– indicates none reported.)
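One property of the two tables above worth noting: for every country, the overturned and rejected decisions sum exactly to the complaints received, i.e. every complaint received in this period was decided. A minimal sketch of that reconciliation (dictionaries truncated to a few countries from the tables above):

```python
# Reconciliation: complaints received == overturned + rejected, per country.
received   = {"Austria": 3, "Belgium": 8, "EU": 33, "France": 52, "Germany": 33}
overturned = {"Austria": 1, "Belgium": 3, "EU": 2,  "France": 3,  "Germany": 13}
rejected   = {"Austria": 2, "Belgium": 5, "EU": 31, "France": 49, "Germany": 20}

for country, total in received.items():
    # .get(..., 0) treats the blank cells in the table as zero
    assert total == overturned.get(country, 0) + rejected.get(country, 0)
```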

COMPLAINTS OF ACTIONS TAKEN FOR ILLEGAL CONTENT MEDIAN HANDLE TIME

Illegal Content Complaints Median Handle Time (Hours) - Aug 28 to Oct 20

| Country | Median Handle Time (Hours) |
|---|---|
| Austria | 3.8 |
| Belgium | 15.4 |
| Bulgaria | 329 |
| Croatia | 0.9 |
| Cyprus | 0 |
| Czechia | 131.6 |
| Denmark | 1.7 |
| Estonia | 168.4 |
| EU | 13.2 |
| Finland | 68 |
| France | 4.7 |
| Germany | 2 |
| Greece | 24.1 |
| Ireland | 3.4 |
| Italy | 16 |
| Latvia | 71.1 |
| Luxembourg | 0 |
| Netherlands | 4.3 |
| Poland | 8.2 |
| Portugal | 5.7 |
| Slovenia | 199.2 |
| Spain | 9.4 |
| Sweden | 8.4 |
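For context on how a median handle time in hours is computed, here is a minimal sketch, assuming handle time is the elapsed time between a complaint being opened and closed (the timestamps are illustrative, not actual case data):

```python
from datetime import datetime
from statistics import median

# Illustrative (opened, closed) timestamp pairs for one country's complaints.
cases = [
    (datetime(2023, 9, 1, 10, 0), datetime(2023, 9, 1, 13, 48)),
    (datetime(2023, 9, 2, 8, 0),  datetime(2023, 9, 2, 9, 0)),
    (datetime(2023, 9, 3, 12, 0), datetime(2023, 9, 4, 12, 0)),
]

# Handle time in hours for each case, then the median across cases.
handle_hours = [(closed - opened).total_seconds() / 3600 for opened, closed in cases]
print(round(median(handle_hours), 1))  # -> 3.8 for this illustrative set
```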

COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS RECEIVED

TIUC Terms of Service and Rules Action Complaints - Aug 28 to Oct 20

| Country | Account Suspension Complaints | Content Action Complaints | Live Feature Action Complaints | Restricted Reach Complaints | Sensitive Media Action Complaints | Grand Total |
|---|---|---|---|---|---|---|
| Austria | 1,006 | 70 | 1 | 48 | 5 | 1,130 |
| Belgium | 1,758 | 149 | 4 | 86 | 11 | 2,008 |
| Bulgaria | 741 | 16 | 1 | 21 | 1 | 780 |
| Croatia | 407 | 19 | 2 | 35 | 4 | 467 |
| Cyprus | 242 | 15 | – | 10 | 3 | 270 |
| Czechia | 952 | 70 | – | 57 | 12 | 1,091 |
| Denmark | 1,010 | 54 | 1 | 50 | 4 | 1,119 |
| Estonia | 321 | 10 | – | 14 | 3 | 348 |
| Finland | 1,101 | 45 | – | 66 | 7 | 1,219 |
| France | 16,340 | 1,296 | 45 | 371 | 49 | 18,101 |
| Germany | 24,594 | 960 | 32 | 470 | 129 | 26,185 |
| Greece | 1,067 | 50 | 1 | 41 | 4 | 1,163 |
| Hungary | 874 | 27 | 1 | 17 | 3 | 922 |
| Ireland | 1,456 | 176 | 2 | 195 | 15 | 1,844 |
| Italy | 5,837 | 177 | 10 | 145 | 22 | 6,191 |
| Latvia | 318 | 10 | 1 | 10 | – | 339 |
| Lithuania | 535 | 17 | 1 | 8 | – | 561 |
| Luxembourg | 965 | 13 | 3 | 12 | 3 | 996 |
| Malta | 122 | 8 | – | 4 | – | 134 |
| Netherlands | 13,939 | 340 | 20 | 350 | 58 | 14,707 |
| Poland | 6,688 | 180 | 8 | 217 | 21 | 7,114 |
| Portugal | 2,457 | 135 | 6 | 65 | 3 | 2,666 |
| Romania | 1,501 | 68 | 4 | 38 | 6 | 1,617 |
| Slovakia | 333 | 22 | 1 | 15 | 3 | 374 |
| Slovenia | 208 | 12 | – | 30 | 1 | 251 |
| Spain | 12,365 | 1,068 | 7 | 454 | 23 | 13,917 |
| Sweden | 2,202 | 108 | 11 | 188 | 21 | 2,530 |
| Grand Total | 99,339 | 5,115 | 162 | 3,017 | 411 | 108,044 |

(– indicates no complaints reported.)

COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS DECISIONS

Each category shows the number of complaints where the appeal decision was No (action not overturned) / Yes (action overturned); – indicates none reported.

| Country | Account Suspension Complaints (No / Yes) | Content Action Complaints (No / Yes) | Live Feature Action Complaints (No / Yes) | Restricted Reach Complaints (No / Yes) | Sensitive Media Action Complaints (No / Yes) | Total |
|---|---|---|---|---|---|---|
| Austria | 928 / 74 | 60 / 8 | 1 / – | 24 / 24 | 2 / 3 | 1,124 |
| Belgium | 1,607 / 130 | 122 / 26 | 4 / – | 38 / 48 | 5 / 6 | 1,986 |
| Bulgaria | 699 / 37 | 15 / 1 | 1 / – | 12 / 9 | – / 1 | 775 |
| Croatia | 367 / 35 | 14 / 4 | 2 / – | 23 / 12 | 2 / 2 | 461 |
| Cyprus | 222 / 18 | 11 / 4 | – | 5 / 4 | 3 / – | 267 |
| Czechia | 847 / 92 | 58 / 11 | – | 20 / 37 | 12 / – | 1,077 |
| Denmark | 938 / 62 | 38 / 16 | 1 / – | 29 / 21 | 4 / – | 1,109 |
| Estonia | 306 / 14 | 8 / 2 | – | 5 / 9 | 3 / – | 347 |
| Finland | 1,013 / 74 | 36 / 8 | – | 25 / 40 | 3 / 4 | 1,203 |
| France | 15,047 / 1,114 | 968 / 287 | 44 / 1 | 203 / 168 | 32 / 13 | 17,877 |
| Germany | 23,503 / 948 | 782 / 162 | 30 / 1 | 232 / 235 | 72 / 50 | 26,015 |
| Greece | 969 / 84 | 43 / 6 | 1 / – | 18 / 23 | 3 / 1 | 1,148 |
| Hungary | 817 / 51 | 16 / 10 | 1 / – | 9 / 8 | 3 / – | 915 |
| Ireland | 1,333 / 97 | 133 / 42 | 2 / – | 96 / 99 | 11 / 3 | 1,816 |
| Italy | 5,382 / 368 | 136 / 40 | 10 / – | 82 / 63 | 9 / 13 | 6,103 |
| Latvia | 283 / 31 | 9 / 1 | 1 / – | 7 / 3 | – | 335 |
| Lithuania | 500 / 31 | 12 / 5 | 1 / – | 5 / 3 | – | 557 |
| Luxembourg | 942 / 19 | 10 / 2 | 3 / – | 7 / 5 | 2 / – | 990 |
| Malta | 108 / 13 | 5 / 3 | – | 1 / 3 | – | 133 |
| Netherlands | 13,426 / 443 | 265 / 68 | 20 / – | 176 / 171 | 37 / 16 | 14,622 |
| Poland | 6,289 / 352 | 152 / 28 | 7 / 1 | 92 / 125 | 15 / 6 | 7,067 |
| Portugal | 2,166 / 257 | 92 / 42 | 6 / – | 45 / 20 | 2 / 1 | 2,631 |
| Romania | 1,347 / 138 | 53 / 15 | 4 / – | 18 / 20 | 2 / 4 | 1,601 |
| Slovakia | 311 / 19 | 15 / 7 | 1 / – | 8 / 7 | 2 / 1 | 371 |
| Slovenia | 187 / 16 | 12 / – | – | 15 / 15 | – / 1 | 246 |
| Spain | 10,784 / 1,374 | 776 / 277 | 7 / – | 231 / 221 | 10 / 12 | 13,692 |
| Sweden | 2,055 / 131 | 77 / 29 | 11 / – | 95 / 93 | 9 / 11 | 2,511 |
| Grand Total | 92,376 / 6,022 | 3,918 / 1,104 | 158 / 3 | 1,521 / 1,486 | 243 / 148 | 106,979 |

COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS MEDIAN HANDLE TIME

TIUC Terms of Service and Rules Complaints Median Handle Time (Hours)

| Country | Account Suspension Complaints | Content Action Complaints | Live Feature Action Complaints | Restricted Reach Complaints | Sensitive Media Action Complaints |
|---|---|---|---|---|---|
| Austria | 0.14 | 0.25 | 1.21 | 0.08 | 5.82 |
| Belgium | 0.07 | 0.33 | 9.11 | 0.05 | 0.13 |
| Bulgaria | 0.00 | 0.90 | 4.76 | 0.10 | 11.75 |
| Croatia | 0.17 | 0.16 | 3.71 | 0.17 | 0.27 |
| Cyprus | 0.16 | 0.48 | – | 0.04 | 2.78 |
| Czechia | 0.12 | 0.07 | – | 0.08 | 0.22 |
| Denmark | 0.12 | 0.05 | 12.87 | 0.03 | 4.80 |
| Estonia | 0.47 | 0.28 | – | 0.05 | 2.18 |
| Finland | 0.18 | 0.46 | – | 0.07 | 1.13 |
| France | 0.07 | 0.55 | 5.51 | 0.08 | 0.93 |
| Germany | 0.00 | 1.04 | 6.40 | 0.08 | 1.88 |
| Greece | 0.12 | 0.77 | 0.07 | 0.05 | 1.65 |
| Hungary | 0.08 | 0.48 | 4.51 | 0.05 | 3.70 |
| Ireland | 0.08 | 0.15 | 4.30 | 0.07 | 1.17 |
| Italy | 0.10 | 0.35 | 7.98 | 0.08 | 0.45 |
| Latvia | 0.23 | 0.13 | 8.86 | 0.05 | – |
| Lithuania | 0.32 | 0.30 | 3.58 | 0.17 | – |
| Luxembourg | 0.00 | 1.20 | 1.22 | 0.10 | 0.18 |
| Malta | 0.34 | 0.27 | – | 0.15 | – |
| Netherlands | 0.00 | 0.42 | 4.98 | 0.08 | 0.82 |
| Poland | 0.03 | 0.62 | 3.74 | 0.07 | 0.47 |
| Portugal | 0.08 | 0.30 | 6.31 | 0.07 | 0.68 |
| Romania | 0.14 | 0.57 | 11.72 | 0.05 | 2.68 |
| Slovakia | 0.38 | 0.35 | 1.72 | 0.05 | 0.98 |
| Slovenia | 0.25 | 0.30 | – | 0.08 | 4.08 |
| Spain | 0.10 | 0.43 | 2.35 | 0.08 | 1.30 |
| Sweden | 0.13 | 0.27 | 7.74 | 0.07 | 1.32 |

(– indicates no complaints of that type, so no handle time.)

Important Notes about Complaints:

  1. Information on the basis of complaints is not provided due to the wide variety of underlying reasons contained in the complaint form's open text field.
  2. To improve clarity, we've omitted countries and violation types with zero complaints from the tables above.
  3. The COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS RECEIVED/DECISIONS tables were updated on 1 November 2023 to show additional data regarding complaints of actions taken based on the CSE policy that were not shown in the original version. The table COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS MEDIAN HANDLE TIME has been updated following those updates.

INDICATORS OF ACCURACY FOR CONTENT MODERATION

The possible rate of error of the automated means used in fulfilling those purposes, and any safeguards applied

VISIBILITY FILTERING INDICATORS

TIUC Terms of Service and Rules Visibility Filtering Complaints Received -  Aug 28 to Oct 20

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Automated Means

Hateful Conduct

1

6

21

11

84

1,098

18

240

151

4

2

0

56

1

116

17

3

3

187

62

Manual Closure

Abuse & Harassment

0

0

3

0

1

44

0

13

21

0

1

10

0

10

1

0

0

35

0

Hateful Conduct

0

0

0

1

7

64

1

17

18

1

0

7

0

7

7

0

0

24

0

Violent Speech

0

0

0

0

1

4

0

0

3

1

0

1

0

0

0

0

2

0

TIUC Terms of Service and Rules Visibility Filtering Complaint Overturns -  Aug 28 to Oct 20

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Automated Means

Hateful Conduct

1

4

15

3

45

527

13

147

76

2

1

0

20

0

69

6

1

1

87

34

Manual Closure

Abuse & Harassment

0

0

3

0

1

25

0

11

15

0

1

5

0

6

1

0

0

26

0

Hateful Conduct

0

0

0

0

3

31

1

7

5

1

3

0

5

0

0

0

10

0

Violent Speech

0

0

0

0

0

1

0

0

3

1

0

0

0

0

0

0

0

TIUC Terms of Service and Rules Visibility Filtering Complaint Rate -  Aug 28 to Oct 20

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Automated Means

Hateful Conduct

1%

4%

5%

4%

5%

3%

4%

4%

5%

2%

1%

0%

6%

3%

5%

3%

2%

3%

3%

6%

Manual Closure

Abuse & Harassment

0%

0%

4%

0%

1%

1%

0%

1%

6%

0%

3%

2%

0%

2%

1%

0%

0%

3%

0%

Hateful Conduct

0%

0%

0%

3%

4%

1%

2%

1%

6%

2%

0%

2%

0%

2%

5%

0%

0%

2%

0%

Violent Speech

0%

0%

0%

0%

1%

1%

0%

0%

2%

11%

0%

2%

0%

0%

0%

0%

2%

0%

TIUC Terms of Service and Rules Visibility Filtering Complaint Overturn Rate - Aug 28 to Oct 20

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Automated Means

Hateful Conduct

100%

67%

71%

27%

54%

48%

72%

61%

50%

50%

50%

36%

0%

59%

35%

33%

33%

47%

55%

Manual Closure

Abuse & Harassment

100%

100%

57%

85%

71%

100%

50%

60%

100%

74%

Hateful Conduct

0%

43%

48%

100%

41%

28%

100%

43%

71%

0%

42%

Violent Speech

0%

25%

100%

100%

0%

0%

Important Notes:

  1. The tables in the section VISIBILITY FILTERING INDICATORS were updated on 13 November 2023 to replace an undefined description "reported content" with the relevant enforcement method "manual closure".

INDICATORS OF ACCURACY FOR CONTENT REMOVAL

TIUC Terms of Service and Rules Content Removal Complaints Received

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Automated Means

Abuse & harassment

0

0

0

1

0

14

0

2

0

0

0

0

0

0

0

2

0

Counterfeit

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Hateful Conduct

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Illegal or certain regulated goods and services

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Non-Consensual Nudity

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Other

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Perpetrators of Violent Attacks

0

0

0

0

0

2

0

0

1

0

0

0

0

0

0

0

0

Private information & media

0

0

0

0

0

2

0

0

0

0

0

0

0

0

0

0

0

Sensitive Media

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Suicide & Self Harm

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Violent Speech

1

3

7

4

10

655

3

257

44

0

2

5

1

7

6

268

4

Manual Closure

Abuse & harassment

0

0

1

1

8

236

0

65

31

3

0

17

0

12

9

109

0

Child Sexual Exploitation

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Counterfeit

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Deceased Individuals

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

0

0

Hateful Conduct

0

0

0

0

0

8

0

15

3

0

0

0

0

0

0

0

0

Illegal or certain regulated goods and services

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Misleading & deceptive identities

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Non-Consensual Nudity

0

0

0

0

0

9

0

2

2

0

0

10

0

1

0

17

0

Other

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Perpetrators of Violent Attacks

0

0

0

0

0

2

0

0

1

0

0

0

0

0

0

1

0

Private information & media

0

0

0

0

3

13

0

4

5

0

1

2

0

0

2

7

1

Restricted Reach Labels

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Sensitive Media

0

0

0

3

13

297

0

79

38

0

0

6

0

3

6

76

2

Suicide & Self Harm

0

0

0

0

0

51

2

17

6

0

0

4

0

2

3

17

0

Synthetic & Manipulated Media

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Violent & Hateful Entities

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Violent Speech

2

2

8

5

29

840

4

310

115

1

1

14

0

14

24

268

7

TIUC Terms of Service and Rules Content Removal Complaint Overturns

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Automated Means

Abuse & harassment

Counterfeit

Hateful Conduct

Illegal or certain regulated goods and services

Non-Consensual Nudity

Other

Perpetrators of Violent Attacks

1

Private information & media

1

Sensitive Media

Suicide & Self Harm

Violent Speech

1

64

19

4

18

1

Manual Closure

Abuse & harassment

8

4

3

Child Sexual Exploitation

Counterfeit

Deceased Individuals

Hateful Conduct

1

1

1

Illegal or certain regulated goods and services

Misleading & deceptive identities

Non-Consensual Nudity

1

5

Other

Perpetrators of Violent Attacks

1

Private information & media

3

1

3

Restricted Reach Labels

Sensitive Media

14

4

2

2

1

Suicide & Self Harm

3

1

Synthetic & Manipulated Media

Violent & Hateful Entities

Violent Speech

1

75

25

10

20

1

TIUC Terms of Service and Rules Content Removal Complaint Rate

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Automated Means

Abuse & harassment

0%

100%

0%

24%

13%

0%

0%

0%

0%

0%

40%

Counterfeit

Hateful Conduct

0%

0%

Illegal or certain regulated goods and services

0%

0%

0%

0%

0%

0%

Non-Consensual Nudity

0%

0%

Other

0%

0%

Perpetrators of Violent Attacks

50%

100%

0%

0%

Private information & media

0%

50%

0%

Sensitive Media

0%

0%

Suicide & Self Harm

0%

Violent Speech

7%

8%

9%

5%

3%

5%

4%

6%

6%

0%

4%

2%

17%

2%

4%

0%

0%

7%

2%

Manual Closure

Abuse & harassment

0%

0%

2%

7%

7%

2%

0%

6%

11%

4%

0%

5%

4%

6%

0%

0%

13%

0%

Child Sexual Exploitation

0%

0%

0%

0%

0%

0%

0%

0%

Counterfeit

0%

0%

0%

Deceased Individuals

3%

0%

0%

0%

0%

0%

Hateful Conduct

0%

5%

0%

13%

20%

0%

0%

0%

0%

0%

0%

0%

Illegal or certain regulated goods and services

0%

0%

0%

0%

0%

0%

0%

Misleading & deceptive identities

Non-Consensual Nudity

0%

0%

0%

2%

4%

7%

0%

0%

29%

11%

0%

0%

0%

16%

0%

Other

0%

0%

Perpetrators of Violent Attacks

0%

20%

0%

100%

0%

100%

0%

Private information & media

0%

0%

0%

33%

6%

5%

15%

100%

18%

0%

100%

20%

100%

Restricted Reach Labels

0%

Sensitive Media

0%

0%

0%

6%

3%

5%

0%

6%

5%

0%

0%

2%

2%

7%

0%

0%

8%

1%

Suicide & Self Harm

0%

0%

0%

0%

0%

5%

15%

12%

6%

0%

0%

4%

2%

4%

0%

0%

7%

0%

Synthetic & Manipulated Media

0%

Violent & Hateful Entities

0%

Violent Speech

13%

5%

7%

5%

5%

6%

5%

6%

9%

1%

1%

2%

0%

3%

6%

0%

0%

7%

3%

TIUC Terms of Service and Rules Content Removal Complaint Overturn Rate

Enforcement

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Automated Means

Abuse & harassment

0%

0%

0%

0%

Counterfeit

Hateful Conduct

Illegal or certain regulated goods and services

Non-Consensual Nudity

Other

Perpetrators of Violent Attacks

50%

0%

Private information & media

50%

Sensitive Media

Suicide & Self Harm

Violent Speech

0%

0%

0%

25%

0%

10%

0%

7%

9%

0%

0%

0%

0%

0%

7%

25%

Manual Closure

Abuse & harassment

0%

0%

0%

3%

6%

0%

0%

0%

0%

0%

3%

Child Sexual Exploitation

Counterfeit

Deceased Individuals

0%

Hateful Conduct

13%

7%

33%

Illegal or certain regulated goods and services

Misleading & deceptive identities

Non-Consensual Nudity

0%

50%

0%

0%

0%

29%

Other

Perpetrators of Violent Attacks

50%

0%

0%

Private information & media

0%

23%

0%

0%

0%

0%

50%

43%

0%

Restricted Reach Labels

Sensitive Media

0%

0%

5%

5%

5%

0%

0%

33%

0%

50%

Suicide & Self Harm

6%

0%

6%

0%

0%

0%

0%

0%

Synthetic & Manipulated Media

Violent & Hateful Entities

Violent Speech

0%

0%

0%

20%

0%

9%

0%

8%

9%

0%

0%

0%

0%

0%

7%

14%

Important Notes:

  1. The tables in the section INDICATORS OF ACCURACY FOR CONTENT REMOVAL were updated on 13 November 2023 to replace an undefined description "reported content" with the relevant enforcement method "manual closure".

INDICATORS OF ACCURACY FOR SUSPENSIONS

TIUC Terms of Service and Rules Suspension Complaints Received - Aug 28 to Oct 20

Detection Method - Enforcement Process

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Own Initiative - Automated Means

Child Sexual Exploitation

7

7

31

8

39

1,933

4

316

218

12

36

106

2

59

35

21

4

170

15

Financial Scam

0

0

0

0

Help with My Compromised Account

0

3

0

10

82

1

32

12

2

1

14

1

15

1

2

1

57

1

Illegal or Certain Regulated Goods and Services

0

0

Other

0

0

2

1

1

Perpetrators of Violent Attacks

8

1

1

Platform Manipulation & Spam

9

10

45

21

73

2,755

12

878

367

32

54

253

4

0

272

159

40

14

0

1,333

25

Violent & Hateful Entities

0

8

8

0

0

2

0

User Report - Manual Review

Abuse & Harassment

1

3

3

9

149

0

41

21

3

2

23

15

7

3

47

2

Ban Evasion

0

0

0

0

0

0

Child Sexual Exploitation

0

0

4

4

14

531

2

89

69

5

5

30

26

5

6

1

70

9

Copyright

0

2

58

1

29

6

2

13

4

4

0

34

Counterfeit

3

4

0

0

Deceased Individuals

0

0

Distribution of Hacked Materials

0

Financial Scam

0

0

1

1

0

0

0

0

0

Hateful Conduct

3

7

1

1

1

0

2

Illegal or Certain Regulated Goods and Services

0

0

1

0

1

0

0

1

0

0

0

0

0

Misleading & Deceptive Identities

1

1

0

1

51

0

7

3

0

0

2

4

1

0

0

9

1

Non-Consensual Nudity

2

0

2

4

45

0

13

8

6

1

6

1

1

3

15

1

Other

0

2

0

0

10

0

3

2

0

0

0

0

0

0

2

0

Perpetrators of Violent Attacks

0

0

0

0

0

0

0

0

0

Platform Manipulation & Spam

1

0

0

0

2

143

0

5

8

0

1

3

0

0

0

2

0

0

0

5

1

Private Information & Media

1

7

0

2

Sensitive Media

1

0

0

0

2

0

1

2

0

1

1

1

0

2

0

Suicide & Self Harm

0

0

2

0

0

0

1

0

0

0

Trademark

2

1

0

0

0

0

Username Squatting

0

0

0

0

0

Violent & Hateful Entities

0

0

0

43

1

6

4

0

0

0

0

1

0

Violent Speech

6

12

53

71

431

4,582

31

1,653

802

35

31

373

3

637

236

14

3

732

102

Own Initiative - Manual Review

Abuse & Harassment

0

TIUC Terms of Service and Rules Suspension Complaint Overturns - Aug 28 to Oct 20

Detection Method - Enforcement Process

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Own Initiative - Automated Means

Child Sexual Exploitation

0

0

0

0

1

75

0

13

7

0

0

6

0

4

0

0

0

7

3

Financial Scam

0

0

0

0

Help with My Compromised Account

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Illegal or Certain Regulated Goods and Services

0

0

Other

0

0

1

0

0

Perpetrators of Violent Attacks

0

0

0

Platform Manipulation & Spam

2

3

16

9

9

630

2

221

89

8

11

65

2

0

54

48

2

3

0

464

3

Violent & Hateful Entities

0

0

0

0

0

1

0

User Report - Manual Review

Abuse & Harassment

0

1

3

5

64

0

12

11

2

1

12

0

8

3

1

22

1

Ban Evasion

0

0

0

0

0

0

Child Sexual Exploitation

0

0

0

1

0

20

0

5

2

0

0

1

0

1

1

0

0

5

0

Copyright

0

0

1

0

0

0

0

0

0

0

0

0

2

Counterfeit

0

0

0

0

Deceased Individuals

0

0

Distribution of Hacked Materials

0

Financial Scam

0

0

1

0

0

0

0

0

0

Hateful Conduct

0

1

0

1

1

0

1

Illegal or Certain Regulated Goods and Services

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Misleading & Deceptive Identities

1

1

0

0

8

0

0

1

0

0

0

0

0

0

0

3

0

Non-Consensual Nudity

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

Other

0

0

0

0

0

0

1

0

0

0

0

0

0

0

0

0

Perpetrators of Violent Attacks

0

0

0

0

0

0

0

0

0

Platform Manipulation & Spam

0

0

0

0

2

61

0

2

4

0

0

2

0

0

0

1

0

0

0

1

1

Private Information & Media

0

2

0

0

Sensitive Media

0

0

0

0

1

0

1

0

0

0

0

0

0

1

0

Suicide & Self Harm

0

0

0

0

0

0

0

0

0

0

Trademark

0

0

0

0

0

0

Username Squatting

0

0

0

0

0

Violent & Hateful Entities

0

0

0

0

0

1

0

0

0

0

0

0

0

Violent Speech

0

0

0

0

4

69

1

16

7

1

0

4

0

0

5

0

0

0

5

4

Own Initiative - Manual Review

Abuse & Harassment

0

TIUC Terms of Service and Rules Suspension Complaint Rate - Aug 28 to Oct 20

Detection Method - Enforcement Process

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Own Initiative - Automated Means

Child Sexual Exploitation

23%

25%

48%

36%

25%

9%

11%

25%

28%

26%

32%

32%

40%

30%

23%

24%

16%

25%

20%

Financial Scam

0%

0%

0%

0%

Help with My Compromised Account

0%

38%

0%

29%

16%

50%

27%

34%

25%

33%

20%

33%

45%

17%

67%

33%

27%

17%

Illegal or Certain Regulated Goods and Services

0%

0%

Other

0%

0%

100%

17%

100%

Perpetrators of Violent Attacks

12%

100%

50%

Platform Manipulation & Spam

5%

6%

3%

2%

2%

0%

3%

3%

2%

4%

4%

1%

14%

0%

5%

9%

6%

8%

0%

8%

2%

Violent & Hateful Entities

0%

3%

33%

0%

0%

67%

0%

User Report - Manual Review

Abuse & Harassment

50%

60%

43%

60%

3%

0%

49%

40%

25%

33%

52%

45%

33%

75%

43%

40%

Ban Evasion

0%

0%

0%

0%

0%

0%

Child Sexual Exploitation

0%

0%

12%

31%

19%

3%

8%

20%

22%

28%

12%

22%

21%

7%

15%

14%

26%

26%

Copyright

0%

50%

24%

100%

60%

60%

100%

59%

57%

40%

0%

59%

Counterfeit

2%

20%

0%

0%

Deceased Individuals

0%

0%

Distribution of Hacked Materials

0%

Financial Scam

0%

0%

1%

14%

0%

0%

0%

0%

0%

Hateful Conduct

25%

54%

50%

100%

100%

0%

67%

Illegal or Certain Regulated Goods and Services

0%

0%

0%

0%

4%

0%

0%

5%

0%

0%

0%

0%

0%

Misleading & Deceptive Identities

50%

50%

0%

4%

5%

0%

7%

7%

0%

0%

10%

13%

11%

0%

0%

10%

25%

Non-Consensual Nudity

67%

0%

100%

80%

14%

0%

42%

40%

86%

50%

55%

33%

33%

60%

63%

100%

Other

0%

50%

0%

0%

0%

0%

10%

1%

0%

0%

0%

0%

0%

0%

8%

0%

Perpetrators of Violent Attacks

0%

0%

0%

0%

0%

0%

0%

0%

0%

Platform Manipulation & Spam

7%

0%

0%

0%

0%

0%

0%

0%

0%

0%

1%

0%

0%

0%

0%

1%

0%

0%

0%

0%

0%

Private Information & Media

100%

32%

0%

100%

Sensitive Media

100%

0%

0%

0%

1%

0%

4%

18%

0%

33%

20%

9%

0%

20%

0%

Suicide & Self Harm

0%

0%

4%

0%

0%

0%

17%

0%

0%

0%

Trademark

14%

100%

0%

0%

0%

0%

Username Squatting

0%

0%

0%

0%

0%

Violent & Hateful Entities

0%

0%

0%

3%

17%

8%

9%

0%

0%

0%

0%

4%

0%

Violent Speech

60%

40%

67%

67%

73%

52%

60%

66%

65%

65%

49%

67%

60%

66%

71%

50%

43%

72%

68%

Own Initiative - Manual Review

Abuse & Harassment

0%

TIUC Terms of Service and Rules Suspension Complaint Overturn Rate - Aug 28 to Oct 20

Detection Method - Enforcement Process

Policy

Bulgarian

Croatian

Czech

Danish

Dutch

English

Finnish

French

German

Greek

Hungarian

Irish

Italian

Latvian

Lithuanian

Maltese

Polish

Portuguese

Romanian

Slovak

Slovenian

Spanish

Swedish

Own Initiative - Automated Means

Child Sexual Exploitation

0%

0%

0%

0%

3%

4%

0%

4%

3%

0%

0%

6%

0%

7%

0%

0%

0%

4%

20%

Financial Scam

Help with My Compromised Account

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

Illegal or Certain Regulated Goods and Services

Other

50%

0%

0%

Perpetrators of Violent Attacks

0%

0%

0%

Platform Manipulation & Spam

22%

30%

36%

43%

12%

23%

17%

25%

24%

25%

20%

26%

50%

20%

30%

5%

21%

35%

12%

Violent & Hateful Entities

0%

0%

50%

User Report - Manual Review

Abuse & Harassment

0%

33%

100%

56%

43%

29%

52%

67%

50%

52%

53%

43%

33%

47%

50%

Ban Evasion

Child Sexual Exploitation

0%

25%

0%

4%

0%

6%

3%

0%

0%

3%

4%

20%

0%

0%

7%

0%

Copyright

0%

2%

0%

0%

0%

0%

0%

0%

0%

6%

Counterfeit

0%

0%

Deceased Individuals

Distribution of Hacked Materials

Financial Scam

100%

0%

Hateful Conduct

0%

14%

0%

100%

100%

50%

Illegal or Certain Regulated Goods and Services

0%

0%

0%

Misleading & Deceptive Identities

100%

100%

0%

16%

0%

33%

0%

0%

0%

33%

0%

Non-Consensual Nudity

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

0%

Other

0%

0%

33%

0%

0%

Perpetrators of Violent Attacks

Platform Manipulation & Spam

0%

100%

43%

40%

50%

0%

67%

50%

20%

100%

Private Information & Media

0%

29%

0%

Sensitive Media

0%

50%

100%

0%

0%

0%

0%

50%

Suicide & Self Harm

0%

0%

Trademark

0%

0%

Username Squatting

Violent & Hateful Entities

0%

0%

17%

0%

0%

Violent Speech

0%

0%

0%

0%

1%

2%

3%

1%

1%

3%

0%

1%

0%

1%

0%

0%

0%

1%

4%

Own Initiative - Manual Review

Abuse & Harassment

0%

Important Notes about indicators of accuracy:

  1. For some official languages, such as Maltese, we did not collect data; for others, including Irish, Lithuanian, and Slovenian, we collected very little.
  2. The underlying volume of appeals and overturns for the enforcements shown may be low, which can result in relatively high overturn rates.
  3. Overturn rates are calculated by dividing the number of overturned enforcements by the number of enforcement appeals, as illustrated in the sketch below.
  4. For suspensions, appeals, and overturns, we used the following measurement approach:
  5. The tables in the section INDICATORS OF ACCURACY FOR SUSPENSIONS have been updated on 1 November 2023 following updates to the tables COMPLAINTS OF ACTIONS TAKEN FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS RECEIVED/DECISIONS.
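Note 3 gives the overturn-rate formula; a minimal sketch follows, using the English-language Child Sexual Exploitation automated-suspension figures from the tables above (1,933 complaints, 75 overturns) as the worked example:

```python
def overturn_rate(overturned: int, appeals: int) -> float:
    """Note 3: overturned enforcements divided by enforcement appeals."""
    return overturned / appeals if appeals else 0.0

# Small denominators (note 2) make rates coarse: one overturn in two appeals
print(f"{overturn_rate(1, 2):.0%}")      # 50%
# English CSE automated suspensions from the tables above: 75 / 1,933
print(f"{overturn_rate(75, 1933):.0%}")  # 4%
```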

Further Information on Suspensions

During the applicable reporting period (Aug 28 to Oct 20), there were zero actions taken for the provision of manifestly unfounded reports or complaints, or for manifestly illegal content. While manifestly illegal content is not a category that we took action on during the reporting period, we suspended 60,377 accounts for violating our Child Sexual Exploitation policy and 2,878 for violating our Violent & Hateful Entities policy.

Disputes submitted to out-of-court dispute settlement bodies.

To date, zero disputes have been submitted to the out-of-court settlement bodies.

 

Reports received by trusted flaggers.

To date, we have received zero reports from trusted flaggers approved under Article 22 DSA. Once information about trusted flaggers awarded status under Article 22 DSA is published, we are prepared to enrol them in our trusted flagger program, which ensures prioritisation of human review.

Human resources dedicated to Content Moderation

Today, we have 2,294 people working in content moderation. Our teams work on both initial reports and complaints against initial decisions across the world (and are not specifically designated to work only on EU matters).

Linguistics Expertise of our Content Moderation Team

X’s scaled operations team possesses a variety of skills, experiences, and tools that allow them to effectively review and take action on reports across all of our rules and policies. X has analysed which languages are most common in reports reviewed by our content moderators and has hired content moderation specialists who have professional proficiency in those commonly spoken languages. The following table summarises the number of people on our content moderation team who possess professional proficiency in the most commonly spoken languages in the EU on our platform:

| Primary Language | People |
|---|---|
| Arabic | 12 |
| Bulgarian | 2 |
| Croatian | 1 |
| Dutch | 1 |
| English | 2,294 |
| French | 52 |
| German | 81 |
| Hebrew | 2 |
| Italian | 2 |
| Latvian | 1 |
| Polish | 1 |
| Portuguese | 41 |
| Spanish | 20 |

Qualifications of our Content Moderation Team

Content Moderation Team Qualifications

| Years in Current Role | Headcount |
|---|---|
| 7 or more | 48 |
| 6 to 7 | 51 |
| 5 to 6 | 131 |
| 4 to 5 | 264 |
| 3 to 4 | 326 |
| 2 to 3 | 443 |
| 1 to 2 | 638 |
| 0 to 1 | 393 |
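As a quick cross-check, the experience buckets above sum to the 2,294-person headcount stated earlier:

```python
# The experience buckets above account for the full content moderation team.
buckets = {"7 or more": 48, "6 to 7": 51, "5 to 6": 131, "4 to 5": 264,
           "3 to 4": 326, "2 to 3": 443, "1 to 2": 638, "0 to 1": 393}
assert sum(buckets.values()) == 2294  # matches the headcount stated above
```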

Organisation, Team Resources, Expertise, Training and Support of our Team that Reviews and Responds to Reports of Illegal Content

Description of the team

X has built a specialised team made up of individuals who have received specific training in order to assess and take action on illegal content that we become aware of via reports or other processes such as on our own initiative. This team consists of different tier groups, with higher tiers consisting of more senior, or more specialised, individuals.

When handling a report of illegal content or a complaint against a previous decision, content and senior content reviewers first assess the content under X’s Rules and policies. If no violation of X’s Rules and policies warranting a global removal of the content is found, the content reviewers assess the content for potential illegality. If the content is not manifestly illegal, it can be escalated for second or third opinions. If more detailed investigation is required, content reviewers can escalate reports to experienced policy and/or legal request specialists who have also undergone in-depth training. These individuals take appropriate action after carefully reviewing the report or complaint and the available context in close detail. In cases where this specialist team still cannot reach a decision regarding the potential illegality of the reported content, the report can be discussed with in-house legal counsel. Everyone involved in this process works closely together, with daily exchanges through meetings and other channels, to ensure the timely and accurate handling of reports.

Furthermore, all teams involved in resolving these reports collaborate closely with a variety of other policy teams at X who focus on safety, privacy, and authenticity rules and policies. This cross-team effort is particularly important in the aftermath of tragic events, such as violent attacks, to ensure alignment and swift action on violative content.

Content reviewers are supported by team leads, subject matter experts, quality auditors and trainers. We hire people with diverse backgrounds in fields such as law, political science, psychology, communications, sociology and cultural studies, and languages.

Training and support of persons processing legal requests

All team members, i.e. all employees hired by X as well as vendor partners working on these reports, are trained and retrained regularly on our tools, processes, rules, and policies, including special sessions on cultural and historical context. When first joining the team at X, each individual follows an onboarding program and receives individual mentoring during this period, as well as thereafter through our Quality Assurance program (for external employees) and through in-house and external counsel (for internal employees).

All team members have direct access to robust training and workflow documentation for the entirety of their employment, and are able to seek guidance at any time from trainers, leads, and internal specialist legal and policy teams as outlined above as well as managerial support.

Updates about significant current events or rules and policy changes are shared with all content reviewers in real time, to give guidance and facilitate balanced and informed decision making. In the case of rules and policy changes, all training materials and related documentation is updated. Calibration sessions are carried out frequently during the reporting period. These sessions aim to increase collective understanding and focus on the needs of the content reviewers in their day-to-day work.

The entire team also participates in obligatory X rules and policies refresher training as the need arises or whenever rules and policies are updated. These trainings are delivered by the relevant policy specialists who were directly involved in the development of the rules and policy change. For these sessions we also employ the “train the trainer” method to ensure timely training delivery to the whole team across all of the shifts. All team members use the same training materials to ensure consistency.

Quality Assurance (QA) is a critical business measure that helps ensure we deliver a consistent service at the desired level of quality to our key stakeholders, both external and internal, as it pertains to our case work. We have a dedicated QA team within our vendor team to help us identify areas of opportunity for training and to detect potential defects in our workflows or rules and policies. The QA specialists perform quality assurance checks of reports to ensure that content is actioned appropriately.

The standards and procedures within the QA team ensure that the team's QA is assessed equally, objectively, efficiently, and transparently. In case of any misalignments, additional training is scheduled to ensure the team understands the issues and can handle reports accurately.

In addition, given the nature and sensitivity of their work, the entire team has access to online resources and regular onsite group and individual sessions related to resilience and well-being. These are provided by mental health professionals. Content reviewers also participate in resilience, self-care, and vicarious trauma sessions as part of our mandatory wellness plan during the reporting period.

Training and Support provided to those Persons performing Content Moderation Activities for our TIUC Terms of Service and Rules

Training is a critical component of how X maintains the health and safety of the public conversation, enabling Trust and Safety agents to accurately and efficiently moderate content posted on our platform. Training at X aims to improve agents' and X's policy enforcement performance and quality scores by enhancing agents' understanding and application of X rules through robust training and quality programs and continuous monitoring of quality scores.

TRAINING PROCESS

There is a robust training program and system in place for every workflow to provide content moderators with the work skills and job knowledge required for processing user cases. All agents must be trained in their assigned workflows. These focus areas ensure that X agents are set up for success before and during the content moderation lifecycle, which includes:

TRAINING ANALYSIS & DESIGN

Before commencing design work on any agent program or resource, a rigorous learner analysis is conducted in close collaboration with training specialists and quality analysts to identify performance gaps and learning needs. Each program is designed with key stakeholder engagement and alignment. The design objective is to adhere to visual and learning design principles to maximise learning outcomes and ensure that agents can perform their tasks with accuracy and efficiency. This is achieved by making sure that the content is: 

  1. Easy to experience
  2. Easy to understand
  3. Easy to apply

X’s training programs and resources are designed based on needs, and a variety of modalities are employed to diversify the agent learning experience, including:

CLASSROOM TRAINING

Classroom training is delivered either virtually or face-to-face by expert trainers. Classroom training activities can include:

NESTING (ON-THE-JOB TRAINING)

When agents successfully complete their classroom training program, they undergo a nesting period. The nesting phase includes case study by observation, demonstration, and hands-on training on live cases. Nesting activities include agent shadowing, guided case work, question-and-answer sessions with their trainer, coaching, feedback sessions, etc. Quality audits are conducted for each nesting agent, and agents must be coached on any mis-actioned case spotted in their quality scores the same day that the case was reviewed. Trainers conduct a needs assessment for each nesting agent and prepare refresher training accordingly. After the nesting period, content is evaluated on an ongoing basis with a team of Quality Analysts to identify gaps and address potential problem areas. There is a continuous feedback loop with quality analysts across the different workflows to identify challenges and opportunities to improve materials and address performance gaps.

UP-SKILLING

When an agent needs to be upskilled, they receive training on a specific workflow within the same pillar in which the agent is currently working. The training includes a classroom training phase and a nesting phase, as specified above.

REFRESHER SESSIONS

Refresher sessions take place when an agent has previously been trained and has access to all the necessary tools, but needs a review of some or all topics. This may happen for content moderators who have been on prolonged leave, have transferred temporarily to another content moderation policy workflow, or have recurring errors in their quality scores. After a needs assessment, trainers are able to pinpoint what the agent needs and prepare a session targeting those needs and gaps.

NEW LAUNCH / UPDATE ROLL-OUTS

There are also processes that require new and/or specific product training and certification. These new launches and updates are identified by X and the knowledge is transferred to the agents.

REMEDIATION PLANS

There are remediation plans in place to support agents who do not pass the training or nesting phase, or are not meeting quality requirements.

Monthly Active Recipients

During the period from April 20th, 2023 through October 20th, 2023, there was an average of 115.2M monthly active recipients of the service (AMARs) in the EU.

| Country | Logged In Users | Logged Out Users | Total AMARs |
|---|---|---|---|
| Austria | 753,735 | 999,100 | 1,752,835 |
| Belgium | 1,597,896 | 1,799,541 | 3,397,437 |
| Bulgaria | 450,528 | 321,878 | 772,406 |
| Cyprus | 180,205 | 210,831 | 391,036 |
| Czechia | 1,040,762 | 1,444,542 | 2,485,304 |
| Germany | 8,940,624 | 7,408,877 | 16,349,501 |
| Denmark | 769,813 | 613,974 | 1,383,787 |
| Estonia | 161,490 | 184,943 | 346,433 |
| Spain | 9,783,481 | 13,197,990 | 22,981,471 |
| Finland | 896,337 | 1,250,770 | 2,147,107 |
| France | 11,473,346 | 10,459,939 | 21,933,285 |
| Greece | 986,351 | 1,689,822 | 2,676,172 |
| Croatia | 291,167 | 725,785 | 1,016,951 |
| Hungary | 690,582 | 928,674 | 1,619,256 |
| Ireland | 1,451,149 | 1,868,565 | 3,319,714 |
| Italy | 5,128,290 | 4,017,433 | 9,145,723 |
| Lithuania | 385,819 | 227,342 | 613,161 |
| Luxembourg | 195,112 | 120,554 | 315,666 |
| Latvia | 228,835 | 236,542 | 465,376 |
| Malta | 83,311 | 67,049 | 150,360 |
| Netherlands | 4,011,930 | 4,917,265 | 8,929,195 |
| Poland | 6,447,687 | 7,489,627 | 13,937,314 |
| Portugal | 1,634,243 | 1,401,741 | 3,035,984 |
| Romania | 1,555,457 | 822,888 | 2,378,344 |
| Sweden | 1,648,209 | 1,553,716 | 3,201,925 |
| Slovenia | 198,566 | 499,247 | 697,813 |
| Slovakia | 272,136 | 405,259 | 677,395 |
| Total | 61,257,062 | 64,863,889 | 126,120,951 |

Important Note: Due to technical issues, for this report we were unable to provide the AMARs for each EU member state over the past six months. Instead, we provided AMARs for each EU member state from 19 September 2023 until 27 October 2023. We have resolved the technical issues for future transparency reports.

The AMARs for the entire EU over the past six months is 115.2M. The difference between the total AMARs for the EU and the cumulative total AMARs for all EU member states is due to double counting of logged-out users accessing X from various EU countries within the relevant time period.
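The double counting can be made concrete with a small sketch: a logged-out user observed from two member states contributes to both countries' counts but only once to the EU-wide unique count (the IDs and sets are illustrative):

```python
# Illustrative sets of pseudonymous logged-out browser IDs seen per country.
seen = {
    "France":  {"u1", "u2", "u3"},
    "Germany": {"u3", "u4"},  # "u3" also browsed from France
    "Spain":   {"u4", "u5"},  # "u4" also browsed from Germany
}

per_country_sum = sum(len(users) for users in seen.values())  # 7, double counts
eu_wide_unique = len(set.union(*seen.values()))               # 5, each user once

print(per_country_sum, eu_wide_unique)
```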

- - - - - - - - - - - - - - - - - - Appendix - - - - - - - - - - - - - - - - -

TIUC Terms of Service and Rules Content Removal Actions - Sep 5 to Sep 23

Enforcement Process

Policy

Austria

Belgium

Bulgaria

Croatia

Cyprus

Czechia

Denmark

Estonia

Finland

France

Germany

Greece

Hungary

Ireland

Italy

Latvia

Lithuania

Luxembourg

Malta

Netherlands

Poland

Portugal

Romania

Slovakia

Slovenia

Spain

Sweden

Grand Total

Automated Means

Abuse & Harassment

2

3

2

1

2

1

11

44

1

1

1

17

1

13

3

2

2

1

4

67

3

182

Hateful Conduct

3

3

2

10

3

2

3

1

7

5

10

18

67

Non-Consensual Nudity

1

1

5

2

1

1

2

1

3

1

18

Other

1

1

2

2

1

2

1

2

12

Private Information & media

1

6

1

2

10

Sensitive Media

41

103

54

25

11

68

68

6

46

1,002

723

82

78

61

420

11

25

11

7

268

429

113

134

25

5

632

94

4,542

Violent Speech

114

286

71

64

20

116

151

32

109

3,326

1,054

114

87

311

347

20

62

21

20

632

500

261

229

43

36

2,351

285

10,662

Manual Review

Abuse & harassment

195

124

42

26

41

169

269

4

119

2,169

1,311

65

53

115

366

365

230

14

1

1,621

686

165

229

7

2

1,379

93

9,860

Child Sexual Exploitation

1

1

1

2

15

6

4

16

1

2

8

3

7

1

68

Counterfeit

1

2

8

1

1

2

11

1

5

1

33

Deceased Individuals

2

3

2

1

1

1

10

Hateful Conduct

17

42

4

6

14

16

3

22

678

337

16

8

34

61

6

1

1

3

126

114

23

14

3

131

44

1,724

Illegal or certain regulated goods and services

24

32

10

1

5

55

33

3

414

336

4

71

29

31

282

240

2

2

344

95

29

20

1

130

17

2,210

Misleading & Deceptive Identities

1

1

1

7

8

2

1

2

3

3

3

1

3

1

37

Non-Consensual Nudity

11

11

9

9

16

8

12

112

192

36

23

9

45

2

16

2

108

90

6

63

8

1

105

6

900

Perpetrators of Violent Attacks

2

1

1

4

Private information & media

12

15

1

1

1

4

2

118

66

10

8

12

1

1

26

17

6

12

44

5

362

Sensitive Media

53

180

44

33

21

145

83

20

77

1,226

932

108

110

87

461

21

26

18

7

472

485

176

205

30

13

795

146

5,974

Suicide & Self Harm

7

13

7

5

1

10

11

1

10

107

96

4

25

41

4

2

1

2

42

294

14

17

3

1

71

20

809

Violent & Hateful Entities

1

1

2

Violent Speech

53

47

7

9

6

49

36

7

26

774

1,062

30

12

68

167

4

8

5

176

703

47

33

3

6

245

67

3,650

Grand Total

529

863

254

175

116

650

688

73

436

9,974

6,189

472

445

756

1,995

719

611

77

45

3,861

3,427

848

967

124

78

5,964

801

41,136

Important Note: Due to a data extraction limitation that is currently under review, data from Aug 28 to Sept 5 is not included above.