DSA Transparency Report - April 2024

Introduction

This report covers the content moderation activities of X's international entity, Twitter International Unlimited Company (TIUC), under the Digital Services Act (DSA) during the period from 21 October 2023 to 31 March 2024.

Throughout this report, we refer to "notices" as defined in the DSA as "user reports" or "reports".

Description of our Content Moderation Practices

Our content moderation systems are designed and tailored to mitigate systemic risks without unnecessarily restricting the use of our service and fundamental rights, especially freedom of expression. Content moderation activities are anchored in principled policies and leverage a diverse set of interventions to ensure that our actions are reasonable, proportionate and effective. Our content moderation systems blend automated and human review, paired with a robust appeals system that enables our users to quickly raise potential moderation anomalies or mistakes.

Policies 

X's purpose is to serve the public conversation. Violence, harassment, and other similar types of behaviour discourage people from expressing themselves, and ultimately diminish the value of the global public conversation. Our Rules are designed to ensure all people can participate in the public conversation freely and safely.

X has policies protecting user safety as well as platform and account integrity. The X Rules and policies are publicly accessible on our Help Center, and we strive to write them in an easily understandable way. We also keep our Help Center updated whenever we modify our Rules.

Enforcement 

When determining whether to take enforcement action, we may consider a number of factors.

When we take enforcement action, we may do so on a specific piece of content (e.g., an individual post or Direct Message), on an account, or on a combination of the two. In most cases, we take enforcement action because the behaviour violates the X Rules.

To enforce our Rules, we use a combination of machine learning and human review. Our systems are able to surface content to human moderators who use important context to make decisions about potential violations. This work is led by an international, cross-functional team with 24-hour coverage and the ability to cover multiple languages. We also have a complaints process for any potential errors that may occur.

To ensure that our human reviewers are prepared to perform their duties, we provide them with a robust support system. Each human reviewer goes through extensive training and refreshers, is provided with a suite of tools that enables them to do their job effectively, and has a range of wellness initiatives available to them. For further information on our human review resources, see the section titled “Human Resources dedicated to Content Moderation”.

Reporting violations

X strives to provide an environment where people can feel free to express themselves. If abusive behaviour happens, we want to make it easy for people to report it to us. EU users can also report any violation of our Rules or their local laws, no matter where such violations appear.

Transparency

We always aim to exercise moderation with transparency. Where our systems or teams take action against content or an account as a result of violating our Rules or in response to a valid and properly scoped request from an authorised entity in a given country, we strive to provide context to users. Our Help Center article explains notices that users may encounter following actions taken. We will also promptly notify affected users about legal requests to withhold content, including a copy of the original request, unless we are legally prohibited from doing so.

Our Own Initiative Content Moderation Activities

X employs a combination of heuristics and machine learning algorithms to automatically detect content that violates the X Rules and policies enforced on our platform. We use combinations of natural language processing models, image processing models and other sophisticated machine learning methods to detect potentially violative content. These models vary in complexity and in the outputs they produce. For example, the model used to detect abuse on the platform is trained on abuse violations detected in the past. Content flagged by these machine learning models is either reviewed by human content reviewers before an action is taken or, in some cases, automatically actioned based on model output.

Heuristics are common patterns of text or keywords that may be typical of a certain category of violations. They are typically used to enable X to react quickly to new forms of violations that emerge on the platform. Content detected by heuristics may also be reviewed by human content reviewers before an action is taken. These heuristics are used to flag content for review by human agents and to prioritise the order in which such content is reviewed.
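
For illustration only, the sketch below shows how a heuristic flagger of the kind described above might route content to a human review queue. The patterns, queue names and routing logic are hypothetical and are not X's actual rules or systems.

```python
import re

# Hypothetical keyword/pattern heuristics, one per violation category.
HEURISTICS = {
    "scam_pattern": re.compile(r"(?i)\b(free crypto|guaranteed returns)\b"),
    "spam_pattern": re.compile(r"(?i)\b(follow back|dm for promo)\b"),
}

def flag_for_review(post_text: str) -> list[str]:
    """Return the names of the heuristics a post matches."""
    return [name for name, pattern in HEURISTICS.items() if pattern.search(post_text)]

def route(post_text: str) -> str:
    """Send matching posts to a human review queue; pass the rest through."""
    return "human_review_queue" if flag_for_review(post_text) else "no_action"

print(route("Get free crypto now!"))  # -> human_review_queue
```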

Automated enforcements under the X Rules and policies undergo rigorous testing before being applied to the live product. Both machine learning and heuristic models are trained and/or validated on thousands of data points and labels (e.g., violative or non-violative) generated by trained human content reviewers. For example, inputs to content-related models can include the text of the post itself, the images attached to the post, and other characteristics. Training data for the models comes from cases reviewed by our content moderators, random samples, and various other samples of content from the platform.
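
As a minimal sketch of the approach described above (training a text classifier on reviewer-applied labels), the example below uses an off-the-shelf TF-IDF and logistic regression pipeline. The data and model choice are illustrative, not X's production pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical reviewer-labelled examples (1 = violative, 0 = non-violative).
texts = ["buy followers cheap", "lovely sunset today",
         "send me your password", "great match last night"]
labels = [1, 0, 1, 0]

# Fit a simple text classifier on the human-generated labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Scores from a model like this can gate automatic action vs. human review.
print(model.predict_proba(["cheap followers here"])[0][1])
```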

Before any given algorithm is launched to the platform, we verify its detection of policy-violating content or behaviour by drawing a statistically significant test sample and performing item-by-item human review. Reviewers have expertise in the applicable policies and are trained by our Policy teams to ensure the reliability of their decisions. Human review helps us confirm that these automations achieve an acceptable level of precision, and sizing helps us estimate the enforcement volumes to expect once the automations are launched.
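
The sketch below illustrates the kind of pre-launch check described above: reviewers label each item in a test sample, and precision is then estimated with a confidence interval. The sample counts are invented for illustration.

```python
import math

def precision_estimate(true_positives: int, sample_size: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% CI for precision."""
    p = true_positives / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical review outcome: 372 of 400 sampled flags confirmed violative.
p, lo, hi = precision_estimate(true_positives=372, sample_size=400)
print(f"precision ~ {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```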

In addition, humans proactively conduct manual content reviews for potential policy violations. We conduct proactive sweeps for certain high-priority categories of potentially violative content both periodically and during major events, such as elections. Agents also proactively review content flagged by heuristic and machine learning models for potential violations of other policies, including our sensitive media, child sexual exploitation (CSE) and violent and hateful entities policies.

Once reviewers have confirmed that the detection meets an acceptable standard of accuracy, we consider the automation ready for launch. Once launched, automations are monitored dynamically for ongoing performance and health. If we detect anomalies in performance (for instance, significant spikes or dips against the volume we established during sizing, or significant changes in user complaint or overturn rates), our Engineering and Data Science teams, with support from other functions, revisit the automation to diagnose any potential problems and adjust it as appropriate.
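
A loose sketch of this post-launch health monitoring follows: each day's enforcement volume and overturn rate are compared against the baseline established during sizing, and significant deviations raise alerts. The baselines and thresholds are invented, not X's actual values.

```python
BASELINE_DAILY_VOLUME = 1000    # hypothetical, from pre-launch sizing
BASELINE_OVERTURN_RATE = 0.05   # hypothetical

def check_automation_health(daily_volume: int, overturns: int,
                            tolerance: float = 0.5) -> list[str]:
    """Flag spikes/dips vs. sizing and elevated overturn rates."""
    alerts = []
    deviation = abs(daily_volume - BASELINE_DAILY_VOLUME) / BASELINE_DAILY_VOLUME
    if deviation > tolerance:
        alerts.append(f"volume deviates {deviation:.0%} from sizing baseline")
    overturn_rate = overturns / daily_volume if daily_volume else 0.0
    if overturn_rate > 2 * BASELINE_OVERTURN_RATE:
        alerts.append(f"overturn rate {overturn_rate:.1%} exceeds twice the baseline")
    return alerts

print(check_automation_health(daily_volume=1700, overturns=60))
```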

Enforcement Activity Summary Data

ACTIONS TAKEN ON CONTENT FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS

Art. 15.1.c: TIUC Terms of Service and Rules Restricted Reach Labels - 21/10/23 to 31/3/24

| Policy | Auto-Enforced | Manually Enforced | Total |
|---|---|---|---|
| Abuse & Harassment | | 21,853 | 21,853 |
| Hateful Conduct | 437,410 | 38,298 | 475,708 |
| Violent Speech | | 5,359 | 5,359 |
| Total | 437,410 | 65,510 | 502,920 |

Important Note: The table lists visibility filtering actions applied to content potentially violative of our Rules in accordance with our Freedom of Speech, Not Reach (FOSNR) enforcement philosophy. We did not apply any visibility filtering based on illegal content. In cases where we receive an illegal content report and the post is also found to violate a policy enforced under FOSNR, our enforcement for illegal content is always applied.
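
A hypothetical sketch of the precedence rule stated in the note above: when a post is both reported as illegal and violates a FOSNR-enforced policy, the illegal content enforcement applies. The function and action names are invented for illustration.

```python
def choose_enforcement(illegal_content_found: bool, fosnr_policy_violation: bool) -> str:
    if illegal_content_found:
        return "illegal_content_enforcement"  # always takes precedence
    if fosnr_policy_violation:
        return "visibility_filtering"         # Freedom of Speech, Not Reach
    return "no_action"

print(choose_enforcement(True, True))  # -> illegal_content_enforcement
```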

Art. 15.1.c: TIUC Terms of Service and Rules Content & Profile Removal Actions - 21/10/23 to 31/3/24

| Policy | Auto-Enforced | Manually Enforced | Proactively Detected, Manually Enforced | Total |
|---|---|---|---|---|
| Abuse & Harassment | 690 | 91,573 | 676 | 92,939 |
| Child Sexual Exploitation | 87 | 574 | | 661 |
| Counterfeit | 1 | 164 | | 165 |
| Deceased Individuals | 49 | 243 | 6 | 298 |
| Distribution of Hacked Materials | | 4 | 1 | 5 |
| Hateful Conduct | 100 | 3,473 | 25 | 3,598 |
| Illegal or Certain Regulated Goods and Services | 2 | 14,134 | 226 | 14,362 |
| Misleading & Deceptive Identities | | 115 | | 115 |
| Non-Consensual Nudity | 2,253 | 6,678 | 9 | 8,940 |
| Perpetrators of Violent Attacks | 18 | 13 | | 31 |
| Private Information & Media | 346 | 1,518 | 190 | 2,054 |
| Sensitive Media | 69,888 | 51,709 | 31,561 | 153,158 |
| Suicide & Self Harm | 3 | 11,828 | 535 | 12,366 |
| Synthetic & Manipulated Media | | 2 | | 2 |
| Trademark | | 5 | | 5 |
| Violent & Hateful Entities | | 17 | | 17 |
| Violent Speech | 102,313 | 91,724 | 620 | 194,657 |
| Other | 218 | | | 218 |
| Total | 175,968 | 273,774 | 33,849 | 483,591 |

ACTIONS TAKEN ON ACCOUNTS FOR TIUC TERMS OF SERVICE AND RULES VIOLATIONS

Art. 15.1.c: TIUC Terms of Service and Rules Suspension Actions - 21/10/23 to 31/3/24

| Policy | Auto-Enforced | Manually Enforced | Proactively Detected, Manually Enforced | Total |
|---|---|---|---|---|
| Abuse & Harassment | 3 | 40,572 | 2 | 40,577 |
| Ban Evasion | 103 | 260 | | 363 |
| Child Sexual Exploitation | 90,438 | 76,532 | | 166,970 |
| Copyright | | 1,241 | | 1,241 |
| Counterfeit | 15 | 540 | | 555 |
| Country Withheld Content for Illegal Activity | | 20 | | 20 |
| Deceased Individuals | | 5 | | 5 |
| Distribution of Hacked Materials | | 1 | | 1 |
| Financial Scams | 443 | 2,645 | | 3,088 |
| Hateful Conduct | | 714 | | 714 |
| Illegal or Certain Regulated Goods and Services | 1,260 | 16,963 | | 18,223 |
| Misleading & Deceptive Identities | 15,381 | 11,113 | | 26,494 |
| Non-Consensual Nudity | 19 | 1,699 | | 1,718 |
| Perpetrators of Violent Attacks | 292 | 381 | | 673 |
| Platform Manipulation & Spam | 10,840,796 | 393,494 | | 11,234,290 |
| Private Information & Media | | 85 | | 85 |
| Sensitive Media | 8 | 167 | | 175 |
| Suicide & Self Harm | | 305 | | 305 |
| Trademark | | 7 | | 7 |
| Violent & Hateful Entities | 1,925 | 4,006 | | 5,931 |
| Violent Speech | | 11,543 | | 11,543 |
| Other | 46,376 | 3,248 | | 49,624 |
| Total | 10,997,059 | 565,541 | 2 | 11,562,602 |

Important Notes about Action based on TIUC Terms of Service and Rules Violations:

  1. The category “Other” refers to cases of workflow exceptions and tooling inconsistencies which prevent further attribution to a specific policy of the TIUC Terms of Service and Rules.
  2. User reports of illegal content which have been actioned under the TIUC Terms of Service and Rules are displayed in the table "Actions Taken on Illegal Content".

ORDERS RECEIVED FROM MEMBER STATES’ AUTHORITIES INCLUDING ORDERS ISSUED IN ACCORDANCE WITH ARTICLES 9 (REMOVAL ORDERS) AND 10 (INFORMATION REQUESTS)

Art. 15.1.a: Removal Orders Received - 21/10/23 to 31/3/24

| Member State | Unsafe and/or Illegal Products | Illegal or Harmful Speech | Total |
|---|---|---|---|
| France | 8 | | 8 |
| Italy | | 1 | 1 |
| Spain | | 4 | 4 |
| Total | 8 | 5 | 13 |

Removal Orders Median Time To Acknowledge Receipt

X provides an automated acknowledgement of receipt of removal orders submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time was zero hours.

Removal Orders Median Handle Time

The median handle time to resolve removal orders during the reporting period was 4.1 hours.
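
For illustration only: a median handle time such as the 4.1 hours above is the middle value of the per-order resolution times. The sample values below are invented.

```python
from statistics import median

# Hypothetical per-order resolution times, in hours.
handle_times_hours = [0.5, 1.2, 3.9, 4.3, 6.8, 25.0]
print(median(handle_times_hours))  # -> 4.1 for this invented sample
```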

Important Notes about Removal Orders:

  1. To improve clarity, we've omitted countries and violation types with no legal requests from the tables above.
  2. The “Removal Orders Received” table shows the category which we considered the best fit and under which we handled the order. This category might deviate from the information provided by the authority when submitting the order via the X online submission platform.
  3. In the cases from France, Italy and Spain, we asked the submitting authority to fulfil Article 9 information requirements but did not receive responses in the reporting period.

Art. 15.1.a: Information Requests Received - 21/10/23 to 31/3/24

| Member State | Animal Welfare | Data Protection and Privacy Violations | Illegal or Harmful Speech | Intellectual Property Infringements | Negative Effects on Civic Discourse or Elections | Non-Consensual Behaviour | Not Specified / Unknown | Pornography or Sexualized Content | Protection of Minors | Risk for Public Security | Scams and/or Fraud | Scope of Platform Service | Self-Harm | Unsafe and/or Illegal Products | Violence |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Austria | | | 14 | | | | | | | 8 | 1 | | | | 1 |
| Belgium | | 1 | 9 | 1 | | | | 1 | | 108 | 6 | | | | 11 |
| Denmark | | | | 1 | | | | | 1 | | 2 | | 1 | | |
| Finland | | | | 1 | | 1 | | | 1 | 2 | 2 | | | | |
| France | | 10 | 156 | 1 | | 7 | 2 | 7 | 10 | 1,985 | 47 | | | | 222 |
| Germany | 1 | 5 | 2,465 | 8 | 3 | 48 | | 59 | 92 | 148 | 74 | 1 | | 5 | 260 |
| Greece | | 1 | 10 | | | | | | 1 | 4 | | | | | 5 |
| Hungary | | | | | | | | | 1 | | | | | | 1 |
| Ireland | | 1 | 10 | | | | | | 6 | 1 | 2 | | | | 21 |
| Italy | | | 25 | 1 | 1 | | | 3 | 3 | 14 | 5 | | | | 18 |
| Latvia | | | | | | | | | | | | | | | 1 |
| Malta | | | | | | | | | 1 | | 1 | | | | 1 |
| Netherlands | | 1 | | | | | | | | | 12 | | | | 17 |
| Poland | | 1 | 27 | | 1 | | | | 1 | 7 | | | | | 12 |
| Portugal | | 1 | 3 | | | | | | | | | | | | 4 |
| Spain | | 5 | 39 | | | | 3 | 3 | 1 | 10 | 14 | | | 2 | 13 |
| Total | 1 | 26 | 2,758 | 13 | 5 | 56 | 5 | 73 | 118 | 2,287 | 166 | 1 | 1 | 7 | 587 |

Information Request Median Time To Acknowledge Receipt

X provides an automated acknowledgement of receipt of information requests submitted by law enforcement through our Legal Request submission portal. As a consequence of this immediate acknowledgement of receipt, the median time was zero hours.

Information Request Median Handle Time

The median time to resolve information requests during the reporting period was 74 hours.

Important Notes about Information Requests:

  1. To improve clarity, we've omitted countries and violation types with no legal requests from the tables above.
  2. The content category for each request is determined by the information law enforcement provides when submitting requests through the X online submission platform. If law enforcement does not provide sufficient information during form submission, the category is determined based on the allegations provided in the legal process. Where multiple illegal content categories were provided, only the gravamen offence was included.
  3. The median handling time is the time between receiving the order and either: 1) disclosing information to law enforcement if the order is valid; or 2) pushing back due to legal issues. The median handling time does not include extra time where X pushes back due to legal issues, receives a valid order or additional information later, and disclosure is eventually made.
  4. Due to the case resolution timelines described in Note 3 above, some requests received before the 31 March 2024 transparency report cut-off date had not yet reached resolution and are therefore not included in the median handling time. Because these requests were received within the reporting period, they are still included in the illegal content issue type data in the “Information Requests Received” table above.
  5. The “Not Specified/Unknown” category shows cases where the illegal content category could not be determined based on the information law enforcement provided during the submission process and/or in the legal process.

Reports submitted in accordance with Article 16 (Illegal Content)

ACTIONS TAKEN ON ILLEGAL CONTENT

ACTIONS TAKEN ON ACCOUNTS FOR POSTING ILLEGAL CONTENT: We suspended accounts in response to 11,268 reports of Intellectual Property Infringements. This was the only type of local law violation that resulted in account suspension, as many types of illegal behaviour are already addressed by our own policies (for example, accounts posting CSE are suspended under the X Rules).

REPORTS OF ILLEGAL CONTENT

Art. 15.1.b: Illegal Content Reports Received - 21/10/23 to 31/3/24

| Member State | Animal Welfare | Data Protection and Privacy Violations | Illegal or Harmful Speech | Intellectual Property Infringements | Negative Effects on Civic Discourse or Elections | Non-Consensual Behaviour | Pornography or Sexualized Content | Protection of Minors | Risk for Public Security | Scams and/or Fraud | Scope of Platform Service | Self-Harm | Unsafe and/or Illegal Products | Violence | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Austria | 32 | 111 | 922 | 73 | 109 | 31 | 122 | 64 | 105 | 547 | 17 | 22 | 37 | 190 | 2,382 |
| Belgium | 19 | 146 | 834 | 39 | 103 | 54 | 202 | 96 | 69 | 658 | 2 | 10 | 43 | 163 | 2,438 |
| Bulgaria | 7 | 18 | 114 | 11 | 25 | 8 | 54 | 38 | 21 | 102 | | 3 | 6 | 37 | 444 |
| Croatia | 1 | 18 | 147 | 17 | 6 | 5 | 22 | 8 | 27 | 106 | 1 | 1 | 3 | 66 | 428 |
| Cyprus | 6 | 17 | 55 | 12 | 13 | 4 | 16 | 8 | 8 | 71 | 1 | 2 | 13 | 6 | 232 |
| Czechia | 7 | 84 | 1,083 | 49 | 119 | 7 | 72 | 41 | 211 | 342 | 5 | 10 | 80 | 173 | 2,283 |
| Denmark | 4 | 51 | 465 | 106 | 45 | 38 | 115 | 67 | 447 | 244 | 2 | 3 | 18 | 67 | 1,672 |
| Estonia | 1 | 25 | 111 | 14 | 14 | 6 | 27 | 21 | 36 | 42 | 1 | | 2 | 13 | 313 |
| EU | 255 | 1,188 | 14,553 | | 968 | 363 | 1,625 | 1,935 | 1,132 | 3,221 | 112 | 401 | 404 | 3,105 | 29,262 |
| Finland | 3 | 45 | 369 | 89 | 42 | 23 | 19 | 342 | 42 | 367 | 2 | 5 | 37 | 148 | 1,533 |
| France | 269 | 2,408 | 21,110 | 4,691 | 746 | 958 | 3,310 | 5,925 | 1,764 | 4,217 | 105 | 221 | 1,366 | 3,640 | 50,730 |
| Germany | 214 | 2,872 | 41,324 | 3,223 | 5,208 | 656 | 2,491 | 12,753 | 3,024 | 2,621 | 173 | 358 | 657 | 7,965 | 83,539 |
| Greece | 9 | 118 | 255 | 80 | 13 | 17 | 118 | 36 | 34 | 159 | | 5 | 12 | 77 | 933 |
| Hungary | 7 | 23 | 117 | 19 | 23 | 2 | 115 | 15 | 14 | 268 | 1 | 2 | 8 | 38 | 652 |
| Ireland | 8 | 182 | 1,060 | 605 | 165 | 17 | 114 | 90 | 117 | 700 | 20 | 20 | 63 | 204 | 3,365 |
| Italy | 29 | 411 | 3,440 | 460 | 277 | 80 | 434 | 213 | 231 | 1,035 | 30 | 55 | 78 | 1,032 | 7,805 |
| Latvia | 1 | 29 | 209 | 8 | 16 | 4 | 22 | 17 | 18 | 30 | 1 | 1 | 2 | 31 | 389 |
| Lithuania | 1 | 5 | 57 | 489 | 7 | 2 | 16 | 60 | 12 | 30 | 1 | 6 | 4 | 18 | 708 |
| Luxembourg | 2 | 8 | 62 | 38 | 7 | 18 | 4 | 8 | 1 | 36 | 3 | | 3 | 7 | 197 |
| Malta | | 4 | 10 | 7 | 1 | 9 | 10 | 10 | 1 | 35 | 7 | 2 | 1 | | 97 |
| Netherlands | 24 | 427 | 1,577 | 998 | 546 | 102 | 205 | 2,288 | 205 | 964 | 35 | 81 | 133 | 275 | 7,860 |
| Poland | 47 | 427 | 2,523 | 1,462 | 334 | 117 | 367 | 2,016 | 352 | 1,166 | 4 | 34 | 85 | 378 | 9,312 |
| Portugal | 10 | 164 | 1,193 | 581 | 315 | 57 | 151 | 37 | 46 | 511 | 3 | 10 | 48 | 255 | 3,381 |
| Romania | 4 | 40 | 176 | 79 | 14 | 3 | 105 | 40 | 16 | 205 | 2 | | 6 | 48 | 738 |
| Slovakia | | 26 | 65 | 3 | 2 | 1 | 15 | 15 | 8 | 17 | 1 | 1 | | 11 | 165 |
| Slovenia | | 21 | 41 | 3 | 10 | 2 | 5 | 2 | 17 | 54 | 1 | | 1 | 10 | 167 |
| Spain | 99 | 1,453 | 5,766 | 3,867 | 289 | 152 | 855 | 6,388 | 468 | 3,324 | 37 | 139 | 309 | 1,291 | 24,437 |
| Sweden | 10 | 58 | 567 | 1,262 | 34 | 37 | 143 | 75 | 41 | 281 | 7 | 5 | 30 | 96 | 2,646 |
| Total | 1,069 | 10,379 | 98,205 | 18,285 | 9,451 | 2,773 | 10,754 | 32,608 | 8,467 | 21,353 | 574 | 1,397 | 3,449 | 19,344 | 238,108 |

REPORTS RESOLVED BY ACTIONS TAKEN ON ILLEGAL CONTENT

Art. 15.1.b & c: Automated Content Deletion Actions Taken on Reported Illegal Content - 21/10/23 to 31/3/24

| Category | Global Content Deletion | Country Withheld Content | No Violation Found | Total |
|---|---|---|---|---|
| Animal Welfare | 4 | | 6 | 10 |
| Data Protection & Privacy Violations | 2 | | 94 | 96 |
| Illegal or Harmful Speech | 2 | 8 | 152 | 162 |
| Intellectual Property Infringements | | | | |
| Negative Effects on Civic Discourse or Elections | | | 33 | 33 |
| Non-Consensual Behaviour | | | 19 | 19 |
| Pornography or Sexualized Content | | | 101 | 101 |
| Protection of Minors | | | 27 | 27 |
| Risk for Public Security | 1 | | 30 | 31 |
| Scams and/or Fraud | | | 1,161 | 1,161 |
| Scope of Platform Service | | | 7 | 7 |
| Self-Harm | | | 4 | 4 |
| Unsafe and Illegal Products | | | 67 | 67 |
| Violence | | | 46 | 46 |
| Total | 9 | 8 | 1,747 | 1,764 |

Art. 15.1.b: Manual Content Deletion Actions Taken on Reported Illegal Content - 21/10/23 to 31/3/24

| Category | Global Content Deletion | Temporary Suspension and Content Deletion | Global Suspension | Offer of Help in case of Self-Harm / Suicide Concern | Global Content Removal based on Local Law | Country Withheld Content | No Violation Found | Total |
|---|---|---|---|---|---|---|---|---|
| Animal Welfare | 285 | | | | | 85 | 704 | 1,074 |
| Data Protection & Privacy Violations | 973 | | | | 7 | 1,851 | 7,650 | 10,481 |
| Illegal or Harmful Speech | 3,619 | | | 2 | 17 | 36,729 | 62,747 | 103,114 |
| Intellectual Property Infringements | | | 11,268 | | | 7,976 | | 19,244 |
| Negative Effects on Civic Discourse or Elections | 68 | | | | | 1,039 | 8,829 | 9,936 |
| Non-Consensual Behaviour | 209 | | | | 12 | 586 | 1,832 | 2,639 |
| Pornography or Sexualized Content | 1,480 | | | 1 | 59 | 3,699 | 4,522 | 9,761 |
| Protection of Minors | 28,401 | | | 4 | 260 | 1,153 | 4,474 | 34,292 |
| Risk for Public Security | 997 | 1 | | | 3 | 1,286 | 6,530 | 8,817 |
| Scams and/or Fraud | 63 | | | 1 | 1 | 3,311 | 15,407 | 18,783 |
| Scope of Platform Service | 4 | | | | 1 | 59 | 499 | 563 |
| Self-Harm | 186 | | | 150 | | 141 | 1,010 | 1,487 |
| Unsafe and Illegal Products | 305 | | | | | 1,245 | 1,823 | 3,373 |
| Violence | 3,732 | | | 8 | 3 | 3,634 | 12,889 | 20,266 |
| Total | 40,322 | 1 | 11,268 | 166 | 363 | 62,794 | 128,916 | 243,830 |

REPORTS OF ILLEGAL CONTENT MEDIAN HANDLE TIME

The median time to resolve illegal content notices during the reporting period was 2.7 hours.

Important Notes about Actions taken on illegal content:

  1. The disparity between reports received and reports handled is caused by cases still pending at the end of the reporting period.
  2. We only use automated means to close user reports of illegal content where: (i) the reported content is no longer accessible to the reporter following other means/workflows; or (ii) the reporter displays bad-actor patterns.
  3. The numbers for “Intellectual property infringements” reflect reports rather than individual items of content and accounts. Actions against intellectual property infringements are taken globally, meaning that media that infringes copyright and accounts that infringe trademarks are disabled globally.
  4. Action Types: actions that do not reference the TIUC Terms of Service and Rules have been taken based on illegality.
  5. To improve clarity, we've omitted countries and violation types with zero reports from the tables above.
  6. The tables “REPORTS RESOLVED BY ACTIONS TAKEN ON ILLEGAL CONTENT” and “REPORTS OF ILLEGAL CONTENT MEDIAN HANDLE TIME” were updated on 13 November 2023 to replace an undefined description (“reported content”) with the relevant enforcement method (“manual closure”).

Complaints received through our internal complaint-handling system.

ILLEGAL CONTENT COMPLAINTS

Art. 15.1.d: Illegal Content Complaints - 21/10/23 to 31/3/24

| Category | Volume | Overturns after Complaint | Complaints Rejected |
|---|---|---|---|
| Complaints | 667 | 190 | 477 |

Illegal Content Complaints Median Handle Time

The median time to resolve illegal content complaints was 2.8 hours.

TERMS OF SERVICE COMPLAINTS

Art. 15.1.d: TIUC Terms of Service and Rules Action Complaints - 21/10/23 to 31/3/24

| Category | Volume | Overturns after Complaint |
|---|---|---|
| Account Suspension Complaints | 285,785 | 34,986 |
| Content Action Complaints | 39,858 | 3,708 |
| Live Feature Action Complaints | 1,147 | 0 |
| Restricted Reach Complaints | 11,650 | 6,387 |
| Sensitive Media Action Complaints | 5,212 | 2,762 |
| Total | 343,652 | 47,843 |

TIUC Terms of Service and Rules Complaints Median Handle Time

The median time to resolve TIUC Terms of Service and Rules complaints was 0.34 hours.

Important Notes about Complaints:

  1. Information on the basis of complaints is not provided due to the wide variety of underlying reasoning contained in the open text field in the complaint form.
  2. To improve clarity, we've omitted countries and violation types with zero complaints from the tables above.

Indicators Of Accuracy For Content Moderation

The possible rate of error of the automated means used in fulfilling those purposes, and any safeguards applied.

Art. 15.1.e: TIUC Terms of Service and Rules Enforcement Indicators of Accuracy -  21/10/23 to 31/3/24

| Enforcement | Appeal Rate | Overturn Rate |
|---|---|---|
| Automated Means | 1.50% | 17.07% |
| Manual Closure | 7.22% | 9.26% |
| All Actions | 1.91% | 14.95% |

Important Notes about indicators of accuracy:

  1. Overturn rates are calculated by dividing the number of overturned enforcements by the number of enforcement appeals, as illustrated in the sketch below.
  2. For suspensions, appeals, and overturns, we used a consistent measurement approach across action types.
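
A worked sketch of the definitions in Note 1: the appeal rate divides appeals by enforcements, and the overturn rate divides overturned enforcements by appeals. The counts below are invented, chosen only to reproduce the “All Actions” row in the table above.

```python
def accuracy_indicators(enforcements: int, appeals: int, overturns: int):
    """Appeal rate = appeals / enforcements; overturn rate = overturns / appeals."""
    return appeals / enforcements, overturns / appeals

# Hypothetical counts that reproduce the "All Actions" rates above.
appeal_rate, overturn_rate = accuracy_indicators(
    enforcements=200_000, appeals=3_820, overturns=571)
print(f"appeal rate {appeal_rate:.2%}, overturn rate {overturn_rate:.2%}")
```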

Disputes submitted to out-of-court dispute settlement bodies.

To date, zero disputes have been submitted to the out-of-court settlement bodies.

 

Reports received by trusted flaggers.

We received zero reports from trusted flaggers approved under Article 22 DSA during the reporting period. Once information on trusted flaggers awarded status under Article 22 DSA is published, we are prepared to enrol them in our trusted flagger programme, which ensures prioritisation of human review.

Human Resources dedicated to Content Moderation.

Today, we have 1,849 people working in content moderation. Our teams work on both initial reports and complaints against initial decisions across the world, and are not specifically designated to work only on EU matters.

LINGUISTICS EXPERTISE OF OUR CONTENT MODERATION TEAM

X’s scaled operations team possesses a variety of skills, experiences, and tools that allow them to effectively review and take action on reports across all of our rules and policies. X has analysed which languages are most common in reports reviewed by our content moderators and has hired content moderation specialists with professional proficiency in those commonly spoken languages. The following table summarises the number of people in our content moderation team who possess professional proficiency in the most commonly spoken languages in the EU on our platform:

Art. 42.2: Linguistics Expertise

| Primary Language | People |
|---|---|
| Arabic | 32 |
| Dutch | 1 |
| English | 1,570 |
| French | 58 |
| German | 61 |
| Hebrew | 2 |
| Italian | 1 |
| Portuguese | 25 |
| Spanish | 25 |

ORGANISATION, TEAM RESOURCES, EXPERTISE, TRAINING AND SUPPORT OF OUR TEAM THAT REVIEWS AND RESPONDS TO REPORTS OF ILLEGAL CONTENT

X has built a specialised team made up of individuals who have received specific training in order to assess and take action on illegal content that we become aware of via reports or other processes such as on our own initiative. This team consists of different tier groups, with higher tiers consisting of more senior, or more specialised, individuals.

When handling a report of illegal content or a complaint against a previous decision, content and senior content reviewers first assess the content under X’s Rules and policies. If no violation of X’s Rules and policies warranting a global removal of the content is determined, the content reviewers assess the content for potential illegality. If the content is not manifestly illegal, it can be escalated for second or third opinions. If more detailed investigation is required, content reviewers can escalate reports to experienced policy and/or legal request specialists who have also undergone in-depth training. These individuals take appropriate action after carefully reviewing the report or complaint and the available context in close detail. In cases where this specialist team still cannot reach a decision regarding the potential illegality of the reported content, the report can be discussed with in-house legal counsel. Everyone involved in this process works closely together, with daily exchanges through meetings and other channels, to ensure the timely and accurate handling of reports.
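
The sketch below loosely models the escalation ladder described above, from content reviewers up to specialists and legal counsel. The tier names, report format and the assess() stand-in are invented for illustration only.

```python
# Hypothetical escalation tiers, lowest to highest.
TIERS = ["content_reviewer", "senior_reviewer", "specialist", "in_house_legal_counsel"]

def assess(report: dict, tier: str) -> str:
    """Stand-in for a human judgement at a given tier."""
    return report.get(tier, "undetermined")

def review(report: dict) -> str:
    """Walk the escalation ladder until some tier reaches a decision."""
    for tier in TIERS:
        decision = assess(report, tier)
        if decision != "undetermined":
            return decision
    return "undetermined"  # in practice, discussed further with legal counsel

# Example: the first two tiers are undetermined; the specialist decides.
print(review({"specialist": "remove_in_reporting_country"}))
```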

All teams involved in resolving reports of illegal content closely collaborate with a variety of other policy teams at X who focus on safety, privacy, and authenticity rules and policies. This cross-team effort is particularly important in the aftermath of tragic events, such as violent attacks, to ensure alignment and swift action on violative content.

Content reviewers are supported by team leads, subject matter experts, quality auditors and trainers. We hire people with diverse backgrounds in fields such as law, political science, psychology, communications, sociology and cultural studies, and languages.

Training and support of persons processing legal requests

All team members are trained and retrained regularly on our tools, processes, rules and policies, including special sessions on cultural and historical context. When joining the team at X, each individual follows an onboarding programme and receives individual mentoring during this period, as well as thereafter through our Quality Assurance programme (for external employees) and through in-house and external counsel (for internal employees).

All team members have direct access to robust training and workflow documentation for the entirety of their employment, and are able to seek guidance at any time from trainers, leads, and internal specialist legal and policy teams as outlined above as well as managerial support.

Updates about significant current events or rules and policy changes are shared with all content reviewers in real time to give guidance and facilitate balanced and informed decision making. In the case of rules and policy changes, all training materials and related documentation are updated. Calibration sessions were carried out frequently during the reporting period. These sessions aim to increase collective understanding and focus on the needs of the content reviewers in their day-to-day work.

The entire team also participates in obligatory X Rules and policies refresher training as the need arises or whenever rules and policies are updated. These trainings are delivered by the relevant policy specialists who were directly involved in the development of the rules and policy change. For these sessions we also employ the “train the trainer” method to ensure timely training delivery to the whole team across all of the shifts. All team members use the same training materials to ensure consistency.

Training and Support provided to those Persons performing Content Moderation Activities for our TIUC Terms of Service and Rules

There is a robust training programme and system in place for every workflow to provide content moderators with the work skills and job knowledge required for processing user cases. All agents must be trained in their assigned workflows. These focus areas ensure that X agents are set up for success before and during the content moderation lifecycle.

X’s training programmes and resources are designed based on needs, and a variety of modalities are employed to diversify the agent learning experience.

Classroom training is delivered either virtually or face-to-face by expert trainers.

When agents successfully complete their classroom training programme, they undergo a nesting period. The nesting phase includes case study by observation, demonstration and hands-on training on live cases. Quality audits are conducted for each nesting agent, and agents are coached on any mis-action identified in their quality scores on the same day the case was reviewed. Trainers conduct a needs assessment for each nesting agent and prepare refresher training accordingly. After the nesting period, agents' work is evaluated on an ongoing basis by a team of Quality Analysts to identify gaps and address potential problem areas.

When an agent needs to be upskilled, they receive training for a specific workflow within the same pillar in which they currently work. The training includes the classroom training and nesting phases described above.

Refresher sessions take place when an agent has previously been trained and has access to all the necessary tools, but needs a review of some or all topics. This may happen for content moderators who have been on prolonged leave, have transferred temporarily to another content moderation policy workflow, or have recurring errors in their quality scores. After a needs assessment, trainers are able to pinpoint what the agent needs and prepare a session targeting those needs and gaps.

Monthly Active Recipients

During the period from 21 October 2023 through 31 March 2024, there was an average of 109,191,304 monthly active recipients of the service (AMARs) across EU member states.

Art. 24.2: Average Monthly Active Recipients - 21/10/2023 - 31/3/24

| Country | Logged In Users | Logged Out Users | Total |
|---|---|---|---|
| Austria | 759,031 | 678,858 | 1,437,889 |
| Belgium | 1,589,878 | 1,241,627 | 2,831,505 |
| Bulgaria | 421,437 | 270,109 | 691,546 |
| Cyprus | 167,615 | 125,617 | 293,232 |
| Czechia | 1,078,525 | 1,174,706 | 2,253,231 |
| Germany | 9,956,377 | 5,719,553 | 15,675,930 |
| Denmark | 750,744 | 450,946 | 1,201,690 |
| Estonia | 159,850 | 126,717 | 286,568 |
| Spain | 9,771,626 | 8,577,285 | 18,348,911 |
| Finland | 899,449 | 909,541 | 1,808,989 |
| France | 12,117,220 | 7,767,663 | 19,884,883 |
| Greece | 887,072 | 987,607 | 1,874,679 |
| Croatia | 264,753 | 486,135 | 750,888 |
| Hungary | 729,192 | 645,048 | 1,374,240 |
| Ireland | 1,448,645 | 1,194,878 | 2,643,524 |
| Italy | 4,915,997 | 3,100,907 | 8,016,904 |
| Lithuania | 559,731 | 168,913 | 728,644 |
| Luxembourg | 200,314 | 90,371 | 290,686 |
| Latvia | 291,787 | 201,732 | 493,518 |
| Malta | 79,336 | 47,457 | 126,794 |
| Netherlands | 4,328,159 | 3,464,877 | 7,793,036 |
| Poland | 7,127,924 | 4,551,638 | 11,679,562 |
| Portugal | 1,613,616 | 964,655 | 2,578,271 |
| Romania | 1,721,282 | 585,185 | 2,306,467 |
| Sweden | 1,692,387 | 1,016,343 | 2,708,729 |
| Slovenia | 215,546 | 307,023 | 522,569 |
| Slovakia | 277,545 | 310,875 | 588,420 |
| Total | 64,025,037 | 45,166,266 | 109,191,304 |

The AMARs figure for the entire EU over the past six months is 66.1M. The difference between the total AMARs for the EU and the cumulative total AMARs for all EU member states is due to double counting of logged-out users accessing X from various EU countries within the relevant time period.
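
A minimal sketch of the double counting explained above: a logged-out user observed in several member states contributes to each country's count, but only once to the EU-wide figure. The user IDs and countries below are invented.

```python
# Hypothetical sets of users seen per country; "u2" appears in all three.
country_users = {
    "FR": {"u1", "u2", "u3"},
    "DE": {"u2", "u4"},
    "ES": {"u2", "u5"},
}

sum_of_country_counts = sum(len(users) for users in country_users.values())
eu_wide_unique_count = len(set.union(*country_users.values()))
print(sum_of_country_counts, eu_wide_unique_count)  # 7 vs 5: the sum is inflated
```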

- - - - - - - - - - - - - - - - - - Appendix - - - - - - - - - - - - - - - - -