REPORT SETTING OUT THE RESULTS OF TWITTER INTERNATIONAL UNLIMITED COMPANY RISK ASSESSMENT PURSUANT TO ARTICLE 34 EU DIGITAL SERVICES ACT

AUGUST 2024

TABLE OF CONTENTS

I. Executive Summary
II. Introduction
III. The DSA & X
IV. X Risk Environment: Influencing Factors & Controls
    Zoom in: Community Notes
V. X DSA Systemic Risk Governance Framework
VI. Methodology
    A. Walkthrough
        Phase I: Identification of systemic risks
        Phase II: Assessment
        Phase III: Mitigation measures
    B. Stakeholder engagement and consultation
VII. Summary of risk assessments
    A. Dissemination of illegal content
        Zoom-in: Israel/Hamas – Crisis Protocol
        Dissemination of Terrorist Content
        Dissemination of Illegal Hate Speech
        Dissemination of Child Sexual Abuse Material (CSAM)
        Dissemination of IP & Copyright infringing content
    B. Exercise of fundamental rights
        Zoom in: Transparent restricted reach labelling
        Freedom of expression
        Consumer protection
        Protection of minors
        Protection of personal data
        Other fundamental rights
    C. Democratic processes, civic discourse, electoral processes, and public security
        Zoom-in: EU elections
        Negative effects to democratic processes, civic discourse, and electoral processes
        Risks to public security
    D. Public health, physical and mental well-being, and gender-based violence
        Zoom-in: GenAI & Gender-Based Violence – Taylor Swift Deepfake
        Risks to public health and physical and mental well-being
        Risks of gender-based violence
VIII. Considerations for further mitigations
IX. Annex: Matrices

I. Executive Summary

With over 45M monthly active users in the EU, X was designated as a Very Large Online Platform (VLOP) under the EU Digital Services Act (DSA) on April 25, 2023. In accordance with DSA Article 34, we have conducted a comprehensive assessment that identifies, analyses and assesses any systemic risks to the Union stemming from the design or functioning of our service, its related systems (including algorithmic systems) and from the use made of our services.

In keeping with our legal obligations under EU law, we have taken into consideration the following factors: the dissemination of illegal content through our service; any actual and foreseeable negative effects to the exercise of fundamental rights; any actual or foreseeable negative effects in relation to civic discourse, electoral processes, and public security; and any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors, and serious negative consequences to the physical and mental well-being of individuals. In accordance with Article 34(2), the risk assessment also addresses our recommender systems, content moderation systems, applicable terms and conditions, systems for the selection and presenting of advertisements, and any of X's data related practices. This risk assessment covers TIUC's designated service [1] as of June 30, 2024.

[1] Twitter International Unlimited Company (TIUC) is the service provider of the X VLOP (X) in the EU. Throughout this report, we will use "X" to refer both to the designated VLOP service and its service provider.

In this DSA Risk Assessment summary report, X summarises the outcomes of its second annual systemic risk assessment exercise. As this exercise builds on the first risk assessment, X uses 'Y1' to refer back to the risk assessment exercise and report submitted in 2023, and 'Y2' to refer to the risk assessment conducted in 2024 and the current report.
This report summarises X's consideration of new inherent risks since August 2023, new and improving controls in place, the residual risk that remains on the platform, and further routes that X could explore to tackle the residual risk.

Our Y1 methodology aimed to serve as a blueprint for future risk assessments. In Y2, we have enhanced the methodology with further learnings from academia, industry best practices, regulatory guidance, and internal stakeholder feedback. In accordance with DSA Article 34, our risk assessment covers the four systemic risk areas, and provides a granular assessment through 13 individual assessments.

For each identified risk area, we assessed how our platform's design, functioning, use, or potential misuse could result in inherent risks in Y2; mapped existing and new controls and remediations against these inherent risks; and assessed the residual risk that remains on our platform in Y2. Following our assessment, we found that our controls bring down the level of risk for most areas to a low or medium level. We look to improve our existing controls and explore further measures, to continue to mitigate this residual risk. Our measures are designed to address Article 34 systemic risks and are proportional to X's capacity, while avoiding unnecessary restrictions on service use. Special consideration is given to the impact on freedom of expression. Acknowledging that these systemic risks are continuously evolving and can be impacted by intentional coordinated exploitation, we remain committed to continuing to monitor and mitigate these risk areas.

We have conducted this DSA systemic risk assessment utilising our knowledge, resources, and understanding of DSA requirements. Internal teams across the globe, including X management, the DSA Leadership team, Safety, Product Engineering, Legal, Privacy & Data Protection, Global Government Affairs (GGA), the Independent Compliance Function, and the TIUC Board, along with external resources, were relied on in this cross-functional exercise. This second assessment serves as a continuation of our efforts to maintain platform safety in an evolving and iterative process, as envisaged by the DSA.

II. Introduction

X's mission is to promote and protect the public conversation, serving as a trusted digital public town square. With more than 45M monthly active users in the EU, X was designated as a Very Large Online Platform (VLOP) under the EU Digital Services Act (Regulation 2022/2065; the DSA) on 25 April, 2023.

In 2024, we have seen major European elections, including the EU elections and national elections in the Union, alongside emerging public narratives on significant events, such as the Israel/Hamas conflict post-October 7th. As a platform that facilitates public conversation, X has responded to this changing risk environment by addressing the online conversations stemming from these off-platform events in a proportionate manner - balancing freedom of expression while ensuring that our platform and users remain safe. Balancing human rights, including the right to freedom of speech, is the foundation of how we think about and iterate on policy and enforcement.
X's approach to policy and enforcement factors in potential impacts on human rights; negative impacts on physical safety, privacy, and freedom of expression are treated as the most significant and the ones to prevent and mitigate. We believe it is our responsibility to keep users on our platform safe from content violating our Rules.

Last year, we developed our DSA risk assessment methodology with reference to multiple existing frameworks, including, but not limited to, the UN Guiding Principles on Business and Human Rights and the DTSP Safe Assessments Framework, and adapted them to the unique environment of X.

In consideration of new guidance, we introduced a more robust methodology for our score calculations across the four systemic risks identified by Article 34(1) of the DSA. We further identified subcategories of each risk to facilitate more granular analysis. Additionally, we standardised our evidence base, enabling a more precise scoring system and better comparability across risk areas. Notably, we adjusted our scales to consider vulnerable groups and X users in the EU, providing a more nuanced understanding of how such content manifests on the platform and its reach. These changes are further detailed in VI. Methodology.

Our risk assessment, consistent with last year's approach, involved analysing existing controls to reduce inherent risks and considering additional measures to mitigate systemic risks identified in the assessment. A summary of the results of this exercise can be found in VII. Summary of risk assessments. In identifying further mitigation measures, we considered the residual risks, our economic capacity, and the impact on fundamental rights, particularly freedom of expression. These measures are detailed in VIII. Considerations for further mitigations.

We conducted this risk assessment using our expertise, resources, and understanding of the DSA requirements, while also considering established and emerging cross-industry standards. As the risk assessment and management framework is a continuous exercise, we refer back to our Y1 report and take into consideration the Y1 scores, in order to track the evolution of risks.

III. The DSA & X

With over 111M average monthly users in the EU [2], and 250M daily users globally [3], X continues to be an indispensable platform for the world [4]. Since August 2023, we have adopted and reinforced a vast number of measures to improve our safety mechanisms and empower users in the EU. In compliance with the DSA, this has included a dedicated illegal content reporting form and appeal form for users in the EU, updated communications and statements of reasons to users following enforcement actions, biannual DSA transparency reports, and increased transparency to users about our ads and recommender systems. We have also onboarded designated trusted flaggers, and collaborated with civil society organisations in preparation for and during the elections that took place in the EU over the past year.

While balancing freedom of expression, our cooperation with law enforcement for information requests, removal orders, and proactive referrals in cases of suspicions of criminal activity is ongoing, and we have established dedicated points of contact for both EU authorities and users to contact us with their DSA inquiries. Our Terms of Service and various Help Centre pages have also been updated following the DSA, to clearly reflect summaries of our terms, as well as new information to help our users understand our recommender systems and give them more control over their experience on X.
Our ads transparency center also provides EU users a look into all advertisements and commercial communications present on the platform, with instructions on how to get started. We have also opened an application process for qualified researchers to apply for X API access to conduct research related to DSA systemic risks, separate from our subscriptions for general academic research.

Our product development process has been enhanced to consider dark patterns in a broader context, having historically focussed on dark patterns arising in a data protection context. We also conduct assessments of products that may have a critical impact on systemic risks in the EU, both at a pre-deployment stage and throughout the product's lifetime. This is also core to our risk assessment and risk management process, which we see as a continuous effort over time to mitigate potential risks on X.

Although many of these risks may be manifestations on the platform of existing offline issues, we recognise the role that online platforms may play in disseminating and potentially exacerbating the harms. This is why we continue to invest resources into the DSA risk assessment, an exercise conducted and overseen by a cross-functional team including Safety, Product Engineering, Legal, Privacy & Data Protection, Global Government Affairs (GGA), the Independent Compliance Function, and the TIUC Board.

[2] https://transparency.x.com/en/reports/amars-in-the-eu
[3] https://x.com/XData/status/1769826435576037702
[4] https://blog.x.com/en_us/topics/company/2023/an-update-on-our-work-to-tackle-child-sexual-exploitation-on-x

IV. X Risk Environment: Influencing Factors & Controls

We are constantly improving our rules, processes, technology, and tools to ensure that all of our users can participate in public conversation freely and safely. X's mission has guided our approach to navigating the multi-platform risk environment in which we exist, aiming to provide a service where all users have the power to create and share ideas and information. Our approach to assessing and mitigating risks associated with harmful content continues to be based on a framework that considers physical, psychological, informational, economic and societal harms, allowing us to analyse the potential real-world harm of content and behaviour that may occur on X.

Although the factors listed in Article 34(2) were considered in the context of each systemic risk (captured in VII. Summary of risk assessments), many of these factors pose similar risks, and are mitigated by controls, in a horizontal manner - i.e., acting across all systemic risks. As such, they have been explained below, drawing upon the conclusions from the Y1 exercise and providing insights into changes in risk and corresponding controls in Y2.

Risk of misuse and inauthentic use of X

X is situated in a multi-platform risk environment and bad actors can misuse the service in the same way they misuse other social media platforms. Many risks and harms that manifest on X appear as extensions of often already rapidly evolving offline risks. These risks interact in complex and novel ways across the online platform ecosystem. While our controls are constantly working to reduce harm, we recognise that bad actors may stay a step ahead, and our platform is not invulnerable to manipulation.

Between October 2023 and June 2024, almost of our total enforcement action [5] for X Rules violations was under our Platform Manipulation and Spam policy, indicating the high volumes of such risk on X, as well as X's efforts to mitigate it.
Forms of inauthentic behaviour may include, but are not limited to, financially motivated spam, inauthentic engagements, as well as coordinated activity to artificially amplify hashtags, trends, and other conversations. In April 2024, we initiated additional proactive measures to eliminate accounts that violate our Platform Manipulation and Spam rules to ensure that X remains secure and free of bots [6]. These measures resulted in a significant decline in violative accounts, and we continue to iterate on these measures to continue catching pivoting threats.

[5] Total enforcement data was calculated by taking the sum of total suspensions, total content removals, and an extrapolated total restricted reach labelled posts for the time period of October 2023 to June 2024. For the extrapolated total restricted reach labels, an estimate for the time period was used, as due to data retention issues, real figures are only available for . As such, these values should be understood to be estimates.
[6] https://x.com/Safety/status/1775942160509989256

Design and functionality

We offer a variety of features for users to engage with on the platform through different mediums and formats, such as posts, Spaces, Communities, and X Live, as well as via subscription through X Premium. To learn more about our suite of product-level safety features as well as user controls that allow users to have a safe and meaningful experience on X, please refer to our Y1 report.

Over the last year, we have rolled out new updates to our existing features – such as improvements to Community Notes – as well as new features such as making likes private [7], to continue our work in creating a safe experience for our users.

[7] https://twitter.com/XEng/status/1800959499932496139

Zoom in: Community Notes

This year, Community Notes has more than 100K contributors across the EU, and has been launched on media and videos as well. Posts that have a note on them are demonetised, ensuring that there is no revenue generated from false or misleading information.

External researchers found that users repost 61% less often after a post gets a Community Note, while another study found around a 50% drop in reposts and an 80% increase in post deletions after a post received a Community Note. This aligns with our own research that found a large causal drop in reposts, quotes, and likes on noted posts in an A/B test. This reduction is entirely due to organic user behaviour, since X does not rank posts differently when they are noted. Another recent study found that, across the political spectrum, Community Notes were perceived as significantly more trustworthy than traditional, simple misinformation flags. It also found that Community Notes had a greater effect on improving people's identification of misleading posts. A key driver is believed to be the detailed context that notes provide, right where people can see it.

Speed is important in addressing misleading information – the sooner people see added context, the better. In the past year we've seen that notes can respond quickly at critical times. In the first few days of the Israel-Hamas conflict, notes appeared at a median time of just 5 hours after posts were created. This calculation does not even include notes on images/videos – over 80% of noted posts show media notes, which appear instantly on new posts that include previously noted media. It's also common to see Community Notes appearing days faster than traditional fact checks – which is possible because of the collective intelligence of the contributor community.
In the past year, we've shaved 3-5 hours off the typical time it takes for notes to be scored. On top of this, people who engage with a post before it receives a note get a notification about it. Updates and improvements to our notes are regularly communicated via our X Community Notes handle [8].

Request a Community Note: As of July 2024, users can request a Community Note on a post they believe would benefit from one. This is both a way for everyone on X to help, and it allows Community Notes contributors to see where help is wanted, potentially helping to accelerate their work in proposing new notes. This feature is in pilot testing, and currently only available on the browser version [9].

[8] https://x.com/CommunityNotes/status/1788617818784792880
[9] While this feature was only available on the browser version as of the date of conducting the risk assessment, it was expanded to iOS and Android on Aug 20, 2024.

Prior to deployment, all products go through safety checks to ensure a scaled and monitored approach to launching products. X has incorporated and followed an evaluation process to identify and assess products, features, and functionalities that are likely to have a critical impact on the systemic risks identified under Article 34, in line with the pre-deployment risk assessment duties in Article 34(1).

Beyond products, we strive to give users more control over their experience on the platform through features such as block/mute, hide, and unfollow. Likes were also made private in June to better protect our users' privacy [10]. This means that users can no longer see who liked someone else's post. Only a post's author can see who liked their posts. This also protects freedom of expression, as public likes may have resulted in self-censorship for fear of reaction from viewers.

[10] https://x.com/XEng/status/1800634371906380067

Recommender systems (Article 34(2)(d))

Our recommendations are based upon a variety of signals, including, but not limited to, interests you choose during onboarding, accounts & Topics you follow, posts you've liked, reposted, or otherwise engaged with, and content that is popular in your network. Recommendations may amplify content, can unintentionally elevate specific sources, and may reduce the reach of pluralistic sources of information. Until our systems have flagged an account or content as violative or potentially violative, they remain eligible for amplification and recommendation by our systems. During that time, such accounts and content may continue to receive engagement, thus contributing to their distribution and reach. In an attempt to create personalised experiences, our systems may also run the risk of limiting pluralistic sources.

To mitigate this risk, recommender system controls include safety models to prevent violative accounts and content from being recommended, implementing eligibility requirements for the recommender system, ensuring that sensitive content or inappropriate advertising is not shown to accounts of known minors, and blocking violative keywords from showing up on search autocomplete and trending. Content that is labelled under relevant policies is ineligible for recommendation, which further reduces the spread of such content. Over the past year, users also have the option for each recommender system to engage with non-profiled content. The content shown to users under these options is typically the most recent or popular content without factoring in personalised information, or strictly content from accounts that a user has chosen to follow.
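For illustration, the sketch below shows the general shape of the eligibility gating described above: candidates flagged by safety models or carrying policy labels are dropped before ranking, and the non-profiled option bypasses personalised ranking in favour of followed accounts. This is a minimal sketch, not X's actual implementation; all names, thresholds, and the ranking signal are assumptions.

```python
# Minimal sketch of recommender eligibility gating (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Candidate:
    post_id: str
    safety_score: float            # hypothetical safety-model output; 1.0 = safe
    engagement: float = 0.0        # hypothetical ranking signal
    policy_labels: set = field(default_factory=set)
    author_followed: bool = False

SAFETY_THRESHOLD = 0.9             # assumed cut-off; real values are not public

def eligible(c: Candidate) -> bool:
    """Drop candidates flagged as (potentially) violative or policy-labelled."""
    return c.safety_score >= SAFETY_THRESHOLD and not c.policy_labels

def recommend(candidates: list, profiled: bool) -> list:
    pool = [c for c in candidates if eligible(c)]
    if not profiled:
        # Non-profiled option: only accounts the user follows, no personalisation.
        return [c for c in pool if c.author_followed]
    return sorted(pool, key=lambda c: c.engagement, reverse=True)

posts = [
    Candidate("a", 0.99, engagement=5.0),
    Candidate("b", 0.95, engagement=9.0, policy_labels={"restricted_reach"}),
    Candidate("c", 0.97, engagement=2.0, author_followed=True),
]
print([c.post_id for c in recommend(posts, profiled=True)])   # ['a', 'c']
print([c.post_id for c in recommend(posts, profiled=False)])  # ['c']
```

The point of such a gate is that flagged or labelled content never reaches the ranking step at all, which is why labelling also suppresses recommendation reach.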
Further, user control tools - such as unfollow, mute, block, report, show less often, and more - are designed to help users control what they see and what others can see about them. Recommender systems are thus influenced by such user choices – for example, recommendations delivered to users will not suggest content that includes their muted words or hashtags.

Our approach to recommender systems, along with the parameters used in these systems and how users can influence them, are explained in the following blogs: About our approach to recommendations, Communities Recommendations, Conversations Recommendations, Spaces Recommendations, Trends Recommendations, Search Recommendations, and For You Home Timeline Recommendations.

Policies and enforcement (Article 34(2)(c))

Our aim is for our policies and enforcement measures to be consistent, reasonable, proportionate, and effective. To achieve that, we have built a policy development process focused on balancing the safety and freedom of expression of our users. Our operations and policy functions work together to identify limitations and update policies and enforcement guidelines, as part of our incident responses. To learn more about our policy development process, please refer to our Y1 report.

Over the past year, as part of our ongoing commitment to refine our policies and enforcement, we have conducted a comprehensive review of our existing guidelines and workflows. This has led to improvements in X media policies, particularly around consensual adult content and violent media. By separating our Sensitive Media policy into Adult Content and Violent Content [11], we've accomplished the following:
● User transparency with enhanced and distinct Help Center articles, and reporting experience;
● Clearer data on the prevalence of adult versus violent content on our platform. Previously, such content was grouped under the broad category of Sensitive Media, which did not allow for nuanced analysis; and
● Operational efficiency with clearer guidelines and training/onboarding expectations.

[11] Note that the Sensitive Media policy previously included consensual adult content and violent media within it. As such, allowing consensual adult content on the platform is not an enforcement change, as X has always permitted consensually produced and shared adult content.

We employ a range of enforcement options, either on a specific piece of content (e.g., an individual post or Direct Message) or at an account level through suspensions. In determining what enforcement option to apply, we carefully consider that activity on X is largely reflective of real life conversations, events, and social movements that may include perspectives that could be perceived as offensive or controversial by our users. To learn more about our approach to enforcement, please refer to our Y1 report. For more information on our approach to restricting reach of content, please refer to B. Exercise of fundamental rights.

Content moderation systems (Article 34(2)(b))

X takes seriously its commitment to being a safe platform for all people who use it in a manner consistent with our Rules, and strives to ensure that our Rules are not implemented in a discriminatory manner with respect to protected characteristics. However, as with all moderation systems, there remain inherent risks of false positives and false negatives, for example due to moderator bias, language specialisation, resource allocation, or potential limitations of automated tools.

Over the last year, we have been moving towards an information-first approach for moderating content, which reduces the risk of moderator bias in decision making.
Historically, a decision-first approach has been employed – which means that a moderator analyses content against policy criteria, to then decide if it is a violation or not. However, this risks subjectivity, notably if the criteria are inconsistently applied by different people. An information-first approach aims to reduce potential bias and increase enforcement consistency by having moderators get to an enforcement decision by answering a set series of questions, rather than having them immediately make a decision. For more information on our own-initiative content moderation efforts as well as on our human resources dedicated to content moderation, please refer to our transparency reports.

Our human review efforts are led by an international, cross-functional team with 24-hour coverage and the ability to cover multiple languages. We provide our reviewers with a robust support system to ensure that they are prepared to perform their duties. Each reviewer goes through extensive training and refreshers, and they are provided with a suite of wellness initiatives. Manual content moderation resourcing requirements can experience fluctuations based on a variety of challenges such as trending issues and product feature changes. To address this, weekly operational capacity review meetings are held that consider incoming volumes, our meet rate against service level agreements, any case backlog accumulation, and assessment of risk. As a result of this analysis, moderation resources may be reallocated, removed, or reserves committed to address emergent crises and opportunities.

Automated enforcements for X Rules undergo testing before being applied to the live product to mitigate the above. Both machine learning and heuristic models are trained and/or validated on data points and labels (e.g., violative or non-violative) that are generated by trained human content reviewers. We have feedback loops for our automated detection systems to monitor their performance using the rate at which human content reviews agree with the automated system decision. Reviewers have expertise in the applicable policies and are trained by our policy specialists to ensure the reliability of their decisions. Human review helps us to confirm that these automations achieve a level of precision, and sizing helps us understand what to expect once the automations are launched.

In addition, humans proactively conduct manual content reviews for potential X Rules violations. We conduct proactive sweeps for certain high-priority categories of potentially violative content both periodically and during major events, such as elections. Agents also proactively review content flagged by heuristic and machine learning models for potential violations of other policies, including our Violent Content, Child Sexual Exploitation and Violent and Hateful Entities policies. Once reviewers have confirmed that the detection meets an acceptable standard of precision, we consider the automation to be ready for launch. Once launched, automations are monitored dynamically for ongoing performance and health. If we detect anomalies in performance, our Engineering teams - with support from other functions - revisit the automation to diagnose any potential problems and adjust the automations as appropriate.
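As a purely illustrative sketch of the agreement-rate feedback loop described above, the snippet below computes the rate at which human reviewers agree with sampled automated decisions, and uses it both as a pre-launch precision gate and as a post-launch health check. The thresholds and names are assumptions; the report does not publish the actual bars.

```python
# Illustrative agreement-rate loop for automated enforcement (assumed values).

def agreement_rate(samples):
    """samples: iterable of (automated_decision, human_decision) pairs."""
    samples = list(samples)
    if not samples:
        return None
    agreed = sum(1 for auto, human in samples if auto == human)
    return agreed / len(samples)

LAUNCH_BAR = 0.95   # assumed minimum human-agreement rate before launch
HEALTH_BAR = 0.90   # assumed post-launch floor that triggers re-diagnosis

def evaluate(samples, launched: bool) -> str:
    rate = agreement_rate(samples)
    if rate is None:
        return "insufficient sample"
    if not launched:
        return "ready to launch" if rate >= LAUNCH_BAR else "keep training"
    return "healthy" if rate >= HEALTH_BAR else "anomaly: route to Engineering"

# Example: human reviewers agreed with 93 of 100 sampled automated decisions.
pairs = [("remove", "remove")] * 93 + [("remove", "keep")] * 7
print(agreement_rate(pairs))             # 0.93
print(evaluate(pairs, launched=True))    # healthy
```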
Systems for selecting and presenting ads (Article 34(2)(d))

As with all online platforms, there is an inherent risk that violative ads could be posted on our platform. While our moderation systems and human moderators work to identify such ads, they may not catch every violation, potentially leading to missed violations or uneven enforcement. Additionally, advertisers may attempt to target minors based on profiling and using personal data. Users might also face challenges in understanding ad targeting, their privacy options, or the process for reporting ads that violate our policies.

At ad creation time, our system is set up to proactively detect violative ads by employing machine learning models and business logic such as denylist terms, so as to mitigate this risk. Denylist terms restrict content from appearing on promoted posts. When a term is added to the ad review denylist, any promoted content mentioning the term or phrase will automatically put the advertisement into a review hold state, requiring a human review before proceeding. There remains a possibility that some ads may bypass our detection methods. We also leverage human reviews to verify system detections, which can also be initiated due to user reports. Detected ads are halted or restricted per our X Ads policies. As an additional control, Community Notes can be added to X ads, to help ensure the veracity of the advertiser's claims and allow access to more information. Further, since August 2023, X does not present ads to minors in the EU.

Finally, X does not allow political ads in the EU. A recent study by Global Witness on how social media platforms treat election disinformation, notably in ads, showed that X halted all ads and suspended the creation of accounts for violating X Ads policies, indicating a well-functioning policy and enforcement mechanism compared to VLOP peers.

Data related practices (Article 34(2)(e))

As discussed in the Y1 report, to embed privacy throughout our organisation, X conducts legal and privacy reviews for all new projects involving personal data. Our most recent privacy and security external audit, conducted in 2023 for the purpose of assessing the establishment, implementation, and maintenance of X's Privacy and Information Security program, showed that our Privacy and Information Security Program is comprehensive, provides sufficient coverage across all relevant privacy and information security domains, and is in alignment with the ISO 27701 and ISO 27001/02 frameworks, upon which the Program is based. The audit findings also stated that our privacy and information security risk management strategies, monitoring, and mitigation approach highlight that we continue to prioritise privacy and information security as foundational within the organisation. Please note, a 2024 privacy and security audit is currently underway. As in Y1, X conducted a dedicated risk assessment for data related practices and protection of personal data, under the systemic risk of negative effects to fundamental rights.

Cooperation with law enforcement

X cooperates with law enforcement authorities in the EU. Law enforcement can issue X content removal requests, information requests, emergency disclosure requests or data preservation requests. We have dedicated online guidelines and a portal available for law enforcement to use, which our teams monitor 24/7.
Requests from governments and law enforcement authorities are reviewed for compliance with international human rights and legal standards. Our DSA transparency reports provide more information around our collaboration with law enforcement in the EU.

Other continuous mitigation measures

At the end of our first DSA risk assessment cycle, our cross-functional risk assessment team considered our risk profile and identified areas where further mitigations could be explored. In our Y1 report, we outlined these measures, in compliance with Article 35(1). Many of these mitigations were described in the III. The DSA & X section above, and others require continuous efforts.

The following are the Article 35 mitigation measures enacted between August 2023 and June 2024:
● Our Civic Integrity policy was launched in mid September 2023, to address voter intimidation and suppression during elections (Article 35(1)(b));
● We continued to conduct comprehensive policy reviews, which has led to improvements in our policies. Notably, disassociating consensual Adult Content and Violent Media from the existing Sensitive Media [12] policy has helped with establishing clearer definitions and enforcement guidelines (Article 35(1)(b));
● We made changes to our global list of designated violent entities and expanded it, as part of our continuous work to carry out comprehensive assessments. We also increased proactive monitoring and enforcement for violent entities (Article 35(1)(f));
● We built out our Misuse of Reporting Features policy, which provides an objective, effective and transparent procedure to mitigate the potential misuse of X's reporting mechanisms by users of the X platform (Article 35(1)(b));
● Restricted reach labels can now be applied by content moderators to content that users report for violating the X Rules. This allows for more proportionate enforcement action on user reports as well as more consistent application (Article 35(1)(c));
● We continue to take proactive efforts to mitigate online abuse. These measures are tailored to global events and crises, and deployed as needed. Over the last year, this has included the use of heuristic rules for sporting events such as the Euros, as well as alerts for additional detection for targeting of politicians during the EU elections (Article 35(1)(f));
● We updated the reporting flow to ensure users take fewer clicks to report harassment. This eases the burden on the user to ensure a swift and seamless reporting experience (Article 35(1)(a));
● We improved our internal workflows to ensure more accurate routing of user reports to the correct teams for reviews – this has resulted in swiftly addressing any instances of harassment (Article 35(1)(c));
● We scaled the option for X Premium users to verify their accounts through identification with a 3rd party partner globally (Article 35(1)(a));
● We expanded Community Notes to media, and weekly updates are rolled out and communicated via our X handle (Article 35(1)(a));
● Designated trusted flaggers in the EU, alongside X Trusted Partners, are able to use our reporting channels and escalate content to us that will be reviewed in a prioritised, timely manner (Article 35(1)(g));
● We continued to enhance feedback mechanisms with post-incident reviews and regular syncs to ensure that enforcement aligns with the spirit and purpose of the policies (Article 35(1)(c));
● We have continued to enhance our privacy program with regular updates to leadership, as well as set up the process for privacy reviews on recommender systems (Article 35(1)(d)&(f));
● We have continued and expanded our engagements with civil society organisations. New engagements include involvement with Project Lantern, Jugendschutz, the Global Project Against Hate and Extremism, INACH and Search for Common Ground (Article 35(1)(f));
● We have supported the dissemination of media literacy campaigns that fostered the spread of reliable information on the electoral process. For instance, we supported the EDMO "Be elections smart" campaign and the ERGA campaign to prevent the spread of misleading information on elections. We also supported campaigns to stop violence against women (Article 35(1)(i)).

[12] Note that the Sensitive Media policy previously included consensual adult content and violent media within it. As such, allowing consensual adult content on the platform is not an enforcement change, as X has always permitted consensually produced and shared adult content.

A number of the mitigations are also in progress and require continuous work. These include:
● Our operational overhaul, where continuous work is being done to make our operations measurable and implement built-in feedback loops. So far, completed work includes the streamlining of user reports, improving efficiency of review processes, and updating guidelines to follow an objective, information-first approach (Article 35(1)(c));
● Reinforcing our internal monitoring and data extraction systems for risk assessments and transparency reports, to showcase trends and regional visualisations (Article 35(1)(f));
● We continue to expand our Global Government Affairs team, and increasing resources allocated to ensuring elections integrity is an ongoing process (Article 35(1)(f)).

V. X DSA Systemic Risk Governance Framework

Our risk governance framework, as described in our Y1 report, has been revised and improved at a regular cadence throughout the last year. In accordance with Article 34, we annually report on systemic risks with the involvement of a cross-functional team that comprises Safety, Product Engineering, Legal, Privacy & Data Protection, Global Government Affairs (GGA), the Independent Compliance Function, and the TIUC Board.
Our DSA Systemic Risk Governance Framework also foresees, in accordance with Article 34(1), the process for risk assessments prior to deploying functionalities that are likely to have a critical impact on the EU systemic risks.

Furthermore, in line with Article 41 and X's continuous risk management duties, the Independent Compliance Function, the DSA Leadership team, and the TIUC Board work together with X's cross-functional risk assessment team to ensure systemic integrity risks are properly identified, mitigated and managed. These frameworks collectively inform X leadership's understanding and commitment to meeting its Article 41 management body obligations, with respect to governance arrangements and overseeing, monitoring, and mitigating systemic risks under Articles 34 and 35.

The Independent Compliance Function Policy outlines the Independent Compliance Function's specific duties. Specifically, the Independent Compliance Function is involved in reviewing the methodology of the risk assessment, ensuring its adequacy and completeness, communicating any updates to the TIUC Board and other relevant leaders, and reviewing the results of the risk assessment. All key stakeholders are involved in ensuring that reasonable, effective and proportionate mitigations are implemented in respect of all systemic risks identified, in observance of fundamental rights.

X acknowledges that the Commission can require VLOPs to take action under Article 36 in cases where extraordinary circumstances lead to a serious threat to public security or public health in the Union or in significant parts of it. Our framework accordingly sets out a process for responding to requirements under the crisis response mechanism. The Independent Compliance Function Policy establishes the Independent Compliance Function's role in monitoring TIUC's compliance with commitments made under the codes of conduct or crisis protocols, when activated.

VI. Methodology

In accordance with DSA Article 34, we have conducted a comprehensive assessment that identifies, analyses and assesses any systemic risks to the Union stemming from the design or functioning of our service, its related systems (including algorithmic systems) and from the use made of our services.

In keeping with our legal obligations under the DSA, we take into consideration the following systemic risks: the dissemination of illegal content through our service; any actual and foreseeable negative effects to the exercise of fundamental rights; any actual or foreseeable negative effects in relation to civic discourse, electoral processes, and public security; and any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors, and serious negative consequences to the physical and mental well-being of individuals. The assessment addresses, in accordance with Article 34(2), our recommender systems, content moderation systems, applicable terms and conditions, systems for the selection and presenting of advertisements, and any of X's data related practices. The following recitals complementing Article 34 were also consulted: 12, 79, 80, 81, 82, 83, 84, 85, 89, and 90.

In 2023, we developed our DSA risk assessment methodology with reference to multiple existing frameworks, including, but not limited to, the UN Guiding Principles on Business and Human Rights as well as the DTSP Safe Assessments Framework, and adapted them to the unique environment at X.
As part of continuous risk management, our methodology was reviewed and updated to consider any new guidance on the topic, including Ofcom's consultation [13]. This update allowed us to create a more nuanced and evidence-driven assessment of risks. Our risk assessment reflects X's services at and around 30 June 2024.

[13] Ofcom consultation 'Protecting people from illegal harms online'.

A. Walkthrough

To streamline the risk assessment process further, we adopted a three-phased approach to the exercise.

Fig. 1: Three-phase process to risk assessment.

Phase I: Identification of systemic risks

The four systemic risks, as defined in Article 34(1), were assessed. We have streamlined the underlying assessments, recognising the overlaps between certain risk areas and in our approach towards mitigating them. As such, the assessment for the risk of sale of illegal goods and services was considered alongside the risks to consumer protection, and the assessment for the risk to the fundamental right of respect for private & family life was considered alongside gender-based violence.

Phase II: Assessment

This assessment of risk analyses (1) the inherent risk, then (2) the control strength and finally (3) the residual risk. The visual below indicates how residual risk acts as a function of inherent risk and control strength; how inherent risk is a function of probability and severity; and finally how severity can be decomposed into scope, scale, and remediability.

Fig. 2: The risk assessment methodology

Inherent risk

Inherent risk is understood as a function of probability and severity, where the assessment of severity considers scope of harm, scale of harm, and remediability of harm. The definition of 'scope' was updated to better reflect the gravity of harm when it impacts vulnerable groups, to reinforce our understanding of severity. Further, our definition of 'scale' was standardised to refer to the reach of the harmful content to users in the EU. This definition allowed teams to clearly identify how certain risks were disseminated in the Union, as well as delineate between the inherent risk of certain harms on the platform compared to how users experience them.

Fig. 3: User reports under TIUC Terms of Service and Rules

The visual above, depicting the volume of user reports between October 2023 and June 2024, can be used as a proxy to understand our users' perceptions of prevalence on the platform. The chart shows that the majority of user reports in the time period were for violations of the Hateful Conduct, Abuse and Harassment, and Violent Speech policies, which overlap with the illegal hate speech risk area. While this is not a perfect measure (e.g., users may not report content violative of different policies at the same rate of impressions), it can indicate that hate speech may reach users more than other risks, such as Child Sexual Exploitation content (overlapping with the risk of Child Sexual Abuse Material, 'CSAM') or violations of the Violent and Hateful Entities policy (overlapping with the risk of Terrorist Content). As such, the inherent risks were recalibrated to align with the standardised data. However, it is important to emphasise that this does not indicate that there was an increase in one systemic risk over another on the platform between Y1 and Y2; rather, our update to the methodology has provided more robust understandings of how such content manifests on the platform and attempts to understand to what extent it reaches our users.
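The report does not publish the numeric scales behind these definitions, but the chain in Fig. 2 can be illustrated with a small sketch: probability and severity combine into inherent risk, control strength then discounts inherent risk into residual risk, and residual scores map to the tiers described in the next subsection. The scales, the mean-based severity aggregation, the expression of control strength as a 0-1 discount, and the tier cut-offs below are all assumptions for illustration only.

```python
# Illustrative composition of the Fig. 2 scoring chain (assumed scales/weights).
from statistics import mean

def severity(scope: float, scale: float, remediability: float) -> float:
    """All inputs on an assumed 0-5 scale; higher = more severe harm."""
    return mean((scope, scale, remediability))

def inherent_risk(probability: float, sev: float) -> float:
    """Probability on an assumed 0-5 scale (5 = near-certain)."""
    return probability * sev

def residual_risk(inherent: float, control_strength: float) -> float:
    """control_strength in [0, 1]; 1.0 = fully effective controls."""
    return inherent * (1.0 - control_strength)

def tier(residual: float) -> int:
    # Assumed cut-offs mapping residual scores to the report's tiers:
    # Tier 1 = critical/high, Tier 2 = medium, Tier 3 = low/negligible.
    if residual >= 15:
        return 1
    if residual >= 7:
        return 2
    return 3

# Example: likely probability (4), moderate severity, strong controls (0.7).
sev = severity(scope=3, scale=4, remediability=2)      # 3.0
inh = inherent_risk(4, sev)                            # 12.0
print(tier(residual_risk(inh, control_strength=0.7)))  # residual 3.6 -> Tier 3
```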
Assessment of controls

As a platform that strives to protect its community, which includes respecting the right to Free Expression and Information, we have a number of controls in place that mitigate systemic risks on our Platform. We evaluate control measures on their operationality, effectiveness, proactivity, and improvement processes. We are continually improving our testing methods and effectiveness of controls.

Identification of residual risk and tiering

Residual risks are calculated by multiplying inherent risk scores by control strength scores. We assessed the residual risk by mapping our existing mitigation measures against the identified inherent risk to showcase how these controls can, and have, already mitigated the assessed risks. Regardless of the effectiveness of our controls, certain risks will remain, and it is a complex, ongoing and multistakeholder challenge to continuously evolve our control measures and respond to emerging threat patterns. In many of the assessed systemic risks, a negligible residual risk level is potentially impossible to reach without unnecessarily restricting the use of our service and infringing on our users' fundamental rights.

Finally, we assigned risks into different tiers according to their residual risk score. We consider critical or high residual risk areas to be Tier 1 risks, medium residual risk areas to be Tier 2 risks, and low or negligible residual risk areas to be Tier 3 risks. These tiers help us prioritise our approach to future mitigations and also provide insights on areas where our current efforts are effective.

For further information on the identification of systemic risks and a detailed methodology, please refer to the Y1 summary report as well as the Annex.

Phase III: Mitigation measures

Similar to our approach in Y1, based on the results of the risk assessment, we considered measures that could be improved on, or new measures that could be implemented to reduce the residual risk of harm. As a first step, our teams took stock, among other factors, of the implementation status of all existing measures, including Y1 Article 35 mitigations and any new controls implemented over the last year, to highlight areas where work has been completed or continuous efforts are ongoing. Then, the teams identified forward-looking mitigations they could explore in order to further reduce or manage the risk areas identified in DSA Year 2. This approach is in line with the core assertions of the DSA that mitigation measures need to be reasonable, proportionate and effective, acknowledge X's economic capacity, and give special consideration to the impact on freedom of expression.

As a platform dedicated to protecting our community while respecting free speech, we have implemented several controls to mitigate systemic risks. It is important to note that we continually update and improve these measures to adapt to our growing user base.

This methodology is specific to the DSA's second risk assessment under Article 34. The results of this assessment should not be used for other regulatory or litigation purposes. Inherent and residual risk scores should be understood in context and not in isolation.

B. Stakeholder engagement and consultation

We regularly engage with stakeholders and partners in the EU as part of our continuous risk mitigation cycle.
Leading up to this year's risk assessment, we consulted external and internal experts and sought input from our policy and cross-functional teams to develop a proportionate and adequate assessment, keeping in mind the special consideration to the right to freedom of expression.

Our internal stakeholder engagement included awareness sharing, training, consultations and reviews. Globally based teams involved in this process included Safety, Product Engineering, Legal, Privacy & Data Protection, Global Government Affairs (GGA), the Independent Compliance Function, and the TIUC Board. X management reviewed and approved the assessment strategy, and was actively involved in the decisions related to the risk management.

Our external stakeholder engagement – involving collaboration with governmental organisations, law enforcement authorities (LEAs), NGOs, and civil society organisations (CSOs) – takes multiple forms, including:
● Training: Our GGA team provides training sessions for government and non-government actors. This includes presentations on the safety features of the platform, targeted training for LEAs on the functionalities and systems available to them, as well as training for NGOs and CSOs on reporting illegal or harmful content;
● Ads credits: This is a way for government and non-government bodies to run campaigns on X via ads. X donates a certain number of free ads credits, which can be used by the entity to ensure that their campaign reaches users. This acts as a mitigation for the spread of misinformation by promoting posts by vetted organisations and by supporting the spread of media literacy among our users;
● Information exchange: This is useful for notifications about threats, such as LEAs highlighting evolving threats from bad actors and campaigns, notably in the context of elections, as these are societal and multiplatform risks. For example, information received from the French and German foreign ministries, following meetings prior to the elections, informed our Safety team's actions;
● Partnerships and integrations: Launching formalised partnerships and integrations with CSOs is a key mitigation to target cross-platform harms and improve proactivity;
● Combating serious crime: Engagements with EU LEAs (including Europol) have helped combat serious crime.

In response to key societal events over the last year, GGA has worked closely with governments and NGOs to mitigate systemic risks on the platform:
● Following the October 7th attacks, and the rise of antisemitic hate speech, GGA participated in meetings organised by the EU Internet Forum to prevent the spread of terrorist content related to the conflict, meetings by the Conseil Représentatif des Institutions juives de France (CRIF), the Délégation interministérielle à la lutte contre le racisme, l'antisémitisme et la haine anti-LGBT (DILCRAH), and other NGOs. X provided ad credit grants to CRIF, allowing them to run campaigns on X to combat hate speech and antisemitism in France. X also held two roundtables with members from the American Jewish Congress (AJC) and European Jewish Congress (EJC) in November 2023 and January 2024, in Brussels and in the US, to establish a cooperation for any content escalations and to exchange information on keywords, behaviours, and patterns that our moderation teams should be aware of;
● X assessed, planned for, and carried out enforcement around multiple elections in the EU this past year - most notably large-scale elections such as the EU elections and the French legislative elections.
○ In preparation for the EU elections, GGA proactively exchanged information with the European Commission, the European External Action Service, the European Parliament, and key authorities of the 27 EU Member States. X's work on protecting the EU elections was appreciated by the European Parliament's communication service and the EU's External Action Service (EEAS), as communication was effective during the election and escalations were promptly dealt with. X also supported media literacy campaigns with trusted partners and recognised experts in the EU, such as the European Parliament, the European Digital Media Observatory (EDMO), and the European Regulators Group for Audiovisual Media Services (ERGA), that aimed at providing reliable information on the EU elections. GGA provided crisis response contact points to DSCs, the European Commission, and the European Parliament. X also presented its elections approach to Coimisiún na Meán and other DSCs and provided an overview of X's election integrity efforts. Additionally, X gave a safety training to more than 60 EU-based NGOs on how to maximise use of safety tools on the platform and report hate speech related to elections. X also shipped product interventions in the form of home and search timeline prompts to direct people to key and official resources on how to register to vote, and reminders to vote in order to encourage civic participation, as well as election hashmojis.
○ In the context of France's legislative elections, GGA consulted X's NGO partners for updated lists of terms that could be considered racist or antisemitic in France. This was taken into account by internal teams in their moderation work during the elections. Viginum and the Quai d'Orsay were also able to submit leads on foreign influence and attempts to impact civic processes to X's Safety team. X also provided ads credits for media literacy campaigns in the context of the elections to Génération Numérique.
○ Ahead of the 2023 elections in Slovakia and Poland, X proactively met with the Slovak government, electoral commission, and law enforcement authorities in Bratislava, as well as the Polish government and electoral commission, to discuss the elections.
● Recognising that major sports events have resulted in increases in abuse and harassment on online platforms, during the 2024 UEFA European Football Championship, X participated in a proactive program with UEFA to monitor, report, and remedy cases of online abuse against players. X also collaborated to expedite key copyright reports throughout the games, and worked with UEFA to address possible violations on the platform. Following a training session with law enforcement bodies in Europe (including Europol, Interpol, and Italian, French, Spanish, German, and Irish bodies) where they requested more support during the Olympics, X increased its staff to respond to the projected increase in volume of reports during the games. Further, X also cooperated with the International Olympic Committee and e-Enfance to preserve the safety of athletes online. In this context, X also provided ads credits for a public health campaign (the "manger-bouger" campaign) to the Red Cross, in partnership with the French government, to encourage people to practise sport 30 minutes a day to stay in good health.

We also continuously engage with stakeholders to target the following:
● Risk of illegal content: In February 2024, X conducted operational meetings with NGOs on how to use X's EU illegal content form. This resulted in the correction of certain technical issues that were flagged by the NGOs.
X also participated in the EU InternetForum Ministerial on the impact of generative AI (GenAI) on terrorism and child sexualexploitation. Further, X took part in the Christchurch Summit as part of the ChristchurchCall for Action on Fighting Terrorism on the margins of the Paris Peace Forum; ● Risk of hate speech: In May 2024, X provided a training session for over 60 CSOs ononline hate speech and violent content, which was attended by DG JUST. X also remainsan industry member of the Online Hate Observatory in France. Further, X provides adscredits to INACH and Search for Common Ground for campaigns against hate speechand violence. Finally, X remains an industry member of the EU Code of Conduct onCountering Illegal Hate Speech and has recently signed its membership to the new Codeof Conduct +, which is becoming a voluntary code of conduct under DSA Article 45;. ● Risks to minors: X is an active participant in the Child Protection Laboratory and attendedmeetings organised by the Lab in the margins of the Paris Peace Forum. X also providesads credits to the InSafe Network, which works on the prevention of online childexploitation, and to Point de Contact and e-Enfance, which work in child protection. Thepartnership with e-Enfance was also for a campaign against harassment in schools. InJune 2024, X also provided ads credits to Cybersmile, in the context of StopCyberbullying Day; ● Risks of harassment and gender-based violence: X provided ad credits to The Sororityfor safety of women campaigns in France, as well as to GIP-ACYMA for a campaign oncyberharassment. For further information on other stakeholders we have continued to work with, please refer to ourY1 report. As we continue to develop our process and risk management cycle, we hope toexplore further stakeholder consultations to inform our risk assessment work. 21 VII. Summary of risk assessments Our teams referred to EU-specific data that extended from October 1 2023 to June 30 2024, andconsidered enforcement on TIUC Terms of Service and X Rules violations (from here on ‘X Rules’or ‘Rules’) 14 as well as on Article 16 DSA notices (referred to as ‘Article 16/DSA user reports’ fromhere on) to draw consistent conclusions across the risk assessment. Moving forward, as thetiming of the risk assessment cycles align with the DSA transparency report, teams will be able touse the transparency report for consistency. The visuals below were built using the October 2023to June 2024 data, and form the basis of our assessments. Enforcement actions: Probability To estimate probability, we looked into total enforcement actions15, both automated and manual,across policy areas that aligned with the underlying assessments. Fig.4: Total enforcement for TIUC Terms of Service and Rules violations This allowed us to understand the volume of violative content and behaviour that existed on theplatform and was actioned. As the pie chart shows, almost of enforcement action is takenunder the Platform Manipulation and Spam policy, indicating high volumes of inauthentic 15 Total enforcement data was calculated by taking the sum of total suspensions, total content removals,and an extrapolated total restricted reach labelled posts for the time period of October 2023 to June 2024.For restricted reach labelling, an estimate for the time period was used, as due to data retention issues,real figures are only available for an . As such, these values should be understood to beestimates. 
14 Note that while the Adult Content and Violent Content policies were rolled out prior to the completion of this assessment, there was not sufficient data to be pulled from these enforcement actions. As such, data from enforcement on Sensitive Media and Violent Speech has been used for this assessment.
Copyright, the inherent risk and the residual risk remain the same, at a low inherent risk level16. The following graph shows the inherent and residual risks for this area in Y2.
Fig.6: Comparison of inherent and residual risk for dissemination of illegal content
Inherent risks
Over the last year, political, social and cultural events have had an impact on the risk of illegal content being disseminated on X. For instance, the October 7th attacks resulted in an influx of harmful content being disseminated across social media platforms, particularly regarding terrorist content and hate speech. Further, the uptake in use of GenAI has also increased the likelihood of the creation and dissemination of AI-generated content.
As discussed in the Y1 Risk Assessment report, there is always an inherent risk of bad actors misusing platforms like X and their functionalities to disseminate illegal content. We recognise that our systems are not immune to manipulation. Furthermore, features such as posting/reposting, tagging, the ability to build anonymous profiles, expanding user networks, and live streaming may be misused by actors to disseminate illegal content.
Controls to mitigate the risk of dissemination of illegal content
Policies and enforcement (Article 35(1)(b))
16 Note that there is no separate inherent risk and residual risk marking in Figure 6, as the low inherent risk of this area has been mitigated by defined controls, and remains a low inherent risk.
Product-level controls (Article 35(1)(a))
While all social media platforms are vulnerable to being misused for the dissemination of illegal content, we recognise that certain product functionalities may pose higher inherent risks. X has a number of standing measures in place to combat this:
● X Live: In addition to safety detections such as media-based models for Adult Content and Child Safety (detection of the presence of a minor in live videos), there are a number of product-level protections in place to limit the risk of X Live being abused. These features allow the owner of a live video to block anyone who posts abusive or violent comments, and viewers to report abusive or violent comments, allowing a reactive human review to take place.
● Spaces: For Spaces, controls include proactive machine learning detections for toxic Space titles, toxic content in transcription text, and Spaces associated with users determined to be high risk, in addition to reports by speakers or listeners. Spaces detected or reported are sent to manual review by content moderators to determine if they contain any violative content (see the sketch after this list). Hosts and co-hosts of Spaces can block or remove abusive speakers from a Space.
● Communities: Posts in Communities are subject to our Safety post-level controls. In some cases, these controls are stronger in Communities. For example, Sensitive Media posts are hidden using machine learning if the Community did not correctly label itself as Adult Content or Violent/Graphic Content. Communities also have admins and moderators who enforce Community rules and use moderator tools to maintain healthy conversations. Furthermore, any X user, whether a member of the Community or not, can report potential violations to X.
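As an illustration of the detection-to-review routing described for Spaces above, the following is a minimal sketch; the signal names, classifier scores, and threshold are hypothetical stand-ins, not X's production systems.

```python
# Hypothetical sketch of routing a Space to manual review when proactive
# signals (title toxicity, transcript toxicity, high-risk host) or user
# reports trigger. The threshold and fields are illustrative only.
from dataclasses import dataclass

TOXICITY_THRESHOLD = 0.8  # assumed operating point

@dataclass
class SpaceSignals:
    title_toxicity: float        # score from a text classifier, 0..1
    transcript_toxicity: float   # score over transcription text, 0..1
    host_high_risk: bool         # account-level risk determination
    user_reports: int            # reports from speakers or listeners

def needs_manual_review(s: SpaceSignals) -> bool:
    """Return True if the Space should be queued for content moderators."""
    return (s.title_toxicity >= TOXICITY_THRESHOLD
            or s.transcript_toxicity >= TOXICITY_THRESHOLD
            or s.host_high_risk
            or s.user_reports > 0)
```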
Further illegal content controls (Article 35(1)(c)&(g))
Since August 2023, X has also operated its DSA illegal content report form as well as its appeals form.
Fig.7: Enforcement action in the context of DSA user reports
Between October 2023 and June 2024, X received approximately user reports in total, and actioned of them, with most of the actions being geo-blocking content (known as "country withheld content") and content removals. X assesses all user reports of illegal content against its own X Rules and, if there is no violation of the X Rules warranting removal of the content, X then assesses the content for illegality under the law(s) designated by the user in their report. X also continues its work regarding trusted flaggers in the EU, including to receive and action prioritised reports.
In X's second DSA transparency report, which covered the period of 21 October 2023 to 31 March 2024, we found that the median time to resolve illegal content reports was 2.7 hours. Furthermore, in the same time period, of a total of 238K illegal content reports received, 115K were found to be violative – approximately 48%. Following this, X received only 667 appeals against its decisions on illegal content, of which 190 were overturned. Measured against the roughly 115K enforcement decisions, this is around a 0.58% appeal rate, with only 0.17% of decisions overturned, indicating a high level of accuracy in X's determinations.
The DSA transparency report also provides insights into removal orders and information requests received from Member States' authorities. Between 21 October 2023 and 31 March 2024, we received 13 removal orders, from France, Italy, and Spain, concerning unsafe and/or illegal products and illegal or harmful speech. The median handle time to resolve these orders was 4.1 hours. With regards to information requests, we received 6K requests, with the most requests concerning illegal or harmful speech (from Germany), followed by risks to public security (from France). The median time to resolve these requests was 74 hours.
Dissemination of Terrorist Content
The inherent risk of dissemination of terrorist content on X arises from the potential for individuals or groups to use the platform to disseminate terrorist and extremist propaganda, recruit followers, facilitate or coordinate violent attacks, solicit funds from sympathisers, and praise, support, or glorify terror attacks.
External events and conflicts, such as the October 7th attacks and the ongoing conflict in Gaza, have increased the inherent risk of terrorist content on online platforms.
Probability
Between October 2023 and June 2024, X suspended accounts across its Violent and Hateful Entities and Violent Speech policies, and removed posts for the same policies. These suspensions amount to only of suspensions on the platform. While the number of content removals that violate these policies comes up to of the total post removals in the time range, it is worth noting that not all Violent Speech removals directly correlate to terrorist content. Based on this distinction, the probability of dissemination of terrorist content on the platform has been assessed to be likely.
Severity
● Scope: Acts of violence which may have been coordinated via online platforms, alongside the glorification of terror attacks, may result in psychological harm, potentially inducing anxiety, fear, or panic17.
Inauthentic accounts may rapidly disseminate terrorist and extremist information, and artificially amplify hashtags, trends or messages that align with their narratives. This leads to a very high scope of harm;
● Scale: Although the reach of this harm is comparatively lower when considered against violations related to hate speech, user reports for Violent and Hateful Entities and Violent Speech comprised almost of user reports between October 2023 and June 2024, indicating that the scale of this harm remains high;
● Remediability: Given that a remedy in this situation can rarely restore the individual who experienced the harm to their state before the impact, this risk has been assessed to be rarely remediable;
● Based on the assessments above, the dissemination of terrorist content on the platform is assessed to have a very high severity.
Inherent risk
Based on the probability of terrorist content existing on the platform, along with the very high severity, the dissemination of terrorist content on the platform is a critical inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.
17 Protecting people from illegal harm online, p.27.
Control strength
In addition to the global controls targeting illegal content described above, specific controls targeting this risk include:
● Article 35(1)(b) - Policies & enforcement: X's Violent and Hateful Entities, Perpetrators of Violent Attacks, and Violent Speech policies define the enforcement of terrorist content. Our Perpetrators of Violent Attacks policy is implemented following escalations;
● Article 35(1)(f) - Crisis response: Our crisis response protocol is led by our Strategic Response Team, which operates under a structured incident prioritisation plan and crisis assessment framework;
● Article 35(1)(f) - Global Internet Forum to Counter Terrorism (GIFCT): Through GIFCT, X is able to collaborate with industry to identify and resolve challenges, share trends and analysis, hear from civil society about their concerns, and engage with experts from academia and governments;
● Article 35(1)(f) - Christchurch Call: X is a signatory of the Christchurch Call, continues to collaborate with governments and civil society to fulfil the commitments made in 2019, and engages directly with the Christchurch Call's crisis protocol;
● Article 35(1)(f) - Screening prior to monetisation: X screens all verified Premium users enrolled in the revenue sharing program against lists of sanctioned entities, to ensure that X does not disburse payments to individuals on sanctions lists. If any users are confirmed to be sanctioned, X implements an indefinite restriction on their access to all monetisation features.
Over the last year, further controls have been implemented and existing controls improved upon, in alignment with Article 35, to target this risk:
● Article 35(1)(c) - Reporting of illegal content in the EU: Users in the EU can report posts through a separate DSA report form accessible to all EU users, not just registered platform users.
These reporting channels assist us in combatting content that violates X's Rules or is illegal in the EU;
● Article 35(1)(b) - Policies & enforcement: Following a policy audit, we have launched a Violent Content policy that improves upon the existing Violent Speech and Sensitive Media policies to enforce on content that threatens, incites, glorifies, or expresses desire for violence or harm, as well as visual material depicting graphic, violent, or excessively gory content, including sexual violence;
● Article 35(1)(f) - Proactive monitoring: The number of violent entities that are proactively monitored has increased;
● Article 35(1)(f) - Crisis response: Our crisis response was triggered following the October 7th attacks. For more information, please refer to Zoom-in: Israel/Hamas – Crisis Protocol.
Overall, the controls for this risk are assessed to be defined. The measures are formalised, documented, and repeatable. Quality assurance frameworks are being implemented and processes tend to be more proactive than reactive. They are well characterised and understood across all organisation verticals.
Tier 1 priority
Due to the critical inherent risk of this area, which is mitigated by controls of a defined nature, the residual risk of the dissemination of terrorist content remains a high risk item, making it a Tier 1 priority. While the control measures are robust, the nature of the risk itself requires vigilance. We will continue to evaluate these risks and our controls as they may continue to evolve. Our efforts to continue addressing residual risk are detailed in VIII. Considerations for further mitigations.
Dissemination of Illegal Hate Speech
Given that X is a public platform, we are sensitive to the inherent risks that hate speech can pose at both an individual and a societal level. Hate speech is often targeted towards people based on their protected characteristics, and can manifest on online platforms in multiple ways, including dehumanising speech, calls for discrimination, exclusionary speech, slurs, tropes and hateful stereotypes, and the celebration or glorification of hate crimes.
Features such as Spaces and Communities, anonymous profiles, direct messaging, and user tagging, as well as external events such as the October 7th attacks, can increase the inherent risk of hate speech on X.
Probability
Between October 2023 and June 2024, X suspended accounts across its Abuse and Harassment, Hateful Conduct and Violent Speech policies and removed posts for the same. Further, in the same time period, X took actions for Illegal or Harmful Speech following DSA user reports, which is the category with the highest enforcement within the illegal content reporting workflow. As such, we have concluded that the probability of dissemination of illegal hate speech content on the platform is almost certain.
Severity
● Scope: Acts of hate speech may lead to targeted abuse, harassment and hate speech based on protected characteristics. While there is some potential for this to result in psychological harm, research shows mixed results when trying to identify the correlation between online hateful language and specific offline crimes.18 The overall scope is considered to be moderate;
● Scale: User reports for Hateful Conduct, Abuse and Harassment, and Violent Speech together accounted for almost of user reports between October 2023 and June 2024, indicating the wide reach of this harm.
In the same period, X received user reports for Illegal or Harmful Speech, which is of all DSA reports, and the highest volume of user reports within the DSA categories. Hence, the scale of this harm is very high;
18 Cahill, M., Migacheve, K., Taylor, J., Williams, M., Burnap, P., Javed, A., Liu, H., Lu, H. and Sutherland, A., 2019. Understanding online hate speech as a motivator and predictor of crime.
● Remediability: If illegal hate speech is disseminated, the platform's redress mechanisms, such as suspending accounts and removing posts, can curb the dissemination. However, users who witness such illegal hate speech, especially those belonging to the targeted group, may experience some psychological distress. Despite this, platform action may mitigate most of the harm done by reducing the presence of the content. Therefore, remediability is considered to be likely remediable;
● Based on the assessments above, the severity of illegal hate speech is high.
Inherent risk
Based on the probability and severity of this risk, the dissemination of illegal hate speech on the platform is assessed to be a critical inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.
Control strength
In addition to the global controls targeting illegal content described above, specific controls targeting this risk include:
● Article 35(1)(b) - Policies & enforcement: X's Abuse and Harassment, Hateful Conduct, and Violent Speech policies are used to enforce on instances of harmful speech on the platform, and illegal hate speech is enforced upon following illegal content EU user reports;
● Article 35(1)(c) - Proactive moderation for violative speech19: X's automated content detection tools for X Rules violations can act on both text and media, and those detections may or may not overlap with illegal hate speech laws in the respective EU Member States. We use combinations of natural language processing models, image processing models, and other sophisticated machine learning methods, as well as heuristic-based rules, to detect potentially X Rules-violating content;
19 Note that automated content moderation tools enforce against our X Rules related to harmful or hateful speech. There can be an overlap between our Rules and the definitions of illegal hate speech.
● Article 35(1)(c) - Training: We actively provide ongoing training support and mandatory refresher requirements for our frontline moderators to educate them about different types of hate speech and how they may manifest on X;
● Article 35(1)(c) - Understanding context: Because "hate speech" is highly contextual and language-dependent, X hires content moderators with a variety of language skills to provide a comprehensive and thorough review of probable hate speech content reported by our users. Teams also maintain a live resource of non-English hate speech related terms and slurs in various European languages.
Over the last year, further controls have been implemented, in alignment with Article 35, that target this risk:
● Article 35(1)(c) - Reporting of illegal content in the EU: Users in the EU can report posts as illegal hate speech through a separate DSA report form accessible to all EU users, not just registered platform users.
These reporting channels assist us in combatting content that violates X's Rules or is illegal in the EU;
● Article 35(1)(c) - Improving moderation and tooling: On an ongoing basis, we add new slurs, harmful terms, and phrases to our operational handbook and proactive heuristics to ensure we are capturing the evolving landscape and use of language to target members of protected categories;
● Article 35(1)(f) - Partnerships: During the 2024 Euros, X participated in a proactive program with UEFA to monitor, report and remedy cases of online abuse. We were able to effectively review hundreds of posts throughout the tournament and take further action where needed;
● Article 35(1)(h) - Stakeholder engagement: X remains an industry member of the EU Code of Conduct on Countering Illegal Hate Speech and recently signed its membership to the new Code of Conduct+, which is becoming a voluntary code of conduct under DSA Article 45. X is also an industry member of the Online Hate Observatory in France. Further, X provides ads credits to INACH and Search for Common Ground for campaigns against hate speech and violence on the platform.
Overall, the control suite is managed, as the control methods are repeatable and operating effectively. Policies and guidelines are well defined, formalised and regularly managed. We provide clear guidelines to our enforcement teams and are constantly updating our policies and guidelines to reflect changes in trends. Processes are proactive, where possible.
Tier 2 priority
Due to the critical inherent risk of this area, which is mitigated by controls of a managed nature, the residual risk of the dissemination of illegal hate speech content is a medium risk item, making it a Tier 2 priority. We continue to evaluate these risks and evolve our controls. Our efforts to address residual risk are detailed in VIII. Considerations for further mitigations.
Dissemination of Child Sexual Abuse Material (CSAM)
CSAM is an ever-evolving issue and can manifest in a myriad of ways online. All users, but especially children, may be impacted by the production, distribution and consumption of CSAM, or they may be groomed for sexual exploitation. It is also possible for a minor to be coerced or directed to produce self-generated CSAM or indecent imagery. Features such as anonymous profiles, direct messaging and encrypted messaging can increase the likelihood of this risk manifesting on X. Inauthentic accounts create an additional vector of harm through CSAM spam that either redirects to off-platform content or uses CSAM terms/media to get users to click links or gain followers.
Over the last year, there has been no particular incident or external circumstance that has changed the risk profile for CSAM. X enforces on CSAM under its Child Sexual Exploitation policy, and maintains a zero tolerance policy towards CSAM, including sexually exploitative content, sexual solicitation, sex trafficking, and sexual child abuse.
Probability
CSAM is a highly adversarial area where bad actors have strong monetary incentives and are constantly probing our defences to try to redirect traffic off-site or, more rarely, post content directly on X. Between October 2023 and June 2024, X suspended accounts violating our Child Sexual Exploitation policy, such as by engaging with such content, and removed posts for the same policy. As this area considers both the risk of grooming as well as that of child sexual abuse, the probability ranges from likely to almost certain.
Severity
● Scope: The exploitation of minors coordinated through online platforms can cause severe physical and psychological harm. Additionally, sharing such content and enabling contact between perpetrators and victims can lead to psychological trauma and retraumatisation. This content can also impact adults who view it. This leads to a very high scope of harm from this risk on the platform;
● Scale: The reach of this harm is comparatively lower when considered against other types of violations, indicated by the number of user reports for Child Sexual Exploitation ( of all user reports). Therefore, this is assessed to have a moderate reach;
● Remediability: Since it is rarely possible to restore a minor's mental and physical well-being after the harm has taken place, this risk is considered not remediable;
● Based on the assessments above, the severity of CSAM content is high.
Inherent risk
Based on the probability and severity assessments, the dissemination of CSAM on the platform is assessed to be a high inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.
Control strength
In addition to the global controls targeting illegal content described above, specific controls targeting this risk include:
● Article 35(1)(b) - Policies & Enforcement: X's Child Safety policy captures its enforcement on Child Sexual Exploitation, which may include real media, text, illustrated, or computer-generated media - including GenAI media. In the majority of cases, users are immediately and permanently suspended;
● Article 35(1)(f) - Hash-sharing: Content surfacing for human review includes leveraging the hashes provided by NCMEC and industry partners. We scan media uploaded to X for matches to hashes of known CSAM sourced from NGOs, law enforcement and other platforms. Users posting known content are suspended and reported to NCMEC;
● Article 35(1)(j) - PhotoDNA and internal proprietary tools: A combination of technology solutions is used to surface accounts violating our Rules on Child Sexual Exploitation (which include CSAM);
● Article 35(1)(j) - Reporting to NCMEC: We continue to report accounts to NCMEC when appropriate;
● Article 35(1)(j) - Media risk scanning: as well as filter false positive hash matches. proactively identifies, based on the context of the conversation, possible discussions of child access, child sexual abuse, CSAM, self-generated CSAM, and sextortion. This allows our platform to identify, remove and report child sexual abuse material at scale;
● Article 35(1)(j) - Language coverage: Our media detection is language agnostic, which minimises this risk when considering CSA media;
● Article 35(1)(j) - Restricted high-risk terms: X maintains a list of related keywords and phrases that are blocked from Trending and/or are blocked entirely from search results. We have since added more than CSA keywords and phrases;
● Article 35(1)(j) - Controls in DMs: Content moderators are instructed to review DMs whenever there are signs of potential Child Sexual Exploitation violations happening in DMs (such as information from law enforcement or user profile signals), and media shared in DMs is proactively scanned for matches against known CSAM databases;
● Article 35(1)(a) - Controls in encrypted messaging: Currently, encrypted DMs are only available to users that have a Premium subscription, and Premium subscriptions are only available to users that have provided payment details.
Although encrypted DMs only include text and links, and not media, there is a potential risk of grooming behaviour and of sharing links to CSA material via encrypted DMs. Users can report messages for grooming/abuse, whereupon a cryptographically validated excerpt of the text is sent to the agent for review.
Over the last year, further controls have been implemented and existing controls improved upon, in alignment with Article 35, that target this risk:
● Article 35(1)(c) - Reporting of illegal content in the EU: Users in the EU can report posts through a separate DSA report form accessible to all EU users, not just registered platform users. These reporting channels assist us in combatting content that violates X's Rules or is illegal in the EU;
● Article 35(1)(f) - Proactive detection: Improvements to our hashing detection. We now have our own internal hash list that content moderators can add media to from within our review tools. This allows us to take down content that we have seen immediately, without waiting for it to make its way to shared hash libraries provided by NCMEC and industry partners.
Our blog also provides a comprehensive update on the work undertaken to tackle CSA on X. Overall, the controls for this risk are assessed to be managed. Our measures are well defined, formalised, and regularly managed, with repeatable quality assurance in place. There is an established process for integrating feedback to mitigate process deficiencies, and processes are proactive, where possible, for all forms of content and behaviour.
Tier 3 priority
Due to the high inherent risk of this area, which is mitigated by controls of a managed nature, the residual risk of the dissemination of CSAM is a low risk item, making it a Tier 3 priority. Nevertheless, we continue to improve our controls to protect minors and minimise harm done within the platform, especially since these bad actors are actively adversarial and constantly shift their behaviours. Our efforts to continue to address residual risk are detailed in VIII. Considerations for further mitigations.
Dissemination of IP & Copyright infringing content
X's Terms of Service explicitly require that users agree not to post content that is subject to copyright or other proprietary rights unless they have the right holder's permission or are otherwise legally entitled to share the content. However, users may - in violation of our policies - share content on our services without the appropriate legal permissions.
Recently, with users able to utilise GenAI to produce content that may resemble existing works, it has become easier for users to post content that may incorporate the intellectual property rights of creators, including, for example, copyright.
Probability
Between October 2023 and June 2024, X suspended accounts and removed posts for intellectual property infringements. Although this is small in scale compared to other violations, it is important to note that the features of the platform (posts, long form posts, media sharing, and long video sharing for X Premium users) mean that the uploading of IP-infringing content is a risk that is likely to occur regularly, making the probability possible.
Severity
● Scope: Intellectual property infringements result in remediable economic harm and do not necessarily target vulnerable groups, making the scope of such harm low;
● Scale: Between October 2023 and June 2024, X received reports for intellectual property infringements, which is around of the total user reports received in this time.
Further, this harm primarily impacts the poster and certain rights owners. As such, the scale is assessed to be low;
● Remediability: Since the content can be removed and X can take appropriate actions to restore intellectual property rights to the owners, it is likely that owners' rights can be restored before the infringement expands. Therefore, this risk is considered to be likely remediable;
● Based on the assessments above, the severity of this harm is assessed to be low.
Inherent risk
Based on the probability and severity of this harm, the inherent risk of disseminating content infringing on intellectual property rights, including, for example, copyright, is assessed to be a low inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.
Control strength
In addition to the global controls targeting illegal content described above, specific controls targeting this risk include:
● Article 35(1)(b) - Diligent enforcement: We ensure diligent and consistent enforcement of Copyright and Trademark policies as applied to content on the platform. If an X agent needs additional information when reviewing a case, they will send a message to the reporter asking for more information, thereby ensuring that the agent has all relevant data points when reviewing the report and committing a final action on the case;
● Article 35(1)(b) - Repeat Infringer: The Repeat Infringer sub-policy under X's Copyright policy takes valid retractions and counter reports into account;
● Article 35(1)(b) - Weekly policy enforcement calibration: The Copyright agent and Copyright legal teams meet on a weekly basis to review examples of the previous week's cases for noticeable trends, discuss unique cases to ensure a standardised process of review/action, and consider potential policy updates;
● Article 35(1)(c) - Notice-and-takedown process: X has a notice-and-takedown process for copyright issues that is actively enforced for both reporters and reported users;
● Article 35(1)(c) - Prioritised reports: ;
● Article 35(1)(c) - Escalations: X has built an internal escalation process, based on specific variables of the user and the content being reported, to enable additional review of content flagged as violative that may present added risk;
● Article 35(1)(c) - Preparation for risk events: X maintains a revolving, up-to-date calendar of future popular sporting/TV events to ensure sufficient agent coverage and support when applicable (i.e. additional agents during the peak hours of the event), in anticipation of potential spikes in copyright infringement caseload;
● Article 35(1)(f) - Expert consultations: X has copyright and trademark policy experts responsible for identifying abusers and making recommendations regarding trends in reported content and user behaviour, in addition to having legal guidance and consultations when applicable.
Over the past year, the above controls have been continuously monitored and managed to ensure that the risk continues to be effectively mitigated. Overall, the controls for this risk are assessed to be defined. Mitigation measures are sufficiently defined, documented, and regularly managed. There is a set process for integrating feedback to mitigate process deficiencies.
Tier 3 priority
Due to the low inherent risk of this area, which is mitigated by controls of a defined nature, the residual risk of the dissemination of IP-infringing content, including, for example, copyright infringements, remains a low risk item, making it a Tier 3 priority.
The control measures are robust; however, we continue to evaluate and improve them to ensure their continued effectiveness given modern trends, patterns, and user behaviour. Our efforts to continue to address residual risk are detailed in VIII. Considerations for further mitigations.
B. Exercise of fundamental rights
This section considers the risk of negative effects on the exercise of the following fundamental rights: freedom of expression, consumer protection, protection of minors, personal data, and other fundamental rights. The assessment of fundamental rights considers the rest of the rights enshrined in the Charter, paying special consideration to the right to life, human dignity, and equality, the right to liberty and security of a person, the right to non-discrimination, and freedom of peaceful assembly and association.
We believe that X is a platform where users can express their opinions and ideas freely without fear of censorship. Simultaneously, it is our shared responsibility to ensure the safety of our users from content that violates our Rules. Therefore, as we develop our enforcement strategies, we strive to balance the protection and freedom of our users.
The inherent risk for some of these areas increased this year, whereas improvements in our controls resulted in a reduction in the residual risk. The following graph shows the inherent and residual risks for this area in Y2.
Fig.8: Comparison of inherent and residual risk for fundamental rights
Inherent risks
As a digital public town square, users come to the platform every day to discuss and engage in conversation. However, there is always an inherent risk, on X as with other platforms, that actors or users can intentionally or unintentionally infringe on other individuals' fundamental rights. Although X as a platform is not directed at minors, minors over the age of 13 are allowed on the service, and there remains an inherent risk that they may be exposed to harmful content. Noting that minors are more vulnerable than adults, features such as DMs, user network expansion recommendations, a recommender feed and anonymous profiles may act to exacerbate certain risks. For more information on the inherent risk to fundamental rights, please refer to our Y1 report.
Controls to mitigate the risk to fundamental rights
Policies & enforcement (Article 35(1)(b))
X enforces on a range of violative content, spanning content that could hinder another user's free expression (such as abuse-related content); content that could harm consumers (such as the selling of drugs or firearms on the platform); suicide or self-harm related content; as well as content and conduct that could harm minors. With regards to personal data, X has robust internal policies to ensure that user data is protected, in compliance with the EU GDPR.
These policies are enforced using a wide range of measures, including content labelling, restrictions, removals, and account suspensions for severe violations or repeat infringements. Aligned with the DSA, we value diligent, objective, proportionate and reasonable procedures, offering users the right to appeal content moderation decisions. Our amnesty policy occasionally reinstates accounts suspended for a specific subset of low-severity violations (e.g., we would never provide amnesty for accounts suspended for Child Sexual Exploitation), balancing user safety with freedom of expression.
This aligns with the DSA's focus on avoiding unnecessary service restrictions and considering the impact on freedom of expression and information when making enforcement decisions. Requests from governments and law enforcement authorities are reviewed for compliance with international human rights and legal standards.
Zoom in: Transparent restricted reach labelling
We have invested in developing a broader range of remediations, with a particular focus on education, rehabilitation and deterrence, through implementing the freedom of speech, not reach approach - our enforcement philosophy which means, where appropriate, restricting the reach of posts that violate our policies by making the content less discoverable, using transparent restricted reach labels.
All content moderation systems are susceptible to certain inherent risks, as outlined in IV. X Risk Environment: Influencing Factors & Controls. As such, false positives and false negatives may occur with restricted reach labelling, which forms part of our suite of remediations alongside suspensions and content removals. In the case of fundamental rights, false positives - where an action is taken when it should not be - could result in unfair restrictions on non-violating users.
Expanding our enforcement options to include restricted reach labelling has allowed us to make progress in balancing the safety of users while protecting freedom of speech and being transparent in our enforcement actions. We strive to strike this balance by continuing to remove posts that harass, abuse or share hateful content directed towards specific individuals and protected groups, as we believe such targeted harassment violates individual fundamental freedoms.
Our community has provided valuable feedback to help us make meaningful changes to the accuracy of our label application, such as identifying instances where reach was not appropriately restricted and improving recognition of context in our detection. We proactively seek to prevent ads from appearing adjacent to content that we label. Users are also made aware of any restricted reach implemented against their content and are given the ability to submit an appeal if they disagree with our enforcement decision.
Regular studies conducted over the past year have shown consistent results when comparing impressions on content with restricted reach labels against healthy posts from the same author. The restricted reach posts have had a reduction in impressions, and analysis over time has shown the impression reduction consistently stays in this range.
Data from April 2024 to June 202420 shows that of the posts that received a restricted reach label, only were appealed. Less than half of these appeals were overturned, indicating that approximately of these labels were incorrectly applied. We continue to work towards improving the accuracy of our labelling, and we communicate to users when such labels are applied for X Rules violations to ensure that they can seek redress effectively.
20 Due to data retention issues, we are only able to extract data for restricted reach for . To show a comparison on real figures, all policies here are compared over the same time frame.
Fig. 9: Comparison of enforcement action for TIUC Terms of Service and Rules
As seen in the visual above, our restricted reach labelling is primarily used for Hateful Conduct. This is in line with our belief that users have the right to freedom of expression, and we continue to restrict the reach of toxic content to maintain a healthy community online. Nevertheless, we recognise that certain behaviours are unacceptable and use other enforcement measures in those cases.
In instances where content or conduct is considered abuse, harassment, or violence, we remove content or suspend accounts, depending on the severity of the violation. We have policies in place to take strong enforcement action against and remove illegal content, including CSAM and terrorist content. Production and publication of such content results in suspension from the platform following the first offence.
Product-level controls (Article 35(1)(a))
At a product level, X provides a suite of tools designed to help our users control what they see on X and what others can see about them on X, so that they can express themselves on X with confidence. Find out more about how to control your X experience here and our safety and security tools here. X continues to be a leading player in the industry by open-sourcing its recommendation algorithm to allow feedback from the community.
Controls for minors (Article 35(1)(j))
X is rated for ages 17+ in the iOS App Store, meaning that children with the correct date of birth in their App Store account will not be able to download the X app. We prohibit content jeopardising minors' safety. We use content labels and interstitials to minimise exposure to sensitive content. We have also implemented age-gating mechanisms and age-appropriate reporting channels for underage users.
For further information on our controls for this systemic risk, please refer to our Y1 report. The following sections provide insight into our assessments for each risk area related to fundamental rights and provide a summary of the results.
Freedom of expression
Abuse and harassment, hateful conduct, violent speech and privacy violations can result in risks to freedom of expression, through harms such as censorship resulting from enforcement of platform policies, as well as self-censorship by users who experience abuse and harassment on the platform. Further, inauthentic manipulation of information by government and non-state actors with the intention to control the information space, off-platform coordination to boost engagement and manipulate organic trends, as well as instances of mass reporting with the intention to trigger disproportionate enforcement, can increase this risk.
Probability
Between October 2023 and June 2024, X suspended accounts for violations related to the Abuse and Harassment, Hateful Conduct, and Violent Speech policies, accounting for of all suspensions. Additionally, X removed posts for the same violations, representing of all removed posts. Although not all these actions directly relate to freedom of expression, they may be understood as offences that could result in self-censorship or other kinds of suppression of speech. Consequently, the probability of this harm has been deemed almost certain.
Severity
● Scope: The scope is considered moderate as there is no clear risk of physical and/or psychological harm. However, this harm may impact vulnerable groups;
● Scale: Over the past year, X has made changes to its enforcement policies to ensure that mitigations are proportionate and that X is not unnecessarily suspending accounts. Between October 2023 and June 2024, excluding Child Sexual Exploitation and Platform Manipulation and Spam related violations21, account suspensions accounted for
21 For CSAM, given the severity of the violation and X's zero tolerance policy, suspensions are used.
For Platform Manipulation and Spam, given that it is a behaviour-related violation rather than a content-related violation, suspensions are used. Platform Manipulation and Spam suspensions are mainly directed at inauthentic accounts. As such, these two policies were excluded from the calculation.
● Article 35(1)(i) - Improved transparency: We aim to provide meaningful transparency on our enforcement policies and actions, including through notice to our users of our enforcement actions; when and how policies are updated, through our Help Centre articles and @Safety handle; how potential violations can be reported and reviewed; when enforcement actions happen; and pathways for user appeals. We produce global transparency reports, alongside biannual DSA transparency reports, that cover a wide range of metrics. We do this so that our stakeholders can understand how X's commitment to safety has evolved over time, and to shine a light on the areas where different governmental agencies may be infringing on users' rights to free expression;
● Article 35(1)(a) - Improvements to Community Notes: We have invested in tools such as Community Notes, which allow people on X to collectively add helpful, informative context to potentially misleading posts. This is an opportunity for our users to provide more information, rather than removing content that may be considered to be making a misleading claim. For information on improvements over the past year, refer to Zoom in: Community Notes;
● Article 35(1)(c) - Proportionate enforcement: Restricted reach labels (under our freedom of speech, not reach enforcement philosophy) can now be applied by content moderators following user reports. This allows for more proportionate enforcement action on user reports as well as more consistent application. X users have the right to express their opinions and ideas without fear of censorship.
Overall, the controls for this area are assessed to be managed. Our policies and enforcement protocols have been created in a manner that prioritises protecting physical safety as the most important consideration. We strive to strike an appropriate balance between safeguarding privacy and enabling free expression. The measures are well defined, documented, and regularly managed. There is an established process for integrating feedback to mitigate process deficiencies, and processes are proactive, where possible.
Tier 3 priority
Due to the medium inherent risk of this area, which is mitigated by controls of a managed nature, the residual risk to freedom of expression is a low risk item, making it a Tier 3 priority. However, we continuously evaluate the situation to adapt to changing risks.
Consumer protection
Risks to consumer protection can stem from the sale of illegal goods and services, counterfeits and brand impersonations, financial scams, and deceptive, misleading or harmful ads. Illegal goods and services can range from sales of drugs and firearms to sexual solicitation. Certain features, such as anonymity, the potential to reach or connect with wide audiences, direct messaging and Communities, can be leveraged by bad actors to increase this risk. Given that bad actors in this space are engaged in this behaviour with intent, and that tools, tactics, and procedures can change at any time, X's external facing policies are potentially susceptible to being intentionally circumvented.
content in ads.
When advertisers opt to promote their content using X Ads on the platform, their accounts and content undergo a review process to ensure quality and safety standards. We utilise a combination of machine learning algorithms and human reviews to verify that advertisers adhere to our advertising policies;
● Article 35(1)(c) - Proactive and reactive moderation on ads: X's Ads policies are enforced both proactively and reactively by human reviewers, who conduct proactive sweeps for violative content, review potentially violative content flagged by automated systems, and assess user and Article 16 reports;
● Article 35(1)(c) - Market-specific language resources for enforcement: For language-related issues that come up during responses to reported content, content moderators have guidelines they can follow to provide answers in line with linguistic and cultural standards and norms;
● Article 35(1)(a) - Consumer protection features: X has features that aim to protect users from harm, such as authenticity challenges;
● Article 35(1)(c) - Restricted reach, rate limiting and unsafe URL detection: These features work to reduce the impact of misleading activity, including malicious URLs, on the platform by reducing impressions and limiting user exposure to such content;
● Article 35(1)(c) - Reporting mechanisms for ads: Users can report ads for deceptive and fraudulent content and illegal products and services through in-app reporting or the X Ads web form;
● Article 35(1)(c) - Reporting of illegal content in the EU: Users in the EU can report posts through a separate DSA report form accessible to all EU users, not just registered platform users. These reporting channels assist us in combating content that violates X's Rules or is illegal in the EU;
● Article 35(1)(c) - Country-withheld content: Following a DSA user report in the EU, if the reported content is not a violation of our Rules but is illegal in a certain jurisdiction, the content may be withheld in the relevant jurisdiction, limiting its reach (see the sketch at the end of this subsection).
Over the last year, further controls have been implemented and existing controls improved upon, in alignment with Article 35, to target this risk:
● Article 35(1)(a) - Community Notes: Users can help provide context and warnings to other users if they identify misleading information or third-party links that may be unsafe, including those that may attempt to scam users. For information on improvements over the past year, refer to Zoom in: Community Notes;
● Article 35(1)(f) - Interdepartmental cooperation: Safety has established a cooperation with the Global Content Partnerships team (the X team that acts as consultants for major publishers on the platform) to initiate tickets when high-profile events that will likely include digital counterfeit campaigns are coming up.
Overall, the strength of the controls for this risk is assessed to be managed. For counterfeit and financial scam violations, there are functioning enforcement capabilities, with well defined and documented policies. Additionally, there are avenues to escalate edge cases and adjust training materials and policies based on those escalations. There is an established process for integrating feedback. Based on operations feedback, how to action the selling of counterfeit currencies was included in training materials as a likely scenario to take place on the platform.
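To make the two-step review behind country-withheld content concrete (assess against the X Rules first, then against the law(s) designated in the report), here is a minimal sketch; the function and enum names are hypothetical and the logic is deliberately simplified.

```python
# Hypothetical sketch of the two-step review applied to EU illegal content
# reports: a Rules violation leads to global action, while content that is
# only illegal under the designated local law is withheld in that jurisdiction.
from enum import Enum

class Outcome(Enum):
    GLOBAL_REMOVAL = "remove content everywhere"
    COUNTRY_WITHHELD = "withhold content in the reporting jurisdiction"
    NO_ACTION = "no action"

def review_dsa_report(violates_x_rules: bool,
                      illegal_under_designated_law: bool) -> Outcome:
    """Assess a report against the X Rules first, then the designated law."""
    if violates_x_rules:
        return Outcome.GLOBAL_REMOVAL      # the Rules apply worldwide
    if illegal_under_designated_law:
        return Outcome.COUNTRY_WITHHELD    # geo-blocked ("country withheld content")
    return Outcome.NO_ACTION
```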
Tier 3 priority
Due to the high inherent risk of this area, which is mitigated by controls of a managed nature, the residual risk to consumer protection is a low risk item, making it a Tier 3 priority. Consumer protection necessitates constant supervision and adaptable measures due to the evolving nature of the offence. Our ongoing efforts to address the residual risk are outlined in VIII. Considerations for further mitigations.
Protection of minors
X is not a service that is directed primarily at children, and it is listed as an app for 17+ on the iOS App Store, meaning known minors will not be able to download the app. According to our Terms of Service, an individual must be at least 13 years old to create an account, and a date of birth is required to access certain content. For users who are over the age of 13 but under the age of GDPR consent in the Member State where they reside, X has built an additional workflow permitting such users to create an account with their parent or guardian's consent (a simplified sketch of this workflow follows the severity assessment below).
However, X is a real-time global information service, with some users (including minors) accessing the platform without logging into an account or by circumventing the age gate with false information. For online platforms, there are inherent risks that minors become exposed to harmful and violative content, including bullying, harassment, non-sexual abuse, graphic violent and/or sexual content, as well as content about self-harm, eating disorders, and suicide. Over the last year, there has been no particular incident that has changed the risk profile of this harm.
Probability
As of metrics from August 2024, X's internal figures showed that 0.98% of EU-based X account holders were minors. As a result of mandatory age gates, the proportion of EU account holders without an age attributed to their account stands at 6.3%.22 However, given that this is based on self-declaration, it is possible that the number of minors on the platform is higher. Between October 2023 and June 2024, X actioned user reports for 'Protection of Minors' under the DSA illegal content reporting. This comes to around of all actions taken following DSA illegal content reports. Based on this, the probability of minors encountering such content has been assessed as possible.
22 Based on logged-in average monthly active recipients of the service ("AMARS"). This is directionally aligned with external figures, which suggest that minors aged 13-17 represent 2.4% of global account holders.
Severity
● Scope: As minors are a vulnerable group, they are more likely to experience any negative or potentially harmful content or behaviour on the platform in a more severe manner. Exposure to content encouraging or promoting self-harm, violent or graphic media, and non-sexual abuse may result in physical harm and psychological distress. Self-harm content, even if it is recovery-focused, may be upsetting or triggering. As such, the scope of harm is assessed to be high;
● Scale: Between October 2023 and June 2024, under the DSA illegal content reporting, X received reports for Protection of Minors, which constitutes of the total reports under Article 16. The reach of this item is comparatively lower as children are not X's primary demographic. Therefore, the scale is assessed to be moderate;
● Remediability: Given that a remedy in this situation typically cannot restore the minor to their previous state, this risk has been assessed as not remediable;
● Based on the assessments above, the severity of the risk is high.
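As referenced above, the following is a simplified sketch of the signup age workflow: under-13 signups are rejected, users between 13 and the local GDPR consent age go through the parent/guardian consent workflow, and older users proceed. The function name is hypothetical; GDPR Article 8 consent ages vary by Member State between 13 and 16.

```python
# Hypothetical sketch of the account-creation age gate described above.
def signup_decision(age: int, gdpr_consent_age: int,
                    has_guardian_consent: bool) -> str:
    if age < 13:
        return "reject"                      # below the Terms of Service minimum
    if age < gdpr_consent_age:
        # additional workflow for 13+ users under the local GDPR consent age
        return "allow" if has_guardian_consent else "require guardian consent"
    return "allow"
```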
Inherent risk
Given the probability and severity of this harm, this risk is assessed to have a medium inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.
Control strength
In addition to the global controls to protect fundamental rights described above, specific controls targeting this risk include:
● Article 35(1)(b) - Comprehensive abuse policies: Our dedicated Child Safety policy covers content and behaviour that impacts minors the most, such as Child Sexual Exploitation, Physical Child Abuse Media, and Media of Minors in Physical Altercations. Policies protecting rights to privacy, and prohibitions on content that encourages suicide and self-harm, are also applicable to the protection of minors;
● Article 35(1)(a) - Default settings for logged-out users: Permitting users to access X content without logging into an account is fundamental to X's mission to help ensure freedom of expression and access to information for its users. To mitigate risks stemming from this, X sets high privacy, safety and security settings for users who access X without an account, including the inability to view sensitive media and only displaying ads that have been tagged as "family safe". Attempting to view non-verified accounts, or accounts under a threshold level of engagement, while logged out redirects users to the login screen. Content that can be accessed is age-gated with a non-dismissable interstitial if it has been labelled as sensitive by the account or our systems;
● Article 35(1)(a) - Default privacy and security settings: All new EU users signing up to the service for the first time have personalisation turned off by default (including personalisation of ads, personalisation based on inferred identity, and personalisation based on places you've been). All users also have direct messages defaulted to protected, meaning that only accounts they follow can message them;
● Article 35(1)(a) - Encrypted DMs: Encrypted DMs are only available to X Premium users, who mainly have a paid subscription, meaning that minors are less likely to access them. Furthermore, encrypted DMs can only include text and links; media and other attachments are not yet supported, meaning that they are less likely to be used for sextortion or other behaviour that is harmful to minors;
● Article 35(1)(j) - Security features for minors: We age-gate sensitive content to limit exposure for minors and allow users to report suspected underage accounts. We also have parental reporting, minimum age, and GDPR consent features that apply to minors;
● Article 35(1)(d) - Restricted recommendations: X implements eligibility requirements before it recommends content and accounts. Neither the Following nor the For You timeline permits sensitive content or inappropriate advertising to be surfaced for accounts of known minors;
● Article 35(1)(j) - Age inference: For user accounts without an assigned age, age is inferred to help prevent minors seeing inappropriate ads;
● Article 35(1)(i) - Support messages: X surfaces safety resources and support messages when users search for content related to self-harm and suicide.
Over the last year, further controls have been implemented and existing controls improved upon, in alignment with Article 35, to target this risk:
● Article 35(1)(e) - Limits to targeted advertisement: As of August 2023, X does not serve any ads to users under the age of 18 in the EU. Logged-out users are also not served ads.
Overall, the control strength is assessed to be managed.
We have sufficiently comprehensive control measures. There are usable reporting mechanisms, enforcement teams and proactive efforts for all X Rules at work here. X's policies and enforcement guidelines are clearly defined and thorough. Policies address key risks that harmful content poses on the platform, and have been drafted after careful deliberation with internal and external stakeholders. We provide clear guidelines to our enforcement teams when it comes to the content review process. This area (similar to all other policies) often requires further clarification from our agents, and we are constantly updating our policies and enforcement guidelines to reflect changes in trends.
Tier 3 priority
Due to the medium inherent risk of this area, which is mitigated by controls of a managed nature, the residual risk to the protection of minors is a low risk item, making it a Tier 3 priority. As with other risks, this risk necessitates constant supervision and adaptable measures. Our ongoing efforts to address the residual risk are outlined in VIII. Considerations for further mitigations.
Protection of personal data
X is a platform that aims to foster communication all around the world. As a result, it processes personal data. This may entail potential inherent risks for the protection of personal data and the exercise of the right to privacy. These could include, for example, users' personal data being processed in ways that exceed their expectations, private information being published on the platform without proper authorisation, or X being subject to security incidents that could potentially expose users' private information.
A failure to maintain products, tools, and processes that promote user privacy and enable users to exercise their privacy rights could create inherent risks for this fundamental right. Over the last year, there has been no particular incident that has changed the risk profile of this harm.
Probability
Between October 2023 and June 2024, X suspended accounts for violations relating to Private Information and Media and removed posts for the same policy. This amounts to of all removed posts. Additionally, between October 2023 and June 2024, X has conducted privacy reviews and data protection impact assessments (DPIAs) to ensure privacy and data protection are upheld across the platform. Without any privacy and data protection controls, the probability of this harm is assessed to be likely.
Severity
● Scope: Without effective risk management, data could be processed in a manner that does not ensure appropriate security and confidentiality, leading to data loss and/or a data breach. This would lead to critical privacy risks and have a significant impact on users and their trust in X to handle their personal data, which could result in psychological distress. As such, the scope of the risk is determined to be high;
● Scale: Between October 2023 and June 2024, X received reports for Data Protection & Privacy Violations through the DSA illegal content reporting channel, and around reports for violations of the Private Information and Media policy. These correlate to of all DSA reports, and of all policy violation reports respectively. As such, the reach of harm is assessed to be moderate;
● Remediability: Given that a remedy in this situation can often restore the individual who experienced the harm to their state before the impact, this has been assessed to be possibly remediable;
● Based on the assessments above, the severity of the risk to personal data is high.
Inherent risk

Based on the probability and severity assessments, the risk to the protection of personal data has a high inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.

Control strength

In addition to the global controls to protect fundamental rights described above, specific controls targeting this risk include:

● Article 35(1)(b) & (d) - Compliance with privacy laws: We uphold user rights in compliance with EU privacy laws and have a comprehensive privacy, data protection and security program. In compliance with both the GDPR and the DSA, our privacy program ensures that recommender system parameters - and how to modify them - are clearly explained to users;

● Article 35(1)(f) - Data Protection Impact Assessment (DPIA): In instances where a project is deemed high-risk to the rights and freedoms of individuals, X conducts a DPIA, which requires completion and sign-off from the Global Data Protection Officer (DPO) prior to its launch;

● Article 35(1)(f) - Regular privacy audits: We conduct risk assessments and biannual external audits on our privacy and data protection related control environment;

● Article 35(1)(e) - Ads: X does not present ads to users based on profiling using special categories of data;

● Article 35(1)(d) - Privacy reviews on recommender systems: We have continued to conduct privacy reviews to ensure that recommender systems remain compliant with personal data requirements.

Over the past year, the above controls have been continuously monitored and managed to ensure that the risk continues to be effectively mitigated. Notably, our 2023 privacy audit found that our Privacy and Information Security Program is comprehensive in that it provides sufficient coverage across all relevant privacy and information security domains and is in alignment with the ISO 27701 and ISO 27001/02 frameworks upon which the Program is based.

Overall, the control strength is assessed to be managed. X maintains a comprehensive and effective set of technical, administrative and operational privacy and data protection controls. There is an established process for integrating feedback, and processes are proactive where possible.

Tier 3 priority

Due to the high inherent risk of this area, which is mitigated by controls of a managed nature, the residual risk to protection of personal data is a low risk item, making it a Tier 3 priority. Nevertheless, we will continue to evaluate these risks and our controls as they may continue to evolve. Our efforts to continue to address residual risk are detailed in VIII. Considerations for further mitigations.

Other fundamental rights

Content moderation on online platforms can inadvertently replicate and amplify offline biases and patterns of discrimination based on protected characteristics. Additionally, exposure to content related to self-harm, violence and its glorification may cause psychological harm, impacting the right to life, human dignity, and equality. Features of the platform can be leveraged to infringe on these rights, including mass reporting of accounts to trigger disproportionate enforcement, as well as using direct messaging to harass users.

Following the October 7th attacks, there was an increase in antisemitic, Islamophobic, and anti-Arab sentiments worldwide. Such content has the potential to infringe on the right to non-discrimination of users. While all fundamental rights can be considered equal, we are aware that these rights may sometimes be in conflict.
In such cases, we prioritise protecting physical safety as the most important consideration and strive to strike an appropriate balance between safeguarding privacy and enabling free expression.

In alignment with the fundamental rights considered, this assessment pays particular consideration to the risks of encouraging or assisting suicide; harms related to unlawful immigration and human trafficking; and harassment, stalking, threats, and abuse offences.

Probability

Between October 2023 and June 2024, X suspended ■ accounts and removed ■ posts for violations related to the Abuse and Harassment, Hateful Conduct, Suicide and Self-harm, Violent and Hateful Entities, and Deceased Individuals policies. While these violations also overlap with other risk areas, they may directly or indirectly pose a risk to users' fundamental rights.

Severity

● Scope: The possible harms of the sub-risks included here encompass physical, psychological, and societal harms. For example, advocacy of hatred could incite hostility and violence, resulting in coordinated physical or psychological harm on the platform. Content shared on X may exacerbate, encourage or coordinate discrimination against specific individuals, vulnerable groups or businesses. Exposure to such discriminatory content can indirectly harm an individual's physical or psychological safety. As such, the scope of harm ranges from high to very high;

● Scale: Between October 2023 and June 2024, X received more than ■ user reports for Abuse and Harassment (■ of all reports), indicating the high reach of this content. However, of all reports received in this time, only around ■ related to Suicide and Self-harm. As such, the scale of harm here ranges from low to high;

● Remediability: While for certain sub-risks, such as online harassment, the victim may be able to be restored to their state before impact, for more serious offences, especially those causing physical or psychological harm, this is not possible. As such, the remediability of this harm ranges from likely remediable to not remediable;

● Based on the assessments above, the severity of the risk to fundamental rights is high.

Inherent risk

Based on the probability and severity assessments, the inherent risk for this harm is medium, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.

Control strength

In addition to the global controls to protect fundamental rights described above, specific controls targeting this risk include:

● Article 35(1)(b) - Policies & enforcement: X has a range of policies that relate to protecting fundamental rights, including but not limited to Abuse and Harassment, Hateful Conduct, and Suicide and Self-Harm. These policy domains are considerably complex, often requiring further clarification from our content moderators. The policy, operations and product functions work together to simplify and train our content moderators to ensure we're taking action accurately and in a consistent manner;

● Article 35(1)(c) - Doxxing: X takes proactive measures against doxxing – this includes a heuristic rule that continuously searches for potential instances of doxxing in content, such as addresses and phone numbers, that are shared with abusive intent. The heuristic rule surfaces these for review and action globally (a simplified sketch of this kind of rule follows this list). Our escalations team also proactively searches for violative content on the platform with certain keywords and hashtags within a given period;

● Article 35(1)(e) - Ads: X ensures that ads are not presented to users based on profiling using special categories. We also provide transparency about how ads are selected and delivered to users with our "why this ad?" functionality.
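As referenced in the doxxing bullet above, a heuristic of this kind can be approximated as pattern matching over post text combined with an intent signal. The sketch below is a deliberately simplified stand-in: the regular expressions, phrase list and function name are illustrative assumptions, not X's production rule, which would rely on far richer, locale-aware signals.

```python
import re

# Simplified, illustrative patterns for private contact information.
PHONE_RE   = re.compile(r"\+?\d[\d\s().-]{7,}\d")
ADDRESS_RE = re.compile(r"\b\d{1,5}\s+\w+\s+(street|st|avenue|ave|road|rd)\b", re.I)

# Toy list of phrases suggesting abusive intent; a real system would use
# a trained classifier and contextual features instead.
ABUSIVE_INTENT = ("go find", "pay them a visit", "make them pay",
                  "everyone knows where")

def flag_for_doxxing_review(text: str) -> bool:
    """Return True if the post should be surfaced to a human reviewer."""
    lowered = text.lower()
    has_pii = bool(PHONE_RE.search(text) or ADDRESS_RE.search(text))
    has_intent = any(phrase in lowered for phrase in ABUSIVE_INTENT)
    # Sharing contact details alone is not actionable; the heuristic looks
    # for private information combined with signals of abusive intent.
    return has_pii and has_intent
```

Note that the rule only queues content for review; the enforcement decision itself remains with human agents, consistent with the description above.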
Over the last year, further controls have been implemented and existing controls improved upon, in alignment with Article 35, to target this risk:

● Article 35(1)(c) - Proactive enforcement: We continue to take proactive efforts to mitigate online harassment. These measures are tailored to global events and crises, and deployed as needed. Over the last year, this has included the use of heuristic rules for sporting events, as well as alerts for additional detection of the targeting of politicians during the EU elections;

● Article 35(1)(f) - Partnerships: Our collaboration with UEFA during Euro 2024, which was a mitigation for illegal hate speech, also acts as a mitigation to protect other fundamental rights, such as the right to non-discrimination;

● Article 35(1)(a) - Streamlined reporting flows: We have updated the reporting flow so that users take fewer clicks to report harassment. This eases the burden on the user and ensures a swift and seamless reporting experience;

● Article 35(1)(c) - Improved moderation workflows: We have improved our internal workflows to ensure more accurate routing of user reports to the correct teams for review – this has resulted in instances of harassment being addressed more swiftly.

Overall, mitigation measures are assessed to be defined. Measures are documented, formalised and repeatable. Processes are proactive, well characterised and understood across all organisation verticals. The rights included in this assessment cover a wide range of issues and policy areas. We believe that we have the necessary and proportionate policies and enforcement protocols in place to address the risks and impact.

Tier 3 priority

Due to the medium inherent risk of this area, which is mitigated by controls of a defined nature, the residual risk to other fundamental rights is a low risk item, making it a Tier 3 priority. However, we continually monitor the situation and adjust our controls as needed. Our ongoing efforts to address residual risks are detailed in VIII. Considerations for further mitigations.

C. Democratic processes, civic discourse, electoral processes, and public security

This systemic risk area considers the risk of negative impact to democratic processes, civic discourse, electoral processes and public security.

X provides opportunities for participation in democratic processes by allowing people to access information, express their views and organise within civil society. X also enables people to directly engage on important topics with their elected representatives, candidates, and fellow citizens. Nonetheless, the influence of social media platforms also means that they may pose risks if they affect public trust in institutions, or the ability of people to participate freely in the public square, organise peacefully, or generally exercise their fundamental and political rights. These values and capabilities are the bedrock of any democracy. Broadly defined, the public security risk includes threats that have the potential to undermine social order, disrupt civil harmony, and compromise the safety of individuals and communities.
That said, the relationship between harmful messaging on the platform and offline action is complex, and causation is difficult to ascertain.

In comparison to Y1, the inherent risk for this area has not changed; however, the residual risk has decreased as a result of improvements in the control strength. The following graph shows the inherent and residual risks for this area in Y2.

Fig. 10: Comparison of inherent and residual risk for democratic processes, civic discourse, electoral processes, and public security

Inherent risks

This year has seen key elections in Europe - notably the EU elections, along with other national elections such as the French legislative elections. The ongoing Israel-Hamas conflict following the October 7th attacks has also raised the likelihood of threats to public security in Europe. As discussed in our Y1 report, such external events may result in bad actors misusing X to spread false or misleading information, as well as conducting coordinated attacks to target public security. The risk environment is heightened by the potential for echo chambers to form, where users may be exposed to information that aligns with their existing beliefs, which can reinforce biases and may stifle healthy debate.

Controls to mitigate the risk to democratic processes, civic discourse, electoral processes, and public security

Policies & enforcement (Article 35(1)(b))

As discussed in our Y1 report, we have robust policies with dedicated teams in place to prohibit harmful behaviours. To learn more about how our Synthetic and Manipulated Media and our Violent Speech policies mitigate this risk, please refer to the Y1 report. In August 2023, we launched our updated Civic Integrity policy, which addresses four categories of misleading behaviour and content: (i) misleading information about how to participate in an election or other civic process, (ii) suppression, (iii) intimidation, and (iv) false or misleading affiliation. Posts enforced under this policy receive a label informing both authors and viewers that the post's visibility has been restricted. This enforcement makes the post less discoverable on X - excluding it from search results, trends, recommended notifications, and the For You and Following timelines - and downranks the post in replies (a schematic sketch of this kind of visibility filtering follows this subsection). This policy is activated leading up to, during, and after an election for a certain period of time. Any attempt to undermine the integrity of civic participation undermines our core tenets of freedom of expression, and as a result, we use labels to inform users that the content is misleading.

As mentioned in the section dedicated to our risk environment and controls, we also launched a Violent Content policy in May 2024, which consolidates two major policies: Violent Speech and Violent Media. Through this policy, X allows users to share graphic media if it is properly labelled, not prominently displayed, and not excessively gory or depicting sexual violence. Enforcement taken under this policy is proportionate to the harm. For example, violent threats, wishes of harm, incitement of violence, glorification of violence, violent sexual conduct, gratuitous gore, bestiality and necrophilia are removed from the platform, and further violations may result in the account being suspended or placed on read-only mode. Lower severity harms, such as minor or non-deliberate instances of violent speech, depictions of physical fights, or bodily fluids, are labelled and consequently have their reach restricted, ensuring that users who do not wish to see such content can avoid it and that minors are not exposed to it. Any attempt to undermine the integrity of civic participation through violent speech also undermines our core tenets of freedom of expression, and as a result, we action this content.
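The sketch below illustrates the kind of visibility filtering described above for labelled posts: the post remains on the platform but is excluded from amplification surfaces and downranked in replies. The surface names, label key and penalty weight are hypothetical, not X's actual configuration.

```python
from dataclasses import dataclass, field

# Surfaces from which a Civic Integrity-labelled post is excluded,
# per the enforcement description above.
RESTRICTED_SURFACES = {"search", "trends", "recommended_notifications",
                       "for_you_timeline", "following_timeline"}

@dataclass
class Post:
    id: str
    labels: set = field(default_factory=set)

def eligible_for_surface(post: Post, surface: str) -> bool:
    """A labelled post stays on the platform but is not amplified."""
    if "civic_integrity" in post.labels and surface in RESTRICTED_SURFACES:
        return False
    return True

def reply_rank_score(post: Post, base_score: float) -> float:
    """Downrank labelled posts in reply threads instead of removing them."""
    penalty = 0.5 if "civic_integrity" in post.labels else 1.0  # illustrative weight
    return base_score * penalty
```

The design point is that enforcement is graduated: rather than a binary keep/remove decision, the label reduces reach while the author and viewers are informed of the restriction.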
Product-level controls (Article 35(1)(a))

At a product level, the Community Notes function remains a leading mitigation for the risk of misinformation, relating to both public security and civic integrity. For more information, please refer to the Zoom in: Community Notes. Additionally, we have product interventions to direct people to key resources on how to register to vote, and reminders to vote in order to encourage civic participation. These take the form of election prompts on the home timeline and search timeline, which display official voting information, along with hashmojis on common election hashtags.

At the time of this report, X does not allow political ads in the EU. The effectiveness of this measure has been evidenced by a study conducted by the organisation Global Witness, which submitted test ads containing false information about polling stations, incorrect ways to vote, and incitement to violence against immigrant voters to the platform. On X, all ads were halted, and account-level action was taken due to repeat offences.23

Partnerships (Article 35(1)(f))

As part of a multi-risk environment, we recognise the importance of collaborating with partners and sharing information to take down bad actors and threats to civic integrity. In our Stakeholder engagement and consultation section we have discussed the range of engagements we have undertaken this year. Specifically to mitigate this systemic risk, we cooperated with the German Minister of Foreign Affairs, French Viginum and the French Ministry of Foreign Affairs, as well as with the Polish government's cybercrime centre, exchanging leads and information on investigations into coordinated networks on the service. Cooperation with Germany, France and Poland on this front is ongoing and framed under the "Weimar group" established between the three EU Member States to tackle foreign interference online. X teams were also in contact with the European External Action Service (EEAS) and the European Digital Media Observatory (EDMO) during the EU elections to exchange alerts and relevant information on potential threats to the platform's integrity during the elections. We also visited Slovakia ahead of their election to meet relevant agencies. At a more global level, X is also in contact with NATO to allow the agency to share information related to misleading information and foreign interference. We also have escalation paths established between X and the Access Now Digital Helpline and Article19 to provide support as needed to civil society groups. We have continued to collaborate with our existing partners as well as law enforcement authorities, notably in the context of threats to public security. For more information, please refer to our Y1 report.

23 https://www.globalwitness.org/en/campaigns/digital-threats/ticked-tiktok-approves-eu-elections-disinformation-ads-publication-ireland/

Zoom-in: EU elections

As we build the most trusted global town square, we know that the public debate around elections happens on X. We are proud that our platform powers democratic discourse and life around the world, and for us, authenticity, accuracy, and safety are fundamental to how we approach elections.
Our consideration of authenticity has two principal dimensions: accounts and conversation. Our Safety team constantly monitors the service for action under our policies around Platform Manipulation and Spam. Our teams consistently thwart and disrupt threat campaigns designed to degrade the integrity of the platform.

We strongly believe that freedom of speech and safety must coexist, and the election context brings with it a diverse set of challenges that may include abuse and harassment, violent content, deceptive identities and impersonation, violent hateful entities, hateful conduct, synthetic and manipulated media, political advertising (where applicable), and misleading information about how to participate and vote.

Our EU elections response involved a cross-company effort, with multiple teams providing additional monitoring to identify potential violations of the X Rules on top of Safety's existing enforcement mechanisms and other mitigation measures, such as 24/7 escalations support. In the months before the elections, we participated in a series of events organised by the European Commission (DG CNECT), such as stress tests on platforms' preparedness to prevent and tackle threats to election integrity, and election roundtables to share information on identified potential harms, as well as on platforms', EU institutions' and member states' initiatives to protect civic integrity. We also presented our elections approach and an overview of X's election integrity efforts to Coimisiún na Meán and other Digital Services Coordinators. Ahead of, during, and after the EU elections, we activated a comprehensive set of measures and engagements to protect civic processes, which included:

● Proactive engagement: We proactively engaged and exchanged information with the European Commission, the European External Action Service (EEAS), the European Parliament, and key authorities of the 27 Member States. As part of this engagement, we provided crisis response contact points to the European Commission, European Parliament, and DSCs, and delivered safety training to more than 60 EU-based NGOs on how to maximise use of safety tools on the platform. X also proactively cooperated with the European Commission and Member States on identifying and disrupting networks of inauthentic profiles that were posing a threat to election integrity. We are proud that our work on elections was praised by the European Commission, a number of Member States, the European Parliament's communication service and the EEAS, as communication moved smoothly during the election and escalations were dealt with promptly;

● Media literacy campaigns: To promote civic engagement, we supported media literacy campaigns with trusted partners and recognised experts in the EU, such as the European Parliament, the European Digital Media Observatory (EDMO), and the European Regulators Group for Audiovisual Media Services (ERGA), that aimed at providing reliable information on the EU elections. Specifically, we supported media literacy campaigns via ads credits and received positive feedback from ERGA on the reach obtained by their campaign thanks to the credits;
● Election enforcement period: Leading up to the EU elections, we activated our Civic Integrity policy and conducted additional monitoring on top of Safety's existing enforcement mechanisms.

Negative effects to democratic processes, civic discourse, and electoral processes

Severity

● Scope: The amplification of false or misleading information on X, combined with harassment and intimidation of people, notably vulnerable groups, related to electoral processes, can have a significant impact on civic participation. As a multi-dimensional harm that also impacts vulnerable groups, this was assessed to have a high scope;

● Scale: DSA illegal content user reports under 'Negative Effects on Civic Discourse or Elections' accounted for less than ■ of the total user reports received between October 2023 and June 2024. However, conversations regarding politics are among the top items discussed on X globally and receive significant engagement.24 As such, this risk was assessed to have a high reach;

● Remediability: Risks related to false and misleading information can be remedied by providing users with additional context, such as a Community Note or a Synthetic and Manipulated Media label. As such, this has been assessed to be possibly remediable;

● Based on the assessments above, the risks to democratic processes, civic discourse, and electoral processes are assessed to have a high severity.

24 https://x.com/XData/status/1764757748707672167

Inherent risk

Based on the probability of risks to democratic processes, civic discourse and electoral processes on the platform, along with the high severity of such a risk, this area has a high inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.

Control strength

In addition to the global controls for risks to democratic processes, civic discourse, electoral processes, and public security described above, specific controls targeting this risk include:

● Article 35(1)(b) - Policies & enforcement: Our Civic Integrity, Synthetic and Manipulated Media, and Platform Manipulation and Spam policies primarily cover this area and are well-defined. X has effective means of removing bad actors, including actors attempting to inauthentically manipulate user conversations, at scale, through enforcement of the Platform Manipulation and Spam policy;

● Article 35(1)(f) - Elections playbooks and 'retros': Election-specific processes to prepare for and run during elections are in place and well documented, such as our election playbooks. Following an election, the cross-functional election working group builds a retrospective analysis of the enforcement taken during the relevant time frame. This 'retro' acts as a feedback loop to inform the working group in future efforts.

Over the last year, further controls have been implemented and existing controls have been improved upon, in alignment with Article 35, to target this risk:

● Article 35(1)(b) - Policies: Our Civic Integrity policy was launched in mid-September 2023 to address voter intimidation and suppression during elections. In preparing for each election and the enforcement of the Civic Integrity policy, teams prepare guidelines to ensure reviewers have relevant information and the regional and linguistic context of the country in question;

● Article 35(1)(f) - Election risk assessments: For each national election, X conducts an assessment to evaluate the election's potential risk to civic discourse and electoral processes on X, which allows us to determine what services or additional mitigations to activate on top of our already existing and comprehensive policies and enforcement processes;
● Article 35(1)(a) - Community notes: This feature is now live in 72 countries, including all EU member states, and over 30% of ratings come from EU contributors, indicating interest and engagement from users in the EU. For further data on this feature, please refer back to Zoom in: Community Notes;

● Article 35(1)(f) - Partnerships: Over the past year, X has cooperated with the French VIGINUM Taskforce, ■, and, more recently, with the Weimar Triangle (consisting of the French, German and Polish Ministries of Foreign Affairs), exchanging leads and information on investigations into coordinated inauthentic networks on the service. Additionally, and especially during the EU elections, X teams have also actively cooperated with the European External Action Service (EEAS), the European Parliament and the European Commission's communication teams, as well as other key stakeholders like the European Digital Media Observatory (EDMO) and EU DisinfoLab, and key authorities from the 27 Member States;

● Article 35(1)(f) - Election integrity: We have a cross-functional working group focused on election integrity, and increasing the resources allocated to ensuring election integrity is an ongoing process;

● Article 35(1)(i) - Election product interventions: In both the EU elections and the French legislative elections, we launched Home and Search timeline prompts, which surfaced official information from the European Parliament and the French Ministry of the Interior, respectively, to users. This has received positive feedback from the French government, which attributes 45% of the total traffic on their interministerial webpage on the French legislative elections to X. This activity was cited as a direct consequence of trend takeovers and election day and reminder prompts. Additionally, we launched multiple hashmojis for election-related hashtags for both the European and French legislative elections.

This control is assessed to be defined. Over the past year, we have made efforts to expand and develop measures and policies specific to elections, as outlined above. Robust quality assurance frameworks will be implemented and processes will continue to be improved. Generally, processes tend to be more proactive than reactive, and they are well characterised and understood across all organisation verticals.

Tier 2 priority

Due to the high inherent risk of this area, which is mitigated by controls of a defined nature, the residual risk for negative effects to democratic processes, civic discourse and electoral processes is a medium risk item, making it a Tier 2 priority. As our controls have improved, the residual risk remains managed; nevertheless, we will continue to evaluate these risks and our controls as they evolve.

Risks to public security

Severity

● Remediability: … rarely remediable;

● Based on the above, the risk of public security on the platform is assessed to have a high severity.

Inherent risk

Based on the probability of risks to public security on the platform, along with the high severity of such a risk, this area has a high inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.

Control strength

In addition to the global controls for risks to democratic processes, civic discourse, electoral processes, and public security described above, specific controls targeting this risk include:

● Article 35(1)(b) - Policies & enforcement: Public security risks are enforced under the Violent Content, Illegal or Certain Regulated Goods or Services, Violent and Hateful Entities, and Perpetrators of Violent Attacks policies, and to a lesser extent the Abuse and Harassment and Impersonation policies.
The DSA reporting form also has a category dedicated to 'risk for public security';

● Article 35(1)(c) - Consistent moderation: The above policies are accompanied by cohesive, consistent processes that enable agents to make risk-informed decisions, allocate resources and apply timely and appropriate remediation measures. For the Violent Content, Violent and Hateful Entities, and Abuse and Harassment policies, X employs both automated and manual enforcement mechanisms.

Over the last year, further controls have been implemented and existing controls improved upon, in alignment with Article 35, to target this risk:

● Article 35(1)(b) - Policies & enforcement: We have conducted a comprehensive policy review, which has led to improvements in X policies, particularly around Violent Media;

● Article 35(1)(f) - Violent entities: We have made changes to our global list of designated violent entities and expanded it, as part of our continuous work to carry out comprehensive assessments. We have also increased proactive monitoring and enforcement for violent entities;

● Article 35(1)(c) - Incident response and post-incident reviews: We have continued to enhance feedback mechanisms with post-incident reviews and regular syncs to ensure that enforcement aligns with the spirit and purpose of the policies. We continue to have internal incident response protocols in place for when a high-visibility event occurs and virality triggers rapid and widespread proliferation of various content types on the platform. Even if the incident does not reach the 'crisis' level, our escalations team may direct resources toward an immediate response.

The current mechanisms in place are defined, scalable, and operating effectively. X has well-developed policies to moderate content that promotes or celebrates violence or endangers public security across corresponding teams (enforcement/operations, training, engineering, data analytics, and external engagement) and ensures policy development, enforcement and maintenance is up to date. As a result, the control strength is assessed as defined.

Tier 2 priority

Due to the high inherent risk of this area, which is mitigated by controls of a defined nature, the residual risk to public security is a medium risk item, making it a Tier 2 priority. As our controls have continued to evolve, the residual risk remains managed; nevertheless, we will continue to evaluate these risks and our controls as they may continue to evolve. Our efforts to continue to address residual risk are detailed in VIII. Considerations for further mitigations.

D. Public health, physical and mental well-being, and gender-based violence

This systemic risk area considers the risk of negative effects to public health, including harms to physical and mental well-being and gender-based violence (GBV). As discussed in our Y1 report, the discourse around the usage of social media and its impact on health remains varied. While all online platforms may be misused as a vector for risks, there are notable positive influences on public health, mental and physical well-being, as well as the rights of vulnerable populations. In comparison to Y1, the inherent risks and residual risks for this systemic risk area stayed the same, indicating that this continues to be a managed risk area. The following graph shows the inherent and residual risks for this area in Y2.

Fig. 13: Comparison of inherent and residual risk for public health and gender-based violence
Inherent risks

Our analysis suggests that, globally, X users spend an average of 30 minutes a day on the platform.25 The full extent to which negative interactions and exposure to graphic content may harm users' psychological well-being is yet to be determined. Similarly, misuse of the platform to promote dangerous activities or misleading information may be detrimental to public health. Although no public health crisis has been declared in the EU or globally in the past year, this risk area may still be present at a societal level through users amplifying misleading information related to public health, and at an individual level through users who may share sensitive or harmful media, such as self-harm content and discussions promoting eating disorders.

GBV may result in risks to physical safety, especially when it involves non-consensual intimate image sharing or outing of a victim's identity. Such abuse may further result in impacted communities self-censoring their voice. The use of AI tools can exacerbate the risks of dissemination of GBV content, as seen, for example, in the sharing of non-consensual nudity (NCN) imagery related to Taylor Swift. Although X allows consensual adult content on the platform, there is a risk of illegal pornographic content being disseminated; this may include CSAM, NCN and intimate imagery either shared or produced without the consent of the person depicted in the image.

Controls to mitigate the risk to public health, physical and mental well-being, and gender-based violence

Policies & enforcement (Article 35(1)(b))

In order to mitigate the identified inherent risks, we have developed a comprehensive and targeted set of policies that capture all our services and features. X's Rules and revenue policies govern what can be shared and advertised or promoted on the platform, prohibiting illegal content and limiting content that could potentially be harmful.

X has multiple policies that capture this risk area. For risks to public health, this includes Abuse and Harassment, Platform Manipulation and Spam, Suicide and Self-harm, Child Safety, and Illegal or Certain Regulated Goods or Services, as well as Self-Harm and Unsafe and Illegal Products under the DSA reporting categories. For risks of gender-based violence, this includes Abuse and Harassment, Sensitive Media, and Non-Consensual Nudity, as well as Non-Consensual User Behaviour and Pornography or Sexualised Content under the DSA reporting categories. These policies are enforced using a wide range of measures, including content labelling, restrictions, removals, and account suspensions.

Over the past year, as part of our ongoing commitment to refine our policies and enforcement, we have conducted a comprehensive audit of our existing guidelines and workflows. As mentioned in IV. X Risk Environment: Influencing Factors & Controls, this audit led to improvements in X policies, particularly around consensual Adult Content and Violent Media. As before, X takes a nuanced approach to sexual content whereby we allow space for consensual sharing and self-expression, but at the same time draw a clear line when it comes to non-consensually shared nudity or sexual content. Users are allowed to post Adult Content - which includes adult nudity and sexual behaviour - provided that it is properly labelled with a content warning so that users who do not wish to see it can avoid it.

25 https://x.com/XData/status/1769826435576037702
However, this content is not allowed in highly visible areas, including live videos, profile pictures, headers, banners, or Community cover photos. As minors' accounts are defaulted to protected, they are not exposed to such labelled content either.

Product-level controls (Article 35(1)(a))

X has a suite of product-level features to mitigate against potential harms related to public health, physical and mental well-being and GBV that may manifest on the platform, which includes Community Notes and content warning labels. Content warning labels can be proactively added by users or reactively added by our content moderators. User safety features such as block/mute, account filters, and protecting posts/controlling replies also limit exposure to harmful content.

If a user searches for terms related to self-harm or suicide in certain countries, X guides the user towards resources with expertise in crisis intervention and suicide prevention that the user can contact. Users can also alert the X team focused on handling reports associated with accounts that may be engaging in self-harm or suicidal behaviour. For further information on our controls and enforcement in this area, please refer to our Y1 report.

Zoom-in: GenAI & Gender-Based Violence – Taylor Swift Deepfake

At the beginning of 2024, X became aware of AI-generated Non-Consensual Nudity (NCN) depicting the singer Taylor Swift being spread on the platform. Immediately on being alerted to this trend, X initiated its incident response protocol, allowing it to take prompt and comprehensive steps to stop the spread of these images.

Working around the clock, teams from across the company carried out proactive sweeps to remove violative content and to suspend the accounts of bad actors and repeat offenders. Our sweeps were escalated as the incident progressed and the volume of violative content increased. Ad-hoc guidance was issued and further training provided to our enforcement teams at short notice to respond to the incident. A statement was published on @Safety, sending a clear signal regarding our zero-tolerance approach to Non-Consensual Nudity. As a temporary safety measure, searches for "Taylor Swift" were blocked on the platform.

The enforcement numbers from the incident as of Feb 21, 2024, when the sweeps were ceased, are provided below:

Account suspension: ■
Post removal: ■
Post removal (one-off): ■
Content warning label: ■

The actions we took are a testament to the flexibility and robustness of our incident response mechanisms, and are in line with our zero-tolerance approach to non-consensual nudity. At the same time, the event proved to be a valuable opportunity for X to improve our products and policies. Efforts include:

● Following a post-incident review, we conducted a policy-mapping exercise and clarified with our operational teams how to enforce our rules on AI-generated deepfakes;

● A tooling exercise was conducted to improve our automated systems and their recognition of various hashes related to Non-Consensual Nudity.
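The hash-recognition improvement mentioned in the second bullet can be pictured with perceptual hashing, which is robust to re-encoding, resizing and small edits. The sketch below uses the open-source imagehash package purely for illustration; the hash value, threshold and function name are placeholders, and X's actual tooling (and the hash formats shared by partners such as StopNCII) may differ.

```python
import imagehash            # open-source perceptual hashing package
from PIL import Image

# Hashes of known violative images, e.g. received through a hash-sharing
# arrangement; stored as hex strings. The value here is a placeholder.
KNOWN_NCN_HASHES = [imagehash.hex_to_hash("d1c4f0f0e0c08000")]
MAX_DISTANCE = 8            # Hamming-distance threshold; tuning is a policy choice

def matches_known_ncn(image_path: str) -> bool:
    """Flag near-duplicates of known violative media for enforcement."""
    candidate = imagehash.phash(Image.open(image_path))
    # A small Hamming distance between perceptual hashes indicates a
    # likely re-upload or lightly edited copy of a known image.
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_NCN_HASHES)
```

The advantage of this approach over exact (cryptographic) hash matching is that re-uploads survive compression and cropping, which is exactly the evasion pattern seen in incidents of this kind.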
Risks to public health and physical and mental well-being

Unprecedented use of social media can negatively impact users' mental health and, in severe cases, their physical health. On a societal level, risks include the dissemination of harmful or false health information, particularly during public health emergencies, and content that undermines trust in health institutions and professionals. There is also a risk to the fundamental right of free expression when discussing public health topics: as we have seen in the past with examples such as the Covid-19 pandemic, there can be significant public discussion on public health measures that evolve over time. On an individual level, users may encounter harmful content such as bullying, harassment and self-harm, or develop issues like addiction and reduced attention span due to the platform's design and functionality. A recent study by Internet Matters of children aged 9-15 and their parents found that active users were more likely to encounter harm online. At the same time, this age group experienced more positives across all the dimensions of wellbeing - developmental, emotional, physical, and social - compared with their less active counterparts.26 Over the last year, there has been no particular incident that has changed the risk profile of this harm.

Probability

Between October 2023 and June 2024, X actioned ■ posts and accounts for violations of the Abuse and Harassment, Suicide and Self-Harm, and Sensitive Media policies.27 However, there is no clear correlation between some of the sub-harms that can trigger the enforcement of the listed violations and an impact on public health. For example, enforcement for Abuse and Harassment could be the result of a slur being targeted at a user; however, there is no direct indication that this may have impacted the user's mental health. As such, while we recognise the risks to public health stemming from our platform, the full effects remain unknown, as they are related to individual determinants of wellbeing. The probability for this risk is possible.

Severity

● Scope: Users amplifying false and misleading information about public health related items, or promoting the sale of counterfeit documentation, may result in societal harm and has the potential to cause physical harm. Furthermore, risks to physical and mental health inherently constitute physical and/or psychological harm, and may target vulnerable groups. As such, the scope is assessed to be very high;

● Scale: ■ of user reports received by X were for X Rules violations that overlapped with this risk area. This indicates that the reach of this type of content on the platform is wide, putting the scale at high;

● Remediability: Although mitigation measures could potentially help limit the extent of the harm, the remediability for negative health outcomes that have already occurred is limited, especially when it comes to the impact of public health crises. As such, this harm is possibly remediable;

● Based on the assessments above, the risk to public health on the platform is assessed to have a high severity.

Inherent risk

Based on the probability of risks to public health on the platform, along with the high severity of such a risk, the inherent risk of this area is medium, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.

26 https://www.internetmatters.org/wp-content/uploads/2023/02/Internet-Matters-Childrens-Wellbeing-in-a-Digital-World-Index-report-2023-2.pdf
27 It should be noted that Sensitive Media included both adult content and violent content. Following the policy update, this policy has now been separated. However, for the purpose of this risk assessment, we are unable to provide data that is specific to adult content. This will be updated in next year's assessment.
Control strength

In addition to the global controls for risks to public health and negative effects to physical and mental well-being described above, specific controls targeting this risk include:

● Article 35(1)(b) - Policies & enforcement: X has a suite of policies to enforce against risks to public health, as well as negative effects to physical and mental well-being, such as the Abuse and Harassment, Sensitive Media, and Suicide and Self-harm policies. The latter prohibits users from promoting or encouraging suicide or self-harm content;

● Article 35(1)(i) - Mental health prompts: X has product features in place with suicide and self-harm resources, such as mental health prompts in certain countries that appear when users search for words related to suicide and self-harm;

● Article 35(1)(c) - Restricted reach and rate limiting: These features work to reduce the impact of misleading activity on the platform by reducing impressions and limiting the number of actions an account can take (a schematic sketch of rate limiting appears at the end of this subsection);

● Article 35(1)(a) - Safety features: X has content warning labels on graphic and adult media, and sensitive content settings;

● Article 35(1)(f) - Crisis response: X's crisis response protocol is based on a tiered approach that assesses risk of harm, business risks, and urgency. This informs the crisis activation procedure, and assigned ratings allow X to deploy an appropriate response based on the level of risk and prioritisation of each crisis;

● Article 35(1)(c) - Reporting workflows: Reporting mechanisms are in place for users to submit reports on rules violations, particularly Suicide and Self-harm, with the ability to appeal if they feel the wrong action was taken;

● Article 35(1)(i) - Resources: If a user is thinking about engaging in self-harm or suicidal behaviour, we have resources available that allow people to contact services with expertise in crisis intervention and suicide prevention. Users can also alert the X team focused on handling reports associated with accounts that may be engaging in self-harm or suicidal behaviour if they encounter this type of content on X.

Over the last year, further controls have been implemented and existing controls improved upon, in alignment with Article 35, to target this risk:

● Article 35(1)(i) - Partnerships: X provided ads credits for a public health campaign by the Red Cross, in partnership with the French government, to encourage people to practise sport 30 minutes a day to stay in good health;

● Article 35(1)(a) - Community notes: This feature has proven helpful to people from different points of view, and significantly reduces sharing of potentially misleading posts. For more information on improvements to this feature, please refer to Zoom in: Community Notes.

The current mitigation measures are defined, well-documented and repeatable. Additionally, most of our mechanisms are proactive, which allows us to limit misinformation within the platform. There is an established process for integrating feedback to mitigate process deficiencies. As such, the control strength is defined.

Tier 3 priority

Due to the medium inherent risk of this area, which is mitigated by controls of a defined nature, the residual risk to public health, as well as negative consequences to a person's physical and mental well-being, is a low risk item, making it a Tier 3 priority. As our controls evolve and public health conditions change globally, we continuously assess these risks and refine our measures. Notably, there may be product solutions that can support individuals' mental health, such as more curated support for victims of self-harm and cyberbullying. Such considerations are detailed in VIII. Considerations for further mitigations.
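As referenced in the 'Restricted reach and rate limiting' bullet above, rate limiting caps the number of actions an account can take in a given window. A token bucket is one standard way to implement such a cap; the sketch below is a generic illustration with made-up limits, not X's actual mechanism or parameters.

```python
import time

class TokenBucket:
    """Illustrative per-account rate limiter: each action consumes a token;
    tokens refill at a fixed rate, capping sustained activity."""

    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top up tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # action denied: the account has hit its rate limit

# Hypothetical limits: bursts of up to 50 posts, refilling one every 36 seconds.
limiter = TokenBucket(capacity=50, refill_per_second=1 / 36)
```

A bucket like this allows normal bursty behaviour while bounding the sustained throughput available to spam or coordinated amplification.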
Risks of gender-based violence

Due to similarities in the harms and controls, this year our assessment for GBV also considered the risk to the fundamental right of 'respect for private and family life'. As such, this risk area includes cyberviolence - such as sexual harassment, violent speech, and gendered hate speech - sexual exploitation, non-consensual nudity (NCN), intimate imagery, disclosure of private information, sharing images of one's likeness without their permission, and threats to expose individuals' private information or media.

Over the last year, there have been a few incidents that have shown how AI tools may be used to exacerbate the risks of dissemination of content that may constitute GBV. The best-known case is the Taylor Swift NCN incident, which primarily affected the US (discussed above).

Probability

Between October 2023 and June 2024, following DSA illegal content reports, X actioned ■ items under the categories of Non-Consensual Behaviour and Pornography or Sexualised Content, although this is only a small fraction of the total enforcement on illegal content reports. However, X took a total of ■ actions for violations of Non-Consensual Nudity, Abuse and Harassment, and Sensitive Media – i.e., approximately ■ of all its X Rules enforcement actions in this time (excluding Platform Manipulation and Spam enforcement). Although not all of this may have been gender-specific (for example, Abuse and Harassment violations may go beyond gendered harassment), an overlap nevertheless exists. As such, the probability of this risk on X is assessed to be likely.

Severity

● Scope: The scope of the sub-risks within gender-based violence spans across physical, psychological, economic, societal, and informational harms, and they impact vulnerable groups. For example, dissemination of non-consensual nudity may pose significant risks to physical safety in countries where women and marginalised groups are disproportionately vulnerable to violence and reputational harm. Exposure of private content may impact an individual's financial security, be a reason for sextortion/blackmail, and result in the loss of further economic opportunities. As such, the scope of harm of this risk is very high;

● Scale: The sub-risks within gender-based violence have a range of reach, depending on the nature of the risk. For example, between October 2023 and June 2024, X received approximately ■ user reports for the Sensitive Media policy, which accounts for only ■ of total user reports.28 X received only ■ user reports for NCN in the same time period. Under DSA illegal content reports, X received ■ user reports across the categories of non-consensual behaviour and pornography or sexualised content; however, this also accounts for only ■ of the total DSA user reports received between October 2023 and June 2024. As such, the reach of this harm ranges from low to moderate;

● Remediability: When considering GBV, remediation is unlikely to restore the individual to their state prior to the impact. As a result, the sub-risks within this area range from possibly remediable (e.g.
respect for private and family life) to not remediable (e.g. gender-based violence and NCN);

● Based on the assessments above, the risk of gender-based violence on the platform is assessed to have a high severity.

28 It should be noted that Sensitive Media included both adult content and violent content. Following the policy update, this policy has now been separated. However, for the purpose of this risk assessment, we are unable to provide data that is specific to adult content. This will be updated in next year's assessment.

Inherent risk

Based on the probability of risks of GBV on the platform, along with the high severity of such a risk, this area has a high inherent risk, when assessed as a hypothetical scenario without considering the existing controls that reduce the risk.

Control strength

In addition to the global controls for risks to public health, negative effects to physical and mental well-being and gender-based violence described above, specific controls targeting this risk include:

● Article 35(1)(b) - Policies & enforcement: X enforces on GBV via the Abuse and Harassment, Hateful Conduct, NCN, and Illegal or Certain Regulated Goods or Services (including sexual services) policies, and media policies relating to Violent Content and Adult Content. We provide clear guidelines to our enforcement teams and we regularly update our policies and guidelines to reflect changes in trends;

● Article 35(1)(c) - Training: In order to sensitise our enforcement teams, we have also created cultural abuse training to help teams better understand how vulnerable groups tend to be targeted. We have regular meetings with agents to go through edge cases. We also provide detailed guidance to agents when they are reviewing cases in different languages;

● Article 35(1)(c) - Moderation: Both proactive and reactive enforcement is used for this risk area, with tight feedback loops; and

● Article 35(1)(a) - Safety features: Features such as block/mute, account filters, and controlling replies allow users to protect themselves from potential GBV.

Over the last year, further controls have been implemented and existing controls improved upon, in alignment with Article 35, to target this risk:

● Article 35(1)(b) - Policies & enforcement: We have conducted a comprehensive policy review, which has led to improvements in X policies, particularly around consensual Adult Content. We have also updated our Abuse and Harassment guidelines to account for unwanted sexualisation and objectification using AI-generated content;

● Article 35(1)(c) - Incident response: Following the Taylor Swift NCN incident, a post-incident report was created with a number of suggested improvements for the future. For further detail, please refer to Zoom-in: GenAI & Gender-Based Violence – Taylor Swift Deepfake;

● Article 35(1)(f) - Partnerships: X has recently partnered with StopNCII to work towards mitigating the risks of NCN. For more information on this, refer to VIII. Considerations for further mitigations.

The current mechanisms in place are defined, repeatable and operating effectively. Processes are well characterised and understood. While many of the controls in this area may be considered to be 'managed', there is no proactive enforcement for NCN. As such, the overall control strength is defined.

Tier 2 priority

Due to the high inherent risk of this area, which is mitigated by controls of a defined nature, the residual risk of gender-based violence is a medium risk item, making it a Tier 2 priority.
As our controls have continued to evolve, the residual risk remains managed; nevertheless, we will continue to evaluate these risks and our controls as they may continue to evolve. Our efforts to continue to address residual risk are detailed in VIII. Considerations for further mitigations.

VIII. Considerations for further mitigations

Despite an increase in political and societal risks in 2024, over the last year the residual risks have reduced in several areas in comparison to Y1. Notably, the residual risk has improved across five areas – illegal hate speech, CSAM, freedom of expression, other fundamental rights, and democratic processes, electoral processes, and civic discourse. For risks to consumer protection, due to the expansion of this assessment to also consider the risk of sale of illegal goods and services, the residual risk has marginally increased from Y1, while still remaining a low risk.

Fig. 12: Comparison of residual risk between Y1 and Y2

This improvement in residual risk comes both as a result of a more refined evaluation of the risks on the platform, based on the more data-driven approach, as well as improvements in our controls over the last year. Notably, for illegal content, several measures put in place to comply with the DSA have increased our suite of controls tackling illegal content in the EU. Similarly, improvements to our restricted reach labelling, the launch of our Civic Integrity policy, as well as collaboration with external stakeholders such as EDMO and other government bodies, have improved our controls and overall reduced the assessed risks to fundamental rights and democratic processes.

The following prioritisation derives from the residual risk calculation, and informs the considerations for further mitigations in Y2. Ultimately, we recognise that these systemic risks continue to evolve, and as such we remain committed to our vigilance in managing these risks. It is important to note that we diligently continue to monitor and mitigate the risk areas considered as Tier 3 priorities so that they remain at a low residual risk; however, this tiering allows us to prioritise our efforts over the next months to tackle the highest risk areas on our service first.

In line with Article 35, the following table outlines further reasonable, proportionate and effective mitigation measures X plans to explore in Y2, with particular consideration given to the impacts of such measures on fundamental rights. These measures are additional improvements and avenues to consider, stemming as a result of this risk assessment, and will be considered in conjunction with our current suite of controls.

Systemic risk: Measures that target systemic risks horizontally
Considerations for further mitigations:
● Article 35(1)(a): X will continue to improve on Community Notes. As of July 2024, users can request a Community Note on a post they believe would benefit from one. We also aim to continue making improvements to application speed;
● Article 35(1)(c): X will continue efforts to ensure that reporting options are better targeted and more effective across all policy areas;
● Article 35(1)(b): X will continue to conduct policy reviews for potential improvements and simplification;
● Article 35(1)(c): X will continue to iterate and improve upon automated moderation techniques for improved detection of violative content before it is reported.

Systemic risk: Risk of dissemination of illegal content
Considerations for further mitigations: ■

Systemic risk: Risks of negative effects to fundamental rights
Considerations for further mitigations: ■

IX. Annex: Matrices
1. Probability matrix
Fig. 13: Probability scale for the purpose of the DSA risk assessment

2. Severity matrix
Fig. 14: Severity scale for the purpose of the DSA risk assessment

3. Inherent risk matrix
Fig. 15: Inherent risk matrix for the purpose of the DSA risk assessment

4. Control strength matrix
Fig. 16: Control strength scale for the purpose of the DSA risk assessment

5. Residual risk matrix
Fig. 17: Residual risk matrix for the purpose of the DSA risk assessment
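Read together, the matrices encode two lookups: probability and severity combine into an inherent risk rating, and inherent risk combined with control strength yields the residual risk that drives tiering. The sketch below reconstructs that flow purely from the worked examples in this report (for instance, high inherent risk with 'defined' controls yielding a medium residual risk and a Tier 2 priority). The table entries shown are only the pairings the report states, and the mapping is our illustrative reading of Figs. 13-17, not the actual matrices.

```python
# Hypothetical reconstruction of the annex lookups from this report's
# worked examples; the real scales and cut-offs are defined in Figs. 13-17.

RESIDUAL = {  # (inherent risk, control strength) -> residual risk
    ("medium", "managed"): "low",     # e.g. protection of minors (Tier 3)
    ("high",   "managed"): "low",     # e.g. protection of personal data (Tier 3)
    ("medium", "defined"): "low",     # e.g. other fundamental rights, public health (Tier 3)
    ("high",   "defined"): "medium",  # e.g. electoral processes, public security, GBV (Tier 2)
}

TIER = {"low": "Tier 3", "medium": "Tier 2", "high": "Tier 1"}

def prioritise(inherent: str, control_strength: str) -> tuple[str, str]:
    """Map an assessed area to its residual risk and priority tier."""
    residual = RESIDUAL[(inherent, control_strength)]
    return residual, TIER[residual]

print(prioritise("high", "defined"))  # -> ('medium', 'Tier 2')
```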