{ "version": "https://jsonfeed.org/version/1.1", "title": "To Read - Research Papers", "description": "Academic papers from Paperpile enriched with metadata", "home_page_url": "https://github.com/user/toread", "feed_url": "https://github.com/user/toread/feed.json", "language": "en-us", "authors": [ { "name": "ToRead Bot", "url": "https://github.com/user/toread" } ], "items": [ { "id": "bibtex:Balluff2026-if", "title": "Newer, larger, better? A critique of the unreflective LLM adoption in communication research", "content_text": "Published in Polit. Commun. | Year: 2026 | Authors: Balluff, Paul, Ho, Justin Chun-Ting, Gruber, Johannes B, Palicki, Sean, Palmer, Alexis, Rossi, Luca,...", "date_published": "2026-02-20T00:00:00Z", "_discovery_date": "2026-02-20T13:52:33.451450Z", "url": "https://doi.org/10.1080/10584609.2026.2618486", "external_url": "https://doi.org/10.1080/10584609.2026.2618486", "authors": [ { "name": "Paul Balluff" }, { "name": "Justin Chun-ting Ho" }, { "name": "Johannes B. Gruber" }, { "name": "Sean Palicki" }, { "name": "Alexis Palmer" }, { "name": "Luca Rossi" }, { "name": "Irina Shklovski" }, { "name": "Chung-hong Chan" } ], "tags": [ "Article", "Political Communication" ], "content_html": "


", "_academic": { "doi": "10.1080/10584609.2026.2618486", "citation_count": 0, "reference_count": 38, "type": "article", "publisher": "Informa UK Limited", "pages": "1--10", "metadata_source": "crossref", "confidence_score": 0.8157894736842105, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Farkas2026-lr", "title": "Defending fact-checking partnerships with platform companies: ‘We can't fight alone against disinformation’", "content_text": "This article investigates how professional fact-checkers defend collaborations with major platform companies such as Meta, Alphabet, and ByteDance. Drawing on 12 qualitative interviews with European fact-checkers, the study applies rhetorical apologia theory to analyse recurring justificatory arguments. We identify four modes of differentiation and three modes of transcendence employed by fact-checkers. Arguments of differentiation involve distancing fact-checking from platform company partners ...", "date_published": "2026-02-16T00:00:00Z", "_discovery_date": "2026-02-20T06:11:27.219105Z", "url": "https://doi.org/10.1177/02673231261422085", "external_url": "https://doi.org/10.1177/02673231261422085", "authors": [ { "name": "Johan Farkas" }, { "name": "Mette Bengtsson" } ], "tags": [ "Article", "European Journal of Communication" ], "content_html": "

Abstract

This article investigates how professional fact-checkers defend collaborations with major platform companies such as Meta, Alphabet, and ByteDance. Drawing on 12 qualitative interviews with European fact-checkers, the study applies rhetorical apologia theory to analyse recurring justificatory arguments. We identify four modes of differentiation and three modes of transcendence employed by fact-checkers. Arguments of differentiation involve distancing fact-checking from platform company partners to emphasise editorial independence; distinguishing between different platform companies to legitimise partnerships with certain actors while rejecting others (notably TikTok owned by ByteDance); separating platform companies as a whole and specific employees within them; and contrasting platform funding with state funding to defend the former as less compromising for editorial autonomy. Arguments of transcendence invoke counter-factual scenarios of unmitigated misinformation; appealing to broader alliances against disinformation; and highlighting the potential for improving platform companies from within. These findings contribute to existing scholarship by unpacking how fact-checkers negotiate the complex institutional dependencies of platform company partnerships by simultaneously acknowledging risks and asserting pragmatic necessity. As such, the study provides a deeper understanding of the challenges facing fact-checking organisations and their efforts to establish legitimacy as epistemic authorities in the boundary terrain shared with other key actors in today's media landscape.


", "_academic": { "doi": "10.1177/02673231261422085", "citation_count": 0, "reference_count": 50, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Gauthier2026-iq", "title": "The political effects of X’s feed algorithm", "content_text": "Feed algorithms are widely suspected to influence political attitudes. However, previous evidence from switching off the algorithm on Meta platforms found no political effects1. Here we present results from a 2023 field experiment on Elon Musk’s platform X shedding light on this puzzle. We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks, measuring political attitudes and online behaviour. Switching from a chronological to an algorithmic feed i...", "date_published": "2026-02-18T00:00:00Z", "_discovery_date": "2026-02-19T06:33:46.898860Z", "url": "https://doi.org/10.1038/s41586-026-10098-2", "external_url": "https://doi.org/10.1038/s41586-026-10098-2", "authors": [ { "name": "Germain Gauthier" }, { "name": "Roland Hodler" }, { "name": "Philine Widmer" }, { "name": "Ekaterina Zhuravskaya" } ], "tags": [ "Article", "Nature" ], "content_html": "

Abstract

Feed algorithms are widely suspected to influence political attitudes. However, previous evidence from switching off the algorithm on Meta platforms found no political effects. Here we present results from a 2023 field experiment on Elon Musk’s platform X shedding light on this puzzle. We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks, measuring political attitudes and online behaviour. Switching from a chronological to an algorithmic feed increased engagement and shifted political opinion towards more conservative positions, particularly regarding policy priorities, perceptions of criminal investigations into Donald Trump and views on the war in Ukraine. In contrast, switching from the algorithmic to the chronological feed had no comparable effects. Neither switching the algorithm on nor switching it off significantly affected affective polarization or self-reported partisanship. To investigate the mechanism, we analysed users’ feed content and behaviour. We found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm, helping explain the asymmetry in effects. These results suggest that initial exposure to X’s algorithm has persistent effects on users’ current political attitudes and account-following behaviour, even in the absence of a detectable effect on partisanship. Among users initially on a chronological feed, 7 weeks of exposure to X’s algorithmic feed in 2023 shifted political attitudes and account-following behaviour in a more conservative direction compared with those remaining on a chronological feed, whereas switching the feed setting in the opposite direction, from algorithmic to chronological, had no effect.


", "_academic": { "doi": "10.1038/s41586-026-10098-2", "citation_count": 0, "reference_count": 47, "type": "article", "publisher": "Springer Science and Business Media LLC", "pages": "1--8", "metadata_source": "crossref", "confidence_score": 0.8333333333333333, "quality_score": 100 } }, { "id": "bibtex:Stanusch2026-ec", "title": "How AI is imagined by industry during the Sam Altman controversy", "content_text": "The aim of this research is to identify AI imaginaries, the issues they raise, and how the AI industry tackles them. It does so by choosing an opportune moment to map the issue space by virtue of a controversy around the firing and rehiring of OpenAI CEO Sam Altman. The sites for the mapping are X/Twitter and LinkedIn, where users frantically post not only about Altman but also about all manner of AI promises and pitfalls. By employing techniques from controversy mapping and digital research met...", "date_published": "2026-02-15T00:00:00Z", "_discovery_date": "2026-02-18T17:45:32.060369Z", "_date_estimated": true, "url": "https://doi.org/10.31124/advance.174979411.18178682/v1", "external_url": "https://doi.org/10.31124/advance.174979411.18178682/v1", "authors": [ { "name": "Natalia Stanusch" }, { "name": "Richard Rogers" } ], "tags": [ "New Media Soc.", "Article" ], "content_html": "

Abstract

The aim of this research is to identify AI imaginaries, the issues they raise, and how the AI industry tackles them. It does so by choosing an opportune moment to map the issue space by virtue of a controversy around the firing and rehiring of OpenAI CEO Sam Altman. The sites for the mapping are X/Twitter and LinkedIn, where users frantically post not only about Altman but also about all manner of AI promises and pitfalls. By employing techniques from controversy mapping and digital research methods, we locate contemporary AI imaginaries, assess their salience in a cross-platform perspective, and describe the stakes gleaned from the prominence of certain dominant imaginaries, such as longtermism, regulatory ambivalence, and techno-hagiography. We discuss the issues these imaginaries raise and the AI industry’s premediation and preclusion of them: the manners by which the AI industry strives to occupy the future and absorb the present.


", "_academic": { "doi": "10.31124/advance.174979411.18178682/v1", "citation_count": 1, "reference_count": 0, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Dierickx2026-tw", "title": "What is a fact in the age of generative AI? Fact-checking as an epistemological lens", "content_text": "Published in Inf. Commun. Soc. | Year: 2026 | Authors: Dierickx, Laurence, Opdahl, Andreas L, Bjerknes, Fredrik, Lindén, Carl-Gustav", "date_published": "2026-02-16T00:00:00Z", "_discovery_date": "2026-02-18T06:35:38.111534Z", "url": "https://doi.org/10.1080/1369118x.2026.2630697", "external_url": "https://doi.org/10.1080/1369118x.2026.2630697", "authors": [ { "name": "Laurence Dierickx" }, { "name": "Andreas L. Opdahl" }, { "name": "Fredrik Bjerknes" }, { "name": "Carl-Gustav Lindén" } ], "tags": [ "Article", "Information, Communication & Society" ], "content_html": "


", "_academic": { "doi": "10.1080/1369118x.2026.2630697", "citation_count": 0, "reference_count": 76, "type": "article", "publisher": "Informa UK Limited", "pages": "1--18", "metadata_source": "crossref", "confidence_score": 0.83, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Shi2026-ko", "title": "The Governance-Embedded Interactive Media Effect: the role of AI-generated disclosure in user credibility and engagement based on fact-checking videos on Chinese TikTok (Douyin)", "content_text": "Published in Chin. J. Commun. | Year: 2026 | Authors: Shi, Yiru", "date_published": "2026-02-13T00:00:00Z", "_discovery_date": "2026-02-18T05:19:50.109078Z", "url": "https://doi.org/10.1080/17544750.2026.2623075", "external_url": "https://doi.org/10.1080/17544750.2026.2623075", "authors": [ { "name": "Yiru Shi" } ], "tags": [ "Article", "Chinese Journal of Communication" ], "content_html": "


", "_academic": { "doi": "10.1080/17544750.2026.2623075", "citation_count": 0, "reference_count": 49, "type": "article", "publisher": "Informa UK Limited", "pages": "1--16", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Philipp2026-tl", "title": "Election research in the age of regulated data access under the EU Digital Services Act", "content_text": "Political debates, campaigns, and advertising increasingly take place on online platforms. However, research on election communication has been significantly hampered as formerly accessible data sources have been withdrawn or commercialised. With the implementation of the EU Digital Services Act (DSA), a number of platforms have (re-)established data access modalities for ‘public’ data via APIs or specific portals. How access to public data is implemented, what it contains, and who gets access al...", "date_published": "2026-02-15T00:00:00Z", "_discovery_date": "2026-02-17T06:14:16.218868Z", "_date_estimated": true, "url": "https://policyreview.info/articles/analysis/election-research-data-access-dsa", "external_url": "https://policyreview.info/articles/analysis/election-research-data-access-dsa", "authors": [ { "name": "Philipp Darius" }, { "name": "Johannes Breuer" }, { "name": "Simon Kruschinski" }, { "name": "Felicia Loecherbach" }, { "name": "Jasmin Riedl" }, { "name": "Sebastian Stier" } ], "tags": [ "Misc" ], "content_html": "

Abstract

Political debates, campaigns, and advertising increasingly take place on online platforms. However, research on election communication has been significantly hampered as formerly accessible data sources have been withdrawn or commercialised. With the implementation of the EU Digital Services Act (DSA), a number of platforms have (re-)established data access modalities for ‘public’ data via APIs or specific portals. How access to public data is implemented, what it contains, and who gets access all depends on decisions by the platforms. This leads to a series of inconsistencies, challenges, and limitations for election research. In this contribution, we discuss the implications of regulated data access under the DSA for election research. We first review central research questions and relevant data types for election research. Then, we provide a historical overview of how data access modalities have changed over the last two decades. Next, we discuss relevant articles of the DSA that aim to improve data access for academic research as well as different data access paths and modalities, including alternatives to APIs such as web scraping and data donations. Finally, we summarise key challenges and formulate requirements for data access to enable robust and reproducible election research.


", "_academic": { "type": "misc", "metadata_source": "url", "quality_score": 90, "quality_issues": [ "not_enriched" ] } }, { "id": "bibtex:Choi2026-bz", "title": "The modality-congruent carryover effect: How difficulty in identifying (deep)fake news impacts self-confidence in truth discernment and susceptibility to subsequent disinformation", "content_text": "With the proliferation of (deep)fake news, individuals will increasingly find it difficult to discern whether content is created by human journalists or artificial intelligence. A two-wave online experiment investigated the carryover effects of news authentication during the initial (deep)fake news exposure on responses to subsequent (deep)fake news. Participants encountered celebrity (deep)fake news in text and video formats on social media. The results showed that the difficulty induced by new...", "date_published": "2026-02-10T00:00:00Z", "_discovery_date": "2026-02-16T20:52:21.543674Z", "url": "https://doi.org/10.1177/10776990251413726", "external_url": "https://doi.org/10.1177/10776990251413726", "authors": [ { "name": "Sukyoung Choi" } ], "tags": [ "Article", "Journalism & Mass Communication Quarterly" ], "content_html": "

Abstract

With the proliferation of (deep)fake news, individuals will increasingly find it difficult to discern whether content is created by human journalists or artificial intelligence. A two-wave online experiment investigated the carryover effects of news authentication during the initial (deep)fake news exposure on responses to subsequent (deep)fake news. Participants encountered celebrity (deep)fake news in text and video formats on social media. The results showed that the difficulty induced by news authentication lowered self-confidence in truth discernment, which, in turn, decreased the perceived credibility and engagement intentions toward subsequent unrelated fake news of the same modality.


", "_academic": { "doi": "10.1177/10776990251413726", "citation_count": 0, "reference_count": 82, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Inacio-da-Silva2026-zf", "title": "Identifying potentially irregular electoral ads in Facebook during the Brazilian elections", "content_text": "The 2016 United States presidential election was marked by the abuse of targeted advertising on Facebook. Concerned with the risk of the same kind of abuse to happen in the 2018 Brazilian elections, we designed and deployed an independent auditing system to monitor political ads on Meta in Brazil. To do that we first adapted a browser plugin to gather ads from the timeline of volunteers using Facebook. We managed to convince more than 2,000 volunteers to help our project and install our tool. Th...", "date_published": "2026-02-06T00:00:00Z", "_discovery_date": "2026-02-16T20:52:21.543650Z", "url": "https://doi.org/10.1145/3796545", "external_url": "https://doi.org/10.1145/3796545", "authors": [ { "name": "Marcio Inacio da Silva" }, { "name": "Lucas de Oliveira" }, { "name": "Pedro Olmo Vaz de Melo" }, { "name": "Oana Goga" }, { "name": "Fabrício Benevenuto" } ], "tags": [ "Article", "ACM Transactions on the Web" ], "content_html": "

Abstract

The 2016 United States presidential election was marked by the abuse of targeted advertising on Facebook. Concerned with the risk of the same kind of abuse happening in the 2018 Brazilian elections, we designed and deployed an independent auditing system to monitor political ads on Meta in Brazil. To do that, we first adapted a browser plugin to gather ads from the timeline of volunteers using Facebook. We managed to convince more than 2,000 volunteers to help our project and install our tool. Then, we used a Convolutional Neural Network (CNN) to detect political Meta ads using word embeddings. To evaluate our approach, we manually labeled a collection of 10k ads as political or non-political and then provided an in-depth evaluation of the proposed approach for identifying political ads by comparing it with classic supervised machine learning methods. Finally, we deployed a real system that shows the ads identified as related to politics during the 2018 National Brazilian elections. We also investigated early electoral advertisements before the 2020 local Brazilian elections using our model on unsponsored content (regular posts in groups and pages). We noticed that not all political ads we detected were present in the Meta Ad Library for political ads in 2018. Additionally, we found possible early electoral advertisements in 2020, a practice that is forbidden in Brazil. Our results emphasize the importance of enforcement mechanisms for declaring political ads and the need for independent auditing platforms.


", "_academic": { "doi": "10.1145/3796545", "citation_count": 0, "reference_count": 68, "type": "article", "publisher": "Association for Computing Machinery (ACM)", "metadata_source": "crossref", "confidence_score": 0.8529411764705882, "quality_score": 100 } }, { "id": "bibtex:Heiss2026-qv", "title": "Addressing social media platforms’ influence on academic research", "content_text": "Social media platforms play a central role in shaping today’s information ecosystem, yet access to both their internal data and even publicly visible content remains tightly restricted for academic researchers. This stands in sharp contrast to other industries such as food and pharmaceuticals where researchers can independently study product ingredients and effects. As a result, academic research on social media faces an unprecedented dependency on industry-controlled data, increasing the risk o...", "date_published": "2026-02-15T00:00:00Z", "_discovery_date": "2026-02-15T07:10:31.064216Z", "_date_estimated": true, "url": "https://doi.org/10.31235/osf.io/ny2tx_v2", "external_url": "https://doi.org/10.31235/osf.io/ny2tx_v2", "authors": [ { "name": "Raffael Heiss" }, { "name": "Isabelle Freiling" } ], "tags": [ "Article", "Humanit. Soc. Sci. Commun." ], "content_html": "

Abstract

Social media platforms play a central role in shaping today’s information ecosystem, yet access to both their internal data and even publicly visible content remains tightly restricted for academic researchers. This stands in sharp contrast to other industries such as food and pharmaceuticals where researchers can independently study product ingredients and effects. As a result, academic research on social media faces an unprecedented dependency on industry-controlled data, increasing the risk of bias and potentially distorting the evidence needed for effective regulation and policymaking. Drawing on research from other disciplines, we examine how industry influence operates and how researchers’ reliance on platforms for data may amplify industry influence. We identify four challenges in collaborations between researchers and social media platforms: restricted data access, selective funding, hard-to-detect influence, and institutional entanglements. These challenges risk undermining the independence and transparency of research in a field of growing societal relevance. Addressing these challenges requires policymakers to regulate data access, as illustrated by the EU’s Digital Services Act (DSA), which mandates data access for vetted researchers while safeguarding user privacy. In addition, new independent funding mechanisms could help ensure that research agendas remain free from platform interests. In parallel, the social science community must adopt stronger ethical standards and invest in “research on research” to detect and mitigate potential biases in policy-relevant research. With a dual approach – policy reforms and critical academic debates – we can ensure that research on social media platforms serves the public interest rather than platform priorities.


", "_academic": { "doi": "10.31235/osf.io/ny2tx_v2", "citation_count": 0, "reference_count": 0, "type": "article", "publisher": "Springer Science and Business Media LLC", "volume": "13", "pages": "192", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Dehghan2026-sy", "title": "The entangled dynamics leading to the sedimentation of polarisation on political Reddit", "content_text": "Published in Inf. Commun. Soc. | Year: 2026 | Authors: Dehghan, Ehsan, Carlon, Dominique, Kasianenko, Kateryna, Nagappa, Ashwin, Padinjaredath Suresh, Vish", "date_published": "2026-02-05T00:00:00Z", "_discovery_date": "2026-02-12T19:43:06.526356Z", "url": "https://doi.org/10.1080/1369118x.2026.2623523", "external_url": "https://doi.org/10.1080/1369118x.2026.2623523", "authors": [ { "name": "Ehsan Dehghan" }, { "name": "Dominique Carlon" }, { "name": "Kateryna Kasianenko" }, { "name": "Ashwin Nagappa" }, { "name": "Vish Padinjaredath Suresh" } ], "tags": [ "Article", "Information, Communication & Society" ], "content_html": "


", "_academic": { "doi": "10.1080/1369118x.2026.2623523", "citation_count": 0, "reference_count": 75, "type": "article", "publisher": "Informa UK Limited", "pages": "1--24", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Yoo2026-ev", "title": "The role of the term “fake news” in U.S. alternative media: Shaping political discourse and hyperpartisanship", "content_text": "This study examines how the term “fake news” is strategically deployed in U.S. alternative media by analyzing Newsmax and Occupy Democrats as right- and left-leaning outlets. Drawing on Hallin’s sphere model and Egelhofer et al.’s coding categories, it uses manual content analysis and latent Dirichlet allocation topic modeling to identify rhetorical functions and patterns. Findings show that the term “fake news” functions mainly as a weaponized label to delegitimize opponents rather than describ...", "date_published": "2026-01-22T00:00:00Z", "_discovery_date": "2026-02-11T06:01:18.849873Z", "url": "https://doi.org/10.1177/10776990251410599", "external_url": "https://doi.org/10.1177/10776990251410599", "authors": [ { "name": "Joseph J. Yoo" }, { "name": "Hannah Lee" }, { "name": "Soontae An" } ], "tags": [ "Article", "Journalism & Mass Communication Quarterly" ], "content_html": "

Abstract

This study examines how the term “fake news” is strategically deployed in U.S. alternative media by analyzing Newsmax and Occupy Democrats as right- and left-leaning outlets. Drawing on Hallin’s sphere model and Egelhofer et al.’s coding categories, it uses manual content analysis and latent Dirichlet allocation topic modeling to identify rhetorical functions and patterns. Findings show that the term “fake news” functions mainly as a weaponized label to delegitimize opponents rather than describe misinformation. The results show how alternative media redraw boundaries between consensus, controversy, and deviance, underscoring their role in intensifying partisan discourse and redefining political communication.


", "_academic": { "doi": "10.1177/10776990251410599", "citation_count": 0, "reference_count": 56, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:Lyons2026-ca", "title": "Exposure to low-credibility online health content is limited and is concentrated among older adults", "content_text": "Older adults have been shown to engage more with untrustworthy online content, but most digital trace research has focused on political misinformation. In contrast, studies of health misinformation have largely relied on self-reported survey measures. Using linked survey and digital trace data from a national US sample (n = 1,059), we examine exposure to low-credibility health content across websites and YouTube. Here we show that the overall exposure to low-credibility health content is limited...", "date_published": "2026-02-04T00:00:00Z", "_discovery_date": "2026-02-10T06:52:51.874075Z", "url": "https://doi.org/10.1038/s43587-025-01059-x", "external_url": "https://doi.org/10.1038/s43587-025-01059-x", "authors": [ { "name": "Benjamin Lyons" }, { "name": "Andy J. King" }, { "name": "Rebecca L. Barter" }, { "name": "Kimberly A. Kaphingst" } ], "tags": [ "Article", "Nature Aging" ], "content_html": "

Abstract

Older adults have been shown to engage more with untrustworthy online content, but most digital trace research has focused on political misinformation. In contrast, studies of health misinformation have largely relied on self-reported survey measures. Using linked survey and digital trace data from a national US sample (n = 1,059), we examine exposure to low-credibility health content across websites and YouTube. Here we show that the overall exposure to low-credibility health content is limited but increases with age and is not solely driven by the volume of health-related browsing. Importantly, those who believe inaccurate health claims are more likely to encounter low-credibility content, suggesting that exposure is not merely incidental. While older adults consume less content on YouTube overall, a higher proportion of what they view is from low-credibility sources. Additionally, individuals who consume low-credibility political news are significantly more likely to encounter low-credibility health content. This suggests a shared consumption profile that spans topics and platforms. These results raise new concerns about how online communication environments may potentially shape public health and well-being.


", "_academic": { "doi": "10.1038/s43587-025-01059-x", "citation_count": 0, "reference_count": 31, "type": "article", "publisher": "Nature Publishing Group", "pages": "1--9", "metadata_source": "crossref", "confidence_score": 0.825, "quality_score": 100 } }, { "id": "bibtex:van-der-Linden2026-jt", "title": "Prebunking misinformation techniques in social media feeds: Results from an Instagram field study", "content_text": "Boosting psychological defences against misleading content online is an active area of research, but transition from the lab to real-world uptake remains a challenge. We developed a 19-second prebunking video about emotionally manipulative content and showed it as a Story Feed ad to N = 375,597 Instagram users in the United Kingdom. Using an innovative method leveraging Instagram’s quiz functionality (N = 806), we found that treatment group users were 21 percentage points better than controls at...", "date_published": "2026-01-22T00:00:00Z", "_discovery_date": "2026-02-09T06:52:57.031866Z", "url": "https://doi.org/10.37016/mr-2020-193", "external_url": "https://doi.org/10.37016/mr-2020-193", "authors": [ { "name": "Sander van der Linden" }, { "name": "Debra Louison-Lavoy" }, { "name": "Nicholas Blazer" }, { "name": "Nancy S. Noble" }, { "name": "Jon Roozenbeek" } ], "tags": [ "Article", "Harvard Kennedy School Misinformation Review" ], "content_html": "

Abstract

Boosting psychological defences against misleading content online is an active area of research, but transition from the lab to real-world uptake remains a challenge. We developed a 19-second prebunking video about emotionally manipulative content and showed it as a Story Feed ad to N = 375,597 Instagram users in the United Kingdom. Using an innovative method leveraging Instagram’s quiz functionality (N = 806), we found that treatment group users were 21 percentage points better than controls at identifying manipulation in a news headline, with effects persisting for five months. Treated users were also more likely to click on a link to learn more. We outline how inoculation campaigns can be scaled in real-world social media feeds.


", "_academic": { "doi": "10.37016/mr-2020-193", "citation_count": 0, "reference_count": 49, "type": "article", "metadata_source": "crossref", "confidence_score": 0.84, "quality_score": 100 } }, { "id": "bibtex:Graham2026-fb", "title": "On the Internet no-one knows you’re not a bot: ‘Botting’ on Reddit as participatory culture", "content_text": "Repetitive online communication is often labelled a ‘bot problem’ by platforms, policymakers and users. However, repetitive posting does not exclusively indicate automation; humans also engage in bot-like posting for various purposes. We adopt the term ‘botting’ to describe repetitive posting enacted through manual, semi-automated, or fully automated means. While emerging research has linked manual botting practices to commercial or fame-seeking motivations, we extend this scholarship by examini...", "date_published": "2026-02-04T00:00:00Z", "_discovery_date": "2026-02-08T20:48:49.624050Z", "url": "https://doi.org/10.1177/14614448251409210", "external_url": "https://doi.org/10.1177/14614448251409210", "authors": [ { "name": "Timothy Graham" }, { "name": "Dominique Carlon" } ], "tags": [ "Article", "New Media & Society" ], "content_html": "

Abstract

Repetitive online communication is often labelled a ‘bot problem’ by platforms, policymakers and users. However, repetitive posting does not exclusively indicate automation; humans also engage in bot-like posting for various purposes. We adopt the term ‘botting’ to describe repetitive posting enacted through manual, semi-automated, or fully automated means. While emerging research has linked manual botting practices to commercial or fame-seeking motivations, we extend this scholarship by examining botting on Reddit – a pseudonymous platform that lacks the affordances typically associated with monetisation or personal branding. Through a mixed-methods analysis, we examine a case study in which mass-scale, repetitive posting of the mushroom emoji emerged as ‘in-group’ behaviour within Reddit’s participatory culture, prompting a performative counterpublic response. Our findings challenge the binary between human and automated posting, and underscore the importance of situating research on AI-generated and automated content within the cultural and contextual frameworks that shape its production and reception.

Details

Links

DOI

", "_academic": { "doi": "10.1177/14614448251409210", "citation_count": 0, "reference_count": 71, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Rohrbach2026-rc", "title": "Conspiracy in the making: The role of journalistic strategies in the formation of new conspiracy beliefs", "content_text": "Published in Journal. Stud. | Year: 2026 | Authors: Rohrbach, Tobias, Valli, Chiara", "date_published": "2026-02-15T00:00:00Z", "_discovery_date": "2026-02-08T18:59:43.466569Z", "_date_estimated": true, "url": "https://doi.org/10.1080/1461670x.2026.2623882", "external_url": "https://doi.org/10.1080/1461670x.2026.2623882", "authors": [ { "name": "Tobias Rohrbach" }, { "name": "Chiara Valli" } ], "tags": [ "Article", "Journalism Studies" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/1461670x.2026.2623882", "citation_count": 0, "reference_count": 78, "type": "article", "publisher": "Informa UK Limited", "pages": "1--21", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Szabo2026-rd", "title": "Conversational Inoculation to enhance resistance to misinformation", "content_text": "Proliferation of misinformation is a globally acknowledged problem. Cognitive Inoculation helps build resistance to different forms of persuasion, such as misinformation. We investigate Conversational Inoculation, a method to help people build resistance to misinformation through dynamic conversations with a chatbot. We built a Web-based system to implement the method, and conducted a within-subject user experiment to compare it with two traditional inoculation methods. Our results validate Conv...", "date_published": "2026-01-29T00:00:00Z", "_discovery_date": "2026-02-08T18:59:43.466543Z", "url": "https://doi.org/10.1145/3772318.379095", "external_url": "https://doi.org/10.1145/3772318.379095", "authors": [ { "name": "Dániel Szabó" }, { "name": "Chi-Lan Yang" }, { "name": "Aku Visuri" }, { "name": "Jonas Oppenlaender" }, { "name": "Bharathi Sekar" }, { "name": "Koji Yatani" }, { "name": "Simo Hosio" } ], "tags": [ "cs.HC", "Article", "arXiv [cs.HC]" ], "content_html": "

Abstract

Proliferation of misinformation is a globally acknowledged problem. Cognitive Inoculation helps build resistance to different forms of persuasion, such as misinformation. We investigate Conversational Inoculation, a method to help people build resistance to misinformation through dynamic conversations with a chatbot. We built a Web-based system to implement the method, and conducted a within-subject user experiment to compare it with two traditional inoculation methods. Our results validate Conversational Inoculation as a viable novel method, and show how it was able to enhance participants' resistance to misinformation. A qualitative analysis of the conversations between participants and the chatbot reveals independence and trust as factors that boosted the efficiency of Conversational Inoculation, and friction of interaction as a factor hindering it. We discuss the opportunities and challenges of using Conversational Inoculation to combat misinformation. Our work contributes a timely investigation and a promising research direction in scalable ways to combat misinformation.

Details

Links

DOI | arXiv | PDF

", "_academic": { "doi": "10.1145/3772318.379095", "open_access": true, "type": "article", "subjects": [ "cs.HC" ], "metadata_source": "arxiv", "confidence_score": 0.72, "quality_score": 100 } }, { "id": "bibtex:Bollenbacher2026-vz", "title": "Effects of antivaccine tweets on COVID-19 vaccinations, cases, and deaths", "content_text": "Abstract Despite the wide availability of COVID-19 vaccines in the United States and their effectiveness in reducing hospitalizations and mortality during the pandemic, a majority of Americans chose not to be vaccinated during 2021. Recent work shows that vaccine misinformation affects intentions in controlled settings, but does not link it to real-world vaccination rates. Here, we present observational evidence of a causal relationship between exposure to antivaccine content and vaccination rat...", "date_published": "2026-01-07T00:00:00Z", "_discovery_date": "2026-02-07T01:01:16.924569Z", "url": "https://doi.org/10.1140/epjds/s13688-025-00606-1", "external_url": "https://doi.org/10.1140/epjds/s13688-025-00606-1", "authors": [ { "name": "John Bollenbacher" }, { "name": "Filippo Menczer" }, { "name": "John Bryden" } ], "tags": [ "Article", "EPJ Data Science" ], "content_html": "

Abstract

Despite the wide availability of COVID-19 vaccines in the United States and their effectiveness in reducing hospitalizations and mortality during the pandemic, a majority of Americans chose not to be vaccinated during 2021. Recent work shows that vaccine misinformation affects intentions in controlled settings, but does not link it to real-world vaccination rates. Here, we present observational evidence of a causal relationship between exposure to antivaccine content and vaccination rates, and estimate the size of this effect. We present a compartmental epidemic model that includes vaccination, vaccine hesitancy, and exposure to antivaccine content. We fit the model to data to determine that a geographical pattern of exposure to online antivaccine content across US counties explains reduced vaccine uptake in the same counties. We find observational evidence that exposure to antivaccine content on Twitter caused about 14,000 people to refuse vaccination between February and August 2021 in the US, resulting in at least 545 additional cases and 8 additional deaths. This work provides a methodology for linking online speech with offline epidemic outcomes. Our findings should inform social media moderation policy as well as public health interventions.

Details

Links

DOI

", "_academic": { "doi": "10.1140/epjds/s13688-025-00606-1", "citation_count": 0, "reference_count": 44, "type": "article", "publisher": "Springer Science and Business Media LLC", "volume": "15", "pages": "12", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:Lewandowsky2026-ob", "title": "Internet platforms must be held accountable for their actions", "content_text": "Multiple independent research institutions have recently and repeatedly sounded the alarm that democracy is in retreat worldwide. In the United States, a British academic has painstakingly recorded more than 2,300 actions by the second Trump administration that she believes “echo those of authoritarian regimes and may pose a threat to American democracy.” The reasons for these trends are complex and manifold, but several factors stand out. First, as my colleagues and I showed in a historical ana...", "date_published": "2026-02-05T00:00:00Z", "_discovery_date": "2026-02-06T12:31:50.153250Z", "url": "https://doi.org/10.1126/science.aee9835", "external_url": "https://doi.org/10.1126/science.aee9835", "authors": [ { "name": "Stephan Lewandowsky" } ], "tags": [ "Science", "Article" ], "content_html": "

Abstract

Multiple independent research institutions have recently and repeatedly sounded the alarm that democracy is in retreat worldwide. In the United States, a British academic has painstakingly recorded more than 2,300 actions by the second Trump administration that she believes “echo those of authoritarian regimes and may pose a threat to American democracy.” The reasons for these trends are complex and manifold, but several factors stand out. First, as my colleagues and I showed in a historical analysis of several countries that experienced democratic backsliding during the last century, in nearly all cases, backsliding was triggered by political elites—mainly elected politicians, but also their corporate allies and sympathetic media—violating democratic norms, such as honesty, in the quest to expand their power.

Details

Links

DOI

", "_academic": { "doi": "10.1126/science.aee9835", "citation_count": 0, "reference_count": 0, "type": "article", "publisher": "American Association for the Advancement of Science", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Elfes2026-jb", "title": "On narrative: The rhetorical mechanisms of online polarisation", "content_text": "Polarisation research has demonstrated how people cluster in homogeneous groups with opposing opinions. However, this effect emerges not only through interaction between people, limiting communication between groups, but also between narratives, shaping opinions and partisan identities. Yet, how polarised groups collectively construct and negotiate opposing interpretations of reality, and whether narratives move between groups despite limited interactions, remains unexplored. To address this gap...", "date_published": "2026-01-12T00:00:00Z", "_discovery_date": "2026-02-04T11:06:02.329326Z", "url": "http://arxiv.org/abs/2601.07398v1", "external_url": "http://arxiv.org/abs/2601.07398v1", "authors": [ { "name": "Jan Elfes" }, { "name": "Marco Bastos" }, { "name": "Luca Maria Aiello" } ], "tags": [ "Article", "cs.CL", "cs.CY", "arXiv [cs.CY]", "cs.SI" ], "content_html": "

Abstract

Polarisation research has demonstrated how people cluster in homogeneous groups with opposing opinions. However, this effect emerges not only through interaction between people, limiting communication between groups, but also between narratives, shaping opinions and partisan identities. Yet, how polarised groups collectively construct and negotiate opposing interpretations of reality, and whether narratives move between groups despite limited interactions, remains unexplored. To address this gap, we formalise the concept of narrative polarisation and demonstrate its measurement in 212 YouTube videos and 90,029 comments on the Israeli-Palestinian conflict. Based on structural narrative theory and implemented through a large language model, we extract the narrative roles assigned to central actors in two partisan information environments. We find that while videos produce highly polarised narratives, comments significantly reduce narrative polarisation, harmonising discourse on the surface level. However, on a deeper narrative level, recurring narrative motifs reveal additional differences between partisan groups.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CY", "cs.CL", "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.7749999999999999, "quality_score": 100 } }, { "id": "bibtex:Hollingshead2026-vx", "title": "Same platform, different stories: TikTok and the battle over immigration narratives", "content_text": "In an era defined by overlapping global crises, immigration has become a key fault line in what scholars term a “polycrisis.” Within this context, social media platforms serve as digital battlegrounds for competing narratives about immigration, with TikTok occupying a distinct and understudied niche. This article examines how immigration-related content in Canada is framed on TikTok and how the platform’s logic of mimesis and interactivity, grounded in its affordances, shape immigration discours...", "date_published": "2026-01-28T00:00:00Z", "_discovery_date": "2026-02-03T13:55:44.669422Z", "url": "https://doi.org/10.17645/mac.11409", "external_url": "https://doi.org/10.17645/mac.11409", "authors": [ { "name": "William Hollingshead" }, { "name": "Anatoliy Gruzd" }, { "name": "Philip Mai" } ], "tags": [ "affordances", "Article", "immigration", "digital social resilience", "framing", "Media and Communication", "Polycrisis", "TikTok" ], "content_html": "

Abstract

In an era defined by overlapping global crises, immigration has become a key fault line in what scholars term a “polycrisis.” Within this context, social media platforms serve as digital battlegrounds for competing narratives about immigration, with TikTok occupying a distinct and understudied niche. This article examines how immigration-related content in Canada is framed on TikTok and how the platform’s logic of mimesis and interactivity, grounded in its affordances, shape immigration discourse. From a dataset of 5,305 public TikTok videos containing immigration-related terms and hashtags, we selected a sample of 344 English-language videos posted in 2025, each with over 100,000 plays and likely shown to Canadian users. Through a mixed-methods content analysis, we found that, contrary to expectations, the content leaned toward positive portrayals of immigration, accounting for 41% of the sample. Furthermore, despite expressing differing perspectives on immigration, users used TikTok’s affordances in comparable ways. That is, the same affordances that can support immigrants’ information seeking and sense of belonging through practical guidance and relatable storytelling, respectively, can be weaponized to amplify xenophobia by way of manipulated statistics and racist humour performed in skits and AI-generated videos. This highlights how TikTok’s affordances can simultaneously support digital inclusion and community building while also enabling exclusion and hostility. The findings, although rooted in Canada, hold broader relevance for understanding how short-video platforms mediate contentious issues across digitally connected societies.

Details

Links

DOI

", "_academic": { "doi": "10.17645/mac.11409", "citation_count": 0, "reference_count": 61, "type": "article", "publisher": "Cogitatio", "volume": "14", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:Kim2026-wg", "title": "Targeted digital voter suppression efforts likely decrease voter turnout", "content_text": "In light of continued foreign interference in the US presidential elections, where undisclosed digital voter suppression advertising has been deployed, this study addresses the questions of who is exposed to these ads and whether and how such exposure influences voter turnout. Using a sample that resembles the US voting-age population, the study directly measures each individual’s ad exposure through a user-level real-time ad tracking tool, which is merged with the same individual’s survey respo...", "date_published": "2026-02-03T00:00:00Z", "_discovery_date": "2026-02-02T06:36:34.699078Z", "url": "https://doi.org/10.1073/pnas.2519944123", "external_url": "https://doi.org/10.1073/pnas.2519944123", "authors": [ { "name": "Young Mie Kim" }, { "name": "Ross Dahlke" }, { "name": "Hyebin Song" }, { "name": "Richard Heinrich" } ], "tags": [ "Article", "advertising", "Proceedings of the National Academy of Sciences", "voter suppression", "foreign election interference", "social media", "microtargeting" ], "content_html": "

Abstract

In light of continued foreign interference in the US presidential elections, where undisclosed digital voter suppression advertising has been deployed, this study addresses the questions of who is exposed to these ads and whether and how such exposure influences voter turnout. Using a sample that resembles the US voting-age population, the study directly measures each individual’s ad exposure through a user-level real-time ad tracking tool, which is merged with the same individual’s survey responses to identify voter suppression content and its targeting patterns. By further matching individual-level exposure to voter suppression ads with the same individual’s verified voter turnout records, the study estimates the effects of voter suppression on actual turnout. The study findings from the 2016 US Presidential Election reveal clear geo-racial targeting patterns in voter suppression: non-Whites residing in the racial minority counties of battleground states were exposed to substantially more voter suppression ads than their counterparts. Moreover, exposure to voter suppression ads was associated with decreases in voter turnout at the population level, albeit small. The sharpest declines were observed among non-Whites residing in minority counties of battleground states, suggesting that the intensified turnout suppression among the targeted segments of the electorate may have played a role in shaping turnout.

Details

Links

DOI

", "_academic": { "doi": "10.1073/pnas.2519944123", "citation_count": 0, "reference_count": 55, "type": "article", "volume": "123", "pages": "e2519944123", "metadata_source": "crossref", "confidence_score": 0.83, "quality_score": 100 } }, { "id": "bibtex:Cazzamatta2026-lo", "title": "From moderation to chaos: Meta’s fact-checking and the battle over truth and free speech", "content_text": "Following Zuckerberg’s decision to terminate third-party fact-checking and his association of fact-checkers with censorship, this article examines how platforms respond to falsehoods post-debunking and explores fact-checkers’ views on effective content moderation, particularly regarding content removal or reduced visibility. A comparative content analysis of 2053 debunking articles by 16 Meta partners across 8 European and Latin American countries reveals that most false content was labeled or r...", "date_published": "2026-01-27T00:00:00Z", "_discovery_date": "2026-02-01T07:29:57.124171Z", "url": "https://doi.org/10.1177/14614448251413687", "external_url": "https://doi.org/10.1177/14614448251413687", "authors": [ { "name": "Regina Cazzamatta" } ], "tags": [ "Article", "New Media & Society" ], "content_html": "

Abstract

Following Zuckerberg’s decision to terminate third-party fact-checking and his association of fact-checkers with censorship, this article examines how platforms respond to falsehoods post-debunking and explores fact-checkers’ views on effective content moderation, particularly regarding content removal or reduced visibility. A comparative content analysis of 2053 debunking articles by 16 Meta partners across 8 European and Latin American countries reveals that most false content was labeled or remained online, with deletion occurring in approximately 30% of cases—though it remains unclear whether the removal was carried out by Facebook or by the original spreaders. In addition to 30 expert interviews, the study finds that fact-checkers prioritize counter-speech and transparency, rejecting a permissive “anything goes” stance. Some support removals in cases involving incitement to violence, illegality, or harmful health misinformation. Most agree that freedom of expression should not guarantee algorithmic amplification. Concerns were also raised about the politicization and potential manipulation of Community Notes.

Details

Links

DOI

", "_academic": { "doi": "10.1177/14614448251413687", "citation_count": 0, "reference_count": 47, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Balluff2026-bv", "title": "The Austrian political advertisement scandal: Patterns of “journalism for sale”", "content_text": "Mounting concern surrounds the influence of political actors on journalism, especially as media outlets face increasing financial pressures. These circumstances can give rise to instances of media capture, a mutually corrupting relationship between political actors and media organizations. However, empirical evidence substantiating such mechanisms and their consequences remains limited, particularly in the context of Western democracies. This chapter investigates a recent case in which a former ...", "date_published": "2026-01-15T00:00:00Z", "_discovery_date": "2026-01-27T10:52:07.596221Z", "_date_estimated": true, "url": "https://doi.org/10.1177/19401612241285672", "external_url": "https://doi.org/10.1177/19401612241285672", "authors": [ { "name": "Paul Balluff" }, { "name": "Jakob-Moritz Eberl" }, { "name": "Sarina Joy Oberhänsli" }, { "name": "Jana Bernhard-Harrer" }, { "name": "Hajo G. Boomgaarden" }, { "name": "Andreas Fahr" }, { "name": "Martin Huber" } ], "tags": [ "Article", "The International Journal of Press/Politics" ], "content_html": "

Abstract

Mounting concern surrounds the influence of political actors on journalism, especially as media outlets face increasing financial pressures. These circumstances can give rise to instances of media capture, a mutually corrupting relationship between political actors and media organizations. However, empirical evidence substantiating such mechanisms and their consequences remains limited, particularly in the context of Western democracies. This chapter investigates a recent case in which a former Austrian chancellor allegedly colluded with a tabloid newspaper to receive better news coverage in exchange for increased ad placements by government institutions. We employ automated content analysis to investigate political news articles from seventeen prominent Austrian news outlets spanning 2012–2021 (n = 222,659). Adopting a difference-in-differences approach, we find a substantial increase in media visibility of the former Austrian chancellor within the news outlet that is alleged to have received bribes, as well as a decrease in favorability for challenger candidates. Although this study does not aim to prove or disprove the involvement of specific political actors or media organizations in unethical or illegal activities, it introduces an innovative method for detecting unusual patterns in media reporting. Findings are discussed in the context of current threats to media independence and underscore the crucial need to protect journalistic integrity and ensure unbiased information for the public.

Details

Links

DOI

", "_academic": { "doi": "10.1177/19401612241285672", "citation_count": 5, "reference_count": 99, "type": "article", "publisher": "SAGE Publications", "volume": "31", "pages": "91--117", "metadata_source": "crossref", "confidence_score": 0.8176470588235294, "quality_score": 100 } }, { "id": "bibtex:Gillespie2026-aa", "title": "AI red-teaming is a sociotechnical problem", "content_text": "As generative AI technologies find more and more real-world applications, the importance of testing their performance and safety is paramount. “Red-teaming” has quickly become the primary approach to testing AI models—prioritized by AI companies, and enshrined in AI policy and regulation. Members of red teams act as adversaries, probing AI systems to test their safety mechanisms and uncover vulnerabilities. Yet we know far too little about this work or its implications. In this article, we highl...", "date_published": "2026-01-21T00:00:00Z", "_discovery_date": "2026-01-27T06:19:49.072567Z", "url": "https://doi.org/10.1145/3731657", "external_url": "https://doi.org/10.1145/3731657", "authors": [ { "name": "Tarleton Gillespie" }, { "name": "Ryland Shaw" }, { "name": "Mary L. Gray" }, { "name": "Jina Suh" } ], "tags": [ "Article", "Communications of the ACM" ], "content_html": "

Abstract

As generative AI technologies find more and more real-world applications, the importance of testing their performance and safety is paramount. “Red-teaming” has quickly become the primary approach to testing AI models—prioritized by AI companies, and enshrined in AI policy and regulation. Members of red teams act as adversaries, probing AI systems to test their safety mechanisms and uncover vulnerabilities. Yet we know far too little about this work or its implications. In this article, we highlight the importance of understanding the values and assumptions behind red-teaming, the labor arrangements involved, and the psychological impacts on red-teamers, drawing insights from lessons learned around the work of content moderation. Red-teaming should be a deeply interdisciplinary concern. To avoid repeating the mistakes of the recent past, we call for a coordinated network of scholars, from the full range of the computational and social sciences, to study the technical, social, critical, and policy dimensions of red-teaming and of the emerging sociotechnical system that is AI.

Details

Links

DOI

", "_academic": { "doi": "10.1145/3731657", "citation_count": 0, "reference_count": 42, "type": "article", "publisher": "Association for Computing Machinery (ACM)", "metadata_source": "crossref", "confidence_score": 0.83, "quality_score": 100 } }, { "id": "bibtex:Schroeder2026-im", "title": "How malicious AI swarms can threaten democracy", "content_text": "Advances in AI portend a new era of sophisticated disinformation operations. While individual AI systems already create convincing—and at times misleading—information, an imminent development is the emergence of malicious AI swarms. These systems can coordinate covertly, infiltrate communities, evade traditional detectors, and run continuous A/B tests, with round-the-clock persistence. The result can include fabricated grassroots consensus, fragmented shared reality, mass harassment, voter micro...", "date_published": "2026-01-15T00:00:00Z", "_discovery_date": "2026-01-27T06:19:49.072533Z", "_date_estimated": true, "url": "https://doi.org/10.31219/osf.io/qm9yk_v3", "external_url": "https://doi.org/10.31219/osf.io/qm9yk_v3", "authors": [ { "name": "Daniel Thilo Schroeder" }, { "name": "Meeyoung Cha" }, { "name": "Andrea Baronchelli" }, { "name": "Nick Bostrom" }, { "name": "Nicholas Christakis" }, { "name": "David Garcia" }, { "name": "Amit Goldenberg" }, { "name": "Yara Kyrychenko" }, { "name": "Kevin Leyton-Brown" }, { "name": "Nina Lutz" }, { "name": "Gary Marcus" }, { "name": "Filippo Menczer" }, { "name": "Gordon Pennycook" }, { "name": "David Gertler Rand" }, { "name": "Frank Schweitzer" }, { "name": "Christopher Summerfield" }, { "name": "Audrey Tang" }, { "name": "Jay Joseph Van Bavel" }, { "name": "Sander van der Linden" }, { "name": "Dawn Song" }, { "name": "Jonas R. Kunst" } ], "tags": [ "Science", "Article" ], "content_html": "

Abstract

Advances in AI portend a new era of sophisticated disinformation operations. While individual AI systems already create convincing—and at times misleading—information, an imminent development is the emergence of malicious AI swarms. These systems can coordinate covertly, infiltrate communities, evade traditional detectors, and run continuous A/B tests, with round-the-clock persistence. The result can include fabricated grassroots consensus, fragmented shared reality, mass harassment, voter micro-suppression or mobilization, contamination of AI training data, and erosion of institutional trust. With increasing vulnerabilities in democratic processes worldwide, we urge a three-pronged response: (1) platform-side defenses—always-on swarm-detection dashboards, pre-election high-fidelity swarm-simulation stress-tests, transparency audits, and optional client-side “AI shields” for users; (2) model-side safeguards—standardized persuasion-risk tests, provenance-authenticating passkeys, and watermarking; and (3) system-level oversight—a UN-backed AI Influence Observatory.

Details

Links

DOI

", "_academic": { "doi": "10.31219/osf.io/qm9yk_v3", "citation_count": 0, "reference_count": 0, "type": "article", "volume": "391", "pages": "354--357", "metadata_source": "crossref", "confidence_score": 0.8113207547169811, "quality_score": 100 } }, { "id": "bibtex:Hameleers2026-mc", "title": "Beyond textual disinformation: Comparing the effects of textual disinformation to AI-generated and video-based visual disinformation across different issues", "content_text": "Although visual and AI-generated disinformation have been associated with alarming political consequences, we currently lack a clear empirical understanding of the effects of different forms of visual disinformation. Against this background, we rely on a pre-registered experimental study in the United States ( N = 982) in which we exposed participants to various modes of textual and visual disinformation on two different issues: The disappearance of flight MH370 and the Russian invasion of Ukrai...", "date_published": "2026-01-23T00:00:00Z", "_discovery_date": "2026-01-27T05:48:02.219127Z", "url": "https://doi.org/10.1177/14614448251409208", "external_url": "https://doi.org/10.1177/14614448251409208", "authors": [ { "name": "Michael Hameleers" }, { "name": "Toni van der Meer" } ], "tags": [ "Article", "New Media & Society" ], "content_html": "

Abstract

Although visual and AI-generated disinformation have been associated with alarming political consequences, we currently lack a clear empirical understanding of the effects of different forms of visual disinformation. Against this background, we rely on a pre-registered experimental study in the United States (N = 982) in which we exposed participants to various modes of textual and visual disinformation on two different issues: The disappearance of flight MH370 and the Russian invasion of Ukraine. Findings show that, for MH370, there was no difference in credibility between textual, AI-generated, or video-based disinformation. Yet, for the Russian invasion of Ukraine, video-based disinformation was perceived as more credible than textual or image-based disinformation. Our findings indicate that the consequences of visual disinformation are context-bound: Especially in the case of polarizing issues, the out-of-context placement of videos can serve as a plausible form of deceptive evidence.

Details

Links

DOI

", "_academic": { "doi": "10.1177/14614448251409208", "citation_count": 0, "reference_count": 30, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.9285714285714285, "quality_score": 100 } }, { "id": "bibtex:Di-Domenico2026-zq", "title": "don't you know that you're toxic? how influencer‐driven misinformation fuels online toxicity", "content_text": "ABSTRACT Research on misinformation has focused on message content and cognitive bias, overlooking how source type shapes toxic engagement. This study addresses that gap by showing that influencer‐driven misinformation does not merely increase toxicity: it reconfigures its nature and persistence through relational and social influence mechanisms. Drawing on Source Credibility, Parasocial Interaction, and Social Influence theories, we analyse 101 brand‐related misinformation posts (48,821 comment...", "date_published": "2026-01-19T00:00:00Z", "_discovery_date": "2026-01-25T09:11:59.349818Z", "url": "https://doi.org/10.1002/mar.70106", "external_url": "https://doi.org/10.1002/mar.70106", "authors": [ { "name": "Giandomenico Di Domenico" }, { "name": "Federico Mangió" }, { "name": "Denitsa Dineva" } ], "tags": [ "Article", "symbolic brand attacks", "Psychology & Marketing", "social media influencers", "toxic online discourse", "audience polarization", "digital misinformation" ], "content_html": "

Abstract

Research on misinformation has focused on message content and cognitive bias, overlooking how source type shapes toxic engagement. This study addresses that gap by showing that influencer‐driven misinformation does not merely increase toxicity: it reconfigures its nature and persistence through relational and social influence mechanisms. Drawing on Source Credibility, Parasocial Interaction, and Social Influence theories, we analyse 101 brand‐related misinformation posts (48,821 comments) across major platforms using a mixed‐method design combining automated toxicity detection, topic modeling, and thematic analysis. Results reveal that influencers amplify toxicity under high engagement, sociopolitical salience, and low pseudonymity conditions, producing distinct patterns such as flame‐bait firestorms and toxic debunking. We identify two influencer‐specific mechanisms: brand‐related misinformation legitimation and community enmeshment, that sustain toxic echo chambers by converting credibility and parasocial bonds into collective antagonism. These findings advance marketing theory by reframing toxicity as a source‐amplified, relational phenomenon, and inform ecosystem‐level interventions structured around publishers, platforms, and people to mitigate influencer‐driven harm.

Details

Links

DOI

", "_academic": { "doi": "10.1002/mar.70106", "citation_count": 0, "reference_count": 139, "type": "article", "publisher": "Wiley", "metadata_source": "crossref", "confidence_score": 0.7875, "quality_score": 100 } }, { "id": "bibtex:Bak-Coleman2026-mk", "title": "Industry influence in high-profile social media research", "content_text": "To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships...", "date_published": "2026-01-16T00:00:00Z", "_discovery_date": "2026-01-22T06:56:36.937566Z", "url": "http://arxiv.org/abs/2601.11507v1", "external_url": "http://arxiv.org/abs/2601.11507v1", "authors": [ { "name": "Joseph Bak-Coleman" }, { "name": "Jevin West" }, { "name": "Cailin O'Connor" }, { "name": "Carl T. Bergstrom" } ], "tags": [ "cs.SI", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships with industry. Undisclosed ties to industry are common not just among authors, but among reviewers and academic editors during manuscript evaluation. Further, industry-tied research garners more attention within the academy, among policymakers, on social media, and in the news. Finally, we find evidence that industry ties are associated with a topical focus away from impacts of platform-scale features. Together, these findings suggest industry influence in social media research is extensive, impactful, and often opaque. Going forward there is a need to strengthen disclosure norms and implement policies to ensure the visibility of independent research, and the integrity of industry supported research.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.7272727272727272, "quality_score": 100 } }, { "id": "bibtex:De2026-ld", "title": "The discursive flexibility of changecraft: Platform change discourse in Meta, TikTok, YouTube, and X", "content_text": "Social media platforms evolve rapidly. While platform studies have often analyzed specific policy or feature changes, there remains a lack of shared language to conceptualize how platforms themselves represent such changes. In this article, we analyze public communications from Meta, YouTube, X, and TikTok to examine how platforms construct and justify change. We explicate platform evolution as a technical but also deeply discursive process. Platforms frame their transformations in interesting w...", "date_published": "2026-01-15T00:00:00Z", "_discovery_date": "2026-01-18T13:03:55.454361Z", "_date_estimated": true, "url": "https://doi.org/10.1177/29768624251408212", "external_url": "https://doi.org/10.1177/29768624251408212", "authors": [ { "name": "Ankolika De" }, { "name": "Kelley Cotter" } ], "tags": [ "Article", "Platforms & Society" ], "content_html": "

Abstract

Social media platforms evolve rapidly. While platform studies have often analyzed specific policy or feature changes, there remains a lack of shared language to conceptualize how platforms themselves represent such changes. In this article, we analyze public communications from Meta, YouTube, X, and TikTok to examine how platforms construct and justify change. We explicate platform evolution as a technical but also deeply discursive process. Platforms frame their transformations in interesting ways, especially if these shifts consolidate power or deepen user dependence. We introduce the concept of changecraft : the strategic discursive practices through which platforms manage, legitimize, and normalize change. Changecraft encompasses the rendering of infrastructural shifts as visible, the framing of ideological pivots as continuity, and the deployment of patchworked updates to subtly reorient platform futures. This framework provides scholars a way to interrogate platform change not just by what changes but by how platforms seek to make change meaningful and acceptable to their publics.

Details

Links

DOI

", "_academic": { "doi": "10.1177/29768624251408212", "citation_count": 0, "reference_count": 57, "type": "article", "publisher": "SAGE Publications", "volume": "3", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Emilio2026-ik", "title": "The Generative AI Paradox: GenAI and the erosion of trust, the corrosion of information verification, and the demise of truth", "content_text": "Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as \"deepfakes\" or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift: GenAI enables synthetic realities; coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufac...", "date_published": "2026-01-01T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "url": "http://arxiv.org/abs/2601.00306v1", "external_url": "http://arxiv.org/abs/2601.00306v1", "authors": [ { "name": "Emilio Ferrara" } ], "tags": [ "Article", "cs.CY", "cs.AI", "arXiv [cs.CY]", "cs.HC" ], "content_html": "

Abstract

Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as "deepfakes" or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift: GenAI enables synthetic realities; coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks, (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion), and (iv) synthesizes recent risk realizations (2023-2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CY", "cs.AI", "cs.HC" ], "metadata_source": "arxiv", "confidence_score": 0.7999999999999999, "quality_score": 100 } }, { "id": "bibtex:Iris2026-pg", "title": "Cross-national evidence of disproportionate media visibility for the Radical Right in the 2024 European elections", "content_text": "This study provides a systematic comparative analysis of media visibility of different political families during the 2024 European Parliament elections. We analyzed close to 21,500 unique news from leading national outlets in Austria, Germany, Ireland, Poland, and Portugal - countries with diverse political contexts and levels of media trust. Combining computational and human classification, we identified parties, political leaders, and groups from the article's URLs and titles, and clustered th...", "date_published": "2026-01-09T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "url": "http://arxiv.org/abs/2601.05826v1", "external_url": "http://arxiv.org/abs/2601.05826v1", "authors": [ { "name": "Íris Damião" }, { "name": "João Franco" }, { "name": "Mariana Silva" }, { "name": "Paulo Almeida" }, { "name": "Pedro C. Magalhães" }, { "name": "Joana Gonçalves-Sá" } ], "tags": [ "cs.CY", "Article", "arXiv [cs.CY]" ], "content_html": "

Abstract

This study provides a systematic comparative analysis of media visibility of different political families during the 2024 European Parliament elections. We analyzed close to 21,500 unique news from leading national outlets in Austria, Germany, Ireland, Poland, and Portugal - countries with diverse political contexts and levels of media trust. Combining computational and human classification, we identified parties, political leaders, and groups from the article's URLs and titles, and clustered them according to European Parliament political families and broad political leanings. Cross-country comparison shows that the Mainstream and the Radical Right were mentioned more often than the other political groups. Moreover, the Radical Right received disproportionate attention relative to electoral results (from 2019 or 2024) and electoral projections, particularly in Austria, Germany, and Ireland. This imbalance increased in the final weeks of the campaign, when media influence on undecided voters is greatest. Outlet-level analysis shows that coverage of right-leaning entities dominated across news sources, especially those generating the highest traffic, suggesting a structural rather than outlet-specific pattern. Media visibility is a central resource, and this systematic mapping of online coverage highlights how traditional media can contribute to structural asymmetries in democratic competition.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CY" ], "metadata_source": "arxiv", "confidence_score": 0.7214285714285714, "quality_score": 100 } }, { "id": "bibtex:Dubey2026-bl", "title": "Investigating perceived trust and utility of balanced news chatbots among individuals with varying conspiracy beliefs", "content_text": "Published in Comput. Human Behav. | Year: 2026 | Authors: Dubey, Shreya, Ketelaar, Paul E, Dingler, Tilman, Peetz, Hannah K, van Schie, Hein T", "date_published": "2026-01-15T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1016/j.chb.2026.108920", "external_url": "https://doi.org/10.1016/j.chb.2026.108920", "authors": [ { "name": "Shreya Dubey" }, { "name": "Paul E. Ketelaar" }, { "name": "Tilman Dingler" }, { "name": "Hannah K. Peetz" }, { "name": "Hein T. van Schie" } ], "tags": [ "Article", "Computers in Human Behavior" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1016/j.chb.2026.108920", "citation_count": 0, "reference_count": 65, "type": "article", "publisher": "Elsevier BV", "pages": "108920", "metadata_source": "crossref", "confidence_score": 0.8374999999999999, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Voelkel2026-lc", "title": "A registered report megastudy on the persuasiveness of the most-cited climate messages", "content_text": "It is important to understand how persuasive the most-cited climate change messaging strategies are. In five replication studies, we found limited evidence of persuasive effects of three highly cited strategies (N=3,216). We then conducted a registered report megastudy (N=13,544) testing the effects of the 10 most-cited climate change messaging strategies on Americans’ pro-environmental attitudes and behavior. Six messages significantly affected multiple preregistered attitudes, with effects ran...", "date_published": "2026-01-15T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.31235/osf.io/xwceg_v2", "external_url": "https://doi.org/10.31235/osf.io/xwceg_v2", "authors": [ { "name": "Jan G. Voelkel" }, { "name": "Ashwini Ashokkumar" }, { "name": "Adina T. Abeles" }, { "name": "Jarret Crawford" }, { "name": "Kylie Fuller" }, { "name": "Chrystal Redekopp" }, { "name": "Renata Bongiorno" }, { "name": "Troy H. Campbell" }, { "name": "Ullrich K. H. Ecker" }, { "name": "Matthew Feinberg" }, { "name": "P. Sol Hart" }, { "name": "Matthew Hornsey" }, { "name": "John Jost" }, { "name": "Aaron Kay" }, { "name": "Anthony Leiserowitz" }, { "name": "Stephan Lewandowsky" }, { "name": "Edward Maibach" }, { "name": "Erik Nisbet" }, { "name": "Nicholas Pidgeon" }, { "name": "Alexa Spence" }, { "name": "Sander van der Linden" }, { "name": "Christopher V. Wolsko" }, { "name": "Jane K. Willenbring" }, { "name": "neil malhotra" }, { "name": "Robb Willer" } ], "tags": [ "Article", "Nat. 
Clim. Chang." ], "content_html": "

Abstract

It is important to understand how persuasive the most-cited climate change messaging strategies are. In five replication studies, we found limited evidence of persuasive effects of three highly cited strategies (N=3,216). We then conducted a registered report megastudy (N=13,544) testing the effects of the 10 most-cited climate change messaging strategies on Americans’ pro-environmental attitudes and behavior. Six messages significantly affected multiple preregistered attitudes, with effects ranging from one to four percentage points. Persuasiveness varied little across party lines, inconsistent with theories predicting heterogeneous effects for targeted messages. No message increased pro-environmental donations, suggesting costly behaviors are difficult to influence with messaging alone. Inference of mechanisms driving effects was limited as the most impactful messages influenced multiple mediating variables. Taken together, these results identify several persuasive strategies, while also highlighting the limits of short-form messages for increasing Americans’ support for action to address climate change.

Details

Links

DOI

", "_academic": { "doi": "10.31235/osf.io/xwceg_v2", "citation_count": 0, "reference_count": 0, "type": "article", "publisher": "Springer Science and Business Media LLC", "pages": "1--12", "metadata_source": "crossref", "confidence_score": 0.809375, "quality_score": 100 } }, { "id": "bibtex:Poliakoff2026-fa", "title": "Prigozhin’s propaganda team: The st Petersburg internet research agency (2013–2021)", "content_text": "Published in Eur. Asia. Stud. | Year: 2026 | Authors: Poliakoff, Serge, Toepfl, Florian", "date_published": "2026-01-02T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "url": "https://doi.org/10.1080/09668136.2025.2588334", "external_url": "https://doi.org/10.1080/09668136.2025.2588334", "authors": [ { "name": "Serge Poliakoff" }, { "name": "Florian Toepfl" } ], "tags": [ "Article", "Europe-Asia Studies" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/09668136.2025.2588334", "citation_count": 0, "reference_count": 51, "type": "article", "publisher": "Informa UK Limited", "pages": "1--22", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Arceneaux2026-xk", "title": "Social bots as agenda-builders: Evaluating the impact of algorithmic amplification on organizational messaging", "content_text": "Published in J. Publ. Relat. Res. | Year: 2026 | Authors: Arceneaux, Phillip, Anderson, Joshua, Lukito, Josephine, Shah, Mansi, Kiousis, Spiro", "date_published": "2026-01-04T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "url": "https://doi.org/10.1080/1062726x.2025.2606676", "external_url": "https://doi.org/10.1080/1062726x.2025.2606676", "authors": [ { "name": "Phillip Arceneaux" }, { "name": "Joshua Anderson" }, { "name": "Josephine Lukito" }, { "name": "Mansi Shah" }, { "name": "Spiro Kiousis" } ], "tags": [ "Article", "Journal of Public Relations Research" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/1062726x.2025.2606676", "citation_count": 0, "reference_count": 103, "type": "article", "publisher": "Informa UK Limited", "pages": "1--34", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Bouchafra2026-ts", "title": "‘My Europe Builds Walls’: A cross-platform visual analysis of the Sweden Democrats’ 2024 EU election campaign", "content_text": "This article examines the visual securitising discourse of Sweden Democrats (SD) through a qualitatively centred analysis of the party’s 2024 European Union (EU) election campaign and its official election slogan ‘My Europe Builds Walls: Against Immigration, Against Criminal Gangs, Against Islamists’. Through a comparative, cross-platform multimodal critical discourse analysis (MCDA) of SD’s posts on Facebook, X and TikTok, this article explores the differences in campaign content across platfor...", "date_published": "2026-01-12T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "url": "https://doi.org/10.1177/14614448251408336", "external_url": "https://doi.org/10.1177/14614448251408336", "authors": [ { "name": "Salma Bouchafra" }, { "name": "Mathilda Åkerlund" } ], "tags": [ "Article", "New Media & Society" ], "content_html": "

Abstract

This article examines the visual securitising discourse of Sweden Democrats (SD) through a qualitatively centred analysis of the party’s 2024 European Union (EU) election campaign and its official election slogan ‘My Europe Builds Walls: Against Immigration, Against Criminal Gangs, Against Islamists’. Through a comparative, cross-platform multimodal critical discourse analysis (MCDA) of SD’s posts on Facebook, X and TikTok, this article explores the differences in campaign content across platforms, and analyses how these differences provide insights into the party’s understanding of its audiences and the platforms’ respective functionalities. The analysis shows how SD leveraged platform functionalities to balance textual and visual features, repost content, and incorporate hyperlinks on Facebook and X. Using these features, the party posted text-laden, argumentative and seemingly informative posts, which are likely to appeal not only to the customary format of content on the platforms but also to its respective audiences. Yet, although SD had larger followings and much more well-established accounts on both Facebook and X, the party posted the majority of its campaign material on TikTok, primarily in the form of memes. These memes tended to include securitising clips of non-white men engaging in violent protests, vandalism and violence directed towards the local community and law enforcement. We discuss the role these memes play in the SD election campaign and the potential implications such content might have.

Details

Links

DOI

", "_academic": { "doi": "10.1177/14614448251408336", "citation_count": 0, "reference_count": 108, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Rieder2026-pp", "title": "The Tate-space on YouTube: Ambient ideology and the limits of platform moderation", "content_text": "This article investigates the persistence and transformation of Andrew Tate’s presence on YouTube following the removal of his official channels in August 2022. Combining two empirical approaches—a small-scale analysis of top-ranked videos from YouTube search results in 2022 and 2024, and a large-scale data set of over 112k videos—we examine how Tate-related content continues to circulate and how the platform moderates such material. Our findings show that Tate remains highly visible through a d...", "date_published": "2026-01-12T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "url": "https://doi.org/10.1177/14614448251409209", "external_url": "https://doi.org/10.1177/14614448251409209", "authors": [ { "name": "Bernhard Rieder" }, { "name": "Bastian August" }, { "name": "Brogan Latil" } ], "tags": [ "Article", "New Media & Society" ], "content_html": "

Abstract

This article investigates the persistence and transformation of Andrew Tate’s presence on YouTube following the removal of his official channels in August 2022. Combining two empirical approaches—a small-scale analysis of top-ranked videos from YouTube search results in 2022 and 2024, and a large-scale data set of over 112k videos—we examine how Tate-related content continues to circulate and how the platform moderates such material. Our findings show that Tate remains highly visible through a diffuse and decentralized network of actors who repackage his messaging into interviews, remixes, and YouTube-native formats. This configuration produces what we term the “Tate-space”: an ambient ideological environment where motivational rhetoric, aspirational masculinity, and far-right talking points converge. We find that YouTube’s substantial moderation efforts are outpaced by the speed and scale of recommendation-driven circulation and that deplatforming, while symbolically significant, fails to disrupt the cultural and logistical dynamics that sustain Tate’s influence.

Details

Links

DOI

", "_academic": { "doi": "10.1177/14614448251409209", "citation_count": 0, "reference_count": 59, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:Larsson2026-ro", "title": "“Meet the new boss – Same as the old boss”: A longitudinal study of political post sentiment and Facebook engagement", "content_text": "This study investigates the dynamics of online political communication on Facebook, focusing on the Facebook posts made by the Norwegian Progress Party and its recent leaders, Siv Jensen and Sylvi Listhaug, over a decade-long period. Answering the call for longitudinal insights into political communication, we utilize a novel hybrid content analysis approach combining large language models (LLMs) and human classification, to assess the sentiment of Facebook posts — categorized as negative, posit...", "date_published": "2026-01-08T00:00:00Z", "_discovery_date": "2026-01-15T00:00:00Z", "url": "https://doi.org/10.5210/fm.v31i1.14448", "external_url": "https://doi.org/10.5210/fm.v31i1.14448", "authors": [ { "name": "Anders Olof Larsson" } ], "tags": [ "Article", "Large Language Models", "Political Communication", "Facebook", "First Monday", "Content Analysis", "Norway" ], "content_html": "

Abstract

This study investigates the dynamics of online political communication on Facebook, focusing on the Facebook posts made by the Norwegian Progress Party and its recent leaders, Siv Jensen and Sylvi Listhaug, over a decade-long period. Answering the call for longitudinal insights into political communication, we utilize a novel hybrid content analysis approach combining large language models (LLMs) and human classification, to assess the sentiment of Facebook posts — categorized as negative, positive, or neutral. Moreover, we assess the levels of engagement reached by posts featuring different sentiment. Our analysis reveals increased negative sentiment and engagement over time, particularly under Listhaug’s leadership, confirming the hypothesis that negative content drives higher engagement. Posts with negative sentiment consistently garnered more shares and comments, reflecting the strategic value of negativity in amplifying political messages. However, likes were more frequently associated with positive content, suggesting more nuanced engagement patterns. These findings contribute to our understanding of the evolving landscape of digital politicking, highlighting the interplay between political actors’ communication strategies and audience engagement.

Details

Links

DOI

", "_academic": { "doi": "10.5210/fm.v31i1.14448", "citation_count": 0, "reference_count": 0, "type": "article", "publisher": "University of Illinois Libraries", "metadata_source": "crossref", "confidence_score": 0.95, "quality_score": 100 } }, { "id": "bibtex:noauthor_undated-zb", "title": "Untitled", "content_text": null, "date_published": null, "_discovery_date": "2026-01-01T13:07:00.531343Z", "url": "https://www.sciencedirect.com/science/article/abs/pii/S2468696425000424", "external_url": "https://www.sciencedirect.com/science/article/abs/pii/S2468696425000424", "tags": [ "Misc" ], "content_html": "

Links

URL

", "_academic": { "type": "misc", "metadata_source": "url", "quality_score": 10, "quality_issues": [ "missing_title", "missing_authors", "missing_abstract", "missing_date", "not_enriched" ] } }, { "id": "bibtex:noauthor_undated-ue", "title": "Scam Gpt", "content_text": null, "date_published": null, "_discovery_date": "2026-01-01T13:07:00.531343Z", "url": "https://www.sciencedirect.com/science/article/abs/pii/S2468696425000424", "external_url": "https://www.sciencedirect.com/science/article/abs/pii/S2468696425000424", "tags": [ "Misc" ], "content_html": "

Links

URL

", "_academic": { "type": "misc", "metadata_source": "url", "quality_score": 10, "quality_issues": [ "missing_title", "missing_authors", "missing_abstract", "missing_date", "not_enriched" ] } }, { "id": "bibtex:noauthor_undated-pl", "title": "Untitled", "content_text": null, "date_published": null, "_discovery_date": "2026-01-01T13:07:00.531343Z", "url": "https://www.sciencedirect.com/science/article/abs/pii/S2468696425000424", "external_url": "https://www.sciencedirect.com/science/article/abs/pii/S2468696425000424", "tags": [ "Misc" ], "content_html": "

Links

URL

", "_academic": { "type": "misc", "metadata_source": "url", "quality_score": 10, "quality_issues": [ "missing_title", "missing_authors", "missing_abstract", "missing_date", "not_enriched" ] } }, { "id": "bibtex:noauthor_undated-bz", "title": "Untitled", "content_text": null, "date_published": null, "_discovery_date": "2026-01-01T13:07:00.531343Z", "url": "https://www.sciencedirect.com/science/article/abs/pii/S2468696425000424", "external_url": "https://www.sciencedirect.com/science/article/abs/pii/S2468696425000424", "tags": [ "Misc" ], "content_html": "

Links

URL

", "_academic": { "type": "misc", "metadata_source": "url", "quality_score": 10, "quality_issues": [ "missing_title", "missing_authors", "missing_abstract", "missing_date", "not_enriched" ] } }, { "id": "bibtex:noauthor_undated-fm", "title": "Volume 7 issue 2 article 4", "content_text": "The Journal of Communication Technology (JoCTEC) is an official journal of the Communication Technology division of the Association for Education in Journalism and Mass Communication (AEJMC). ***This is the official journal site. The older site (joctec.org) is no longer updated. ***", "date_published": null, "_discovery_date": "2026-01-01T13:07:00.531309Z", "url": "https://www.joctec.net/all-issues/volume-7/volume-7-issue-2/volume-7-issue-2-article-4", "external_url": "https://www.joctec.net/all-issues/volume-7/volume-7-issue-2/volume-7-issue-2-article-4", "tags": [ "Misc" ], "content_html": "

Abstract

The Journal of Communication Technology (JoCTEC) is an official journal of the Communication Technology division of the Association for Education in Journalism and Mass Communication (AEJMC). ***This is the official journal site. The older site (joctec.org) is no longer updated. ***

Links

URL

", "_academic": { "type": "misc", "metadata_source": "url", "quality_score": 55, "quality_issues": [ "missing_authors", "missing_date", "not_enriched" ] } }, { "id": "bibtex:Efstratiou2025-gs", "title": "Rabble-rousers in the new king's court: Algorithmic effects on account visibility in pre-X twitter", "content_text": "Algorithmic effects on social media platforms have come under recent scrutiny, with several works reporting that right-leaning accounts tend to receive more exposure. In this paper, we expand upon this body of work using data collected from user feeds after Twitter's change of ownership but before its re-branding to X. We replicate findings from prior work regarding the increased exposure of right-leaning accounts to wider audiences in algorithmically curated compared to reverse-chronological fe...", "date_published": "2025-12-05T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "http://arxiv.org/abs/2512.06129v1", "external_url": "http://arxiv.org/abs/2512.06129v1", "authors": [ { "name": "Alexandros Efstratiou" }, { "name": "Kayla Duskin" }, { "name": "Kate Starbird" }, { "name": "Emma Spiro" } ], "tags": [ "cs.SI", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

Algorithmic effects on social media platforms have come under recent scrutiny, with several works reporting that right-leaning accounts tend to receive more exposure. In this paper, we expand upon this body of work using data collected from user feeds after Twitter's change of ownership but before its re-branding to X. We replicate findings from prior work regarding the increased exposure of right-leaning accounts to wider audiences in algorithmically curated compared to reverse-chronological feeds, and, crucially, we further unpack this effect to understand what correlated (and did not correlate) with these differences. Our results reveal that right-leaning accounts benefited not necessarily due to their political affiliation, but possibly because they behaved in ways associated with algorithmic rewards; namely, posting more agitating content and receiving attention from the platform's owner, Elon Musk, who was the most central network account. We also demonstrate that legacy-verified accounts, like businesses and government officials, received less exposure in the algorithmic feed compared to non-verified or Twitter Blue-verified accounts. We discuss implications of these findings for the intersection between behavioral incentives for algorithmic reach and online trust and safety.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.7333333333333333, "quality_score": 100 } }, { "id": "bibtex:Pierri2025-hm", "title": "Research opportunities and challenges of the EU's Digital Services Act", "content_text": "The Digital Services Act (DSA) introduced by the European Union in 2022 offers a landmark framework for platform transparency, with Article 40 enabling vetted researchers to access data from major online platforms. Yet significant legal, technical, and organizational barriers still hinder effective research on systemic online risks. This piece outlines the key challenges emerging from the Article 40 process and proposes practical measures to ensure that the DSA fulfills its transparency and acco...", "date_published": "2025-12-16T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "http://arxiv.org/abs/2512.14223v1", "external_url": "http://arxiv.org/abs/2512.14223v1", "authors": [ { "name": "Francesco Pierri" }, { "name": "Theo Araujo" }, { "name": "Sanne Kruikemeier" }, { "name": "Philipp Lorenz-Spreen" }, { "name": "Mariek M. P. Vanden Abeele" }, { "name": "Laura Vandenbosch" }, { "name": "Joana Gonçalves-Sa" }, { "name": "Przemyslaw A. Grabowicz" } ], "tags": [ "cs.CY", "Article", "cs.SI", "arXiv [cs.CY]" ], "content_html": "

Abstract

The Digital Services Act (DSA) introduced by the European Union in 2022 offers a landmark framework for platform transparency, with Article 40 enabling vetted researchers to access data from major online platforms. Yet significant legal, technical, and organizational barriers still hinder effective research on systemic online risks. This piece outlines the key challenges emerging from the Article 40 process and proposes practical measures to ensure that the DSA fulfills its transparency and accountability goals.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CY", "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.7142857142857142, "quality_score": 100 } }, { "id": "bibtex:Iannucci2025-eg", "title": "Detecting coordinated activities through temporal, multiplex, and collaborative analysis", "content_text": "In the era of widespread online content consumption, effective detection of coordinated efforts is crucial for mitigating potential threats arising from information manipulation. Despite advances in isolating inauthentic and automated actors, the actions of individual accounts involved in influence campaigns may not stand out as anomalous if analyzed independently of the coordinated group. Given the collaborative nature of information operations, coordinated campaigns are better characterized by...", "date_published": "2025-12-22T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "http://arxiv.org/abs/2512.19677v1", "external_url": "http://arxiv.org/abs/2512.19677v1", "authors": [ { "name": "Letizia Iannucci" }, { "name": "Elisa Muratore" }, { "name": "Antonis Matakos" }, { "name": "Mikko Kivelä" } ], "tags": [ "cs.SI", "cs.CY", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

In the era of widespread online content consumption, effective detection of coordinated efforts is crucial for mitigating potential threats arising from information manipulation. Despite advances in isolating inauthentic and automated actors, the actions of individual accounts involved in influence campaigns may not stand out as anomalous if analyzed independently of the coordinated group. Given the collaborative nature of information operations, coordinated campaigns are better characterized by evidence of similar temporal behavioral patterns that extend beyond coincidental synchronicity across a group of accounts. We propose a framework to model complex coordination patterns across multiple online modalities. This framework utilizes multiplex networks to first decompose online activities into different interaction layers, and subsequently aggregate evidence of online coordination across the layers. In addition, we propose a time-aware collaboration model to capture patterns of online coordination for each modality. The proposed time-aware model builds upon the node-normalized collaboration model and accounts for repetitions of coordinated actions over different time intervals by employing an exponential decay temporal kernel. We validate our approach on multiple datasets featuring different coordinated activities. Our results demonstrate that a multiplex time-aware model excels in the identification of coordinating groups, outperforming previously proposed methods in coordinated activity detection.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI", "cs.CY" ], "metadata_source": "arxiv", "confidence_score": 0.7333333333333333, "quality_score": 100 } }, { "id": "bibtex:Gardam2025-er", "title": "Multimodal narratives of climate denial: A novel, visual-first methodology for analysing conspiracy theory discourse on Instagram", "content_text": "Published in Discourse Context \\& Media | Year: 2025 | Authors: Gardam, Caroline, Riedlinger, Michelle, Angus, Daniel, (Jane) Tan, Xue Ying", "date_published": "2025-12-15T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1016/j.dcm.2025.100946", "external_url": "https://doi.org/10.1016/j.dcm.2025.100946", "authors": [ { "name": "Caroline Gardam" }, { "name": "Michelle Riedlinger" }, { "name": "Daniel Angus" }, { "name": "Xue Ying (Jane) Tan" } ], "tags": [ "Article", "Discourse, Context & Media" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1016/j.dcm.2025.100946", "citation_count": 0, "reference_count": 64, "type": "article", "publisher": "Elsevier BV", "volume": "68", "pages": "100946", "metadata_source": "crossref", "confidence_score": 0.8818181818181817, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Lin2025-xp", "title": "Persuading voters using human–artificial intelligence dialogues", "content_text": "There is great public concern about the potential use of generative artificial intelligence (AI) for political persuasion and the resulting impacts on elections and democracy1–6. We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes. In the context of the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election, we assigned participants randomly to have a conversatio...", "date_published": "2025-12-11T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "https://doi.org/10.1038/s41586-025-09771-9", "external_url": "https://doi.org/10.1038/s41586-025-09771-9", "authors": [ { "name": "Hause Lin" }, { "name": "Gabriela Czarnek" }, { "name": "Benjamin Lewis" }, { "name": "Joshua P. White" }, { "name": "Adam J. Berinsky" }, { "name": "Thomas Costello" }, { "name": "Gordon Pennycook" }, { "name": "David G. Rand" } ], "tags": [ "Article", "Nature" ], "content_html": "

Abstract

There is great public concern about the potential use of generative artificial intelligence (AI) for political persuasion and the resulting impacts on elections and democracy [1–6]. We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes. In the context of the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election, we assigned participants randomly to have a conversation with an AI model that advocated for one of the top two candidates. We observed significant treatment effects on candidate preference that are larger than typically observed from traditional video advertisements [7–9]. We also document large persuasion effects on Massachusetts residents’ support for a ballot measure legalizing psychedelics. Examining the persuasion strategies [9] used by the models indicates that they persuade with relevant facts and evidence, rather than using sophisticated psychological persuasion techniques. Not all facts and evidence presented, however, were accurate; across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims. Together, these findings highlight the potential for AI to influence voters and the important role it might play in future elections. Human–artificial intelligence (AI) dialogues can meaningfully impact voters’ attitudes towards presidential candidates and policy, demonstrating the potential of conversational AI to influence political decision-making.

Details

Links

DOI

", "_academic": { "doi": "10.1038/s41586-025-09771-9", "citation_count": 3, "reference_count": 58, "type": "article", "publisher": "Springer Science and Business Media LLC", "pages": "1--8", "metadata_source": "crossref", "confidence_score": 0.8142857142857142, "quality_score": 100 } }, { "id": "bibtex:Matias2025-px", "title": "How public involvement can improve the science of AI", "content_text": "As AI systems from decision-making algorithms to generative AI are deployed more widely, computer scientists and social scientists alike are being called on to provide trustworthy quantitative evaluations of AI safety and reliability. These calls have included demands from affected parties to be given a seat at the table of AI evaluation. What, if anything, can public involvement add to the science of AI? In this perspective, we summarize the sociotechnical challenge of evaluating AI systems, wh...", "date_published": "2025-12-02T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "https://doi.org/10.1073/pnas.2421111122", "external_url": "https://doi.org/10.1073/pnas.2421111122", "authors": [ { "name": "J. Nathan Matias" }, { "name": "Megan Price" } ], "tags": [ "Article", "Proceedings of the National Academy of Sciences", "citizen science", "policy", "AI", "evaluation", "participatory research" ], "content_html": "

Abstract

As AI systems from decision-making algorithms to generative AI are deployed more widely, computer scientists and social scientists alike are being called on to provide trustworthy quantitative evaluations of AI safety and reliability. These calls have included demands from affected parties to be given a seat at the table of AI evaluation. What, if anything, can public involvement add to the science of AI? In this perspective, we summarize the sociotechnical challenge of evaluating AI systems, which often adapt to multiple layers of social context that shape their outcomes. We then offer guidance for improving the science of AI by engaging lived-experience experts in the design, data collection, and interpretation of scientific evaluations. This article reviews common models of public engagement in AI research alongside common concerns about participatory methods, including questions about generalizable knowledge, subjectivity, reliability, and practical logistics. To address these questions, we summarize the literature on participatory science, discuss case studies from AI in healthcare, and share our own experience evaluating AI in areas from policing systems to social media algorithms. Overall, we describe five parts of any quantitative evaluation where public participation can improve the science of AI: equipoise, explanation, measurement, inference, and interpretation. We conclude with reflections on the role that participatory science can play in trustworthy AI by supporting trustworthy science.

Details

Links

DOI

", "_academic": { "doi": "10.1073/pnas.2421111122", "citation_count": 0, "reference_count": 172, "type": "article", "publisher": "Proceedings of the National Academy of Sciences", "volume": "122", "pages": "e2421111122", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 100 } }, { "id": "bibtex:Knupfer2025-vt", "title": "The logic of connective faction: How digitally networked elites and hyper-partisan media radicalize politics", "content_text": "Published in Polit. Commun. | Year: 2025 | Authors: Knüpfer, Curd B, Yang, Yunkang, Cowburn, Mike", "date_published": "2025-12-23T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "https://doi.org/10.1080/10584609.2025.2604708", "external_url": "https://doi.org/10.1080/10584609.2025.2604708", "authors": [ { "name": "Curd B. Knüpfer" }, { "name": "Yunkang Yang" }, { "name": "Mike Cowburn" } ], "tags": [ "Article", "Political Communication" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/10584609.2025.2604708", "citation_count": 0, "reference_count": 114, "type": "article", "publisher": "Informa UK Limited", "pages": "1--33", "metadata_source": "crossref", "confidence_score": 0.8374999999999999, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Kalsnes2025-zb", "title": "‘More than a feeling’ – Facebook reactions and the sharing of political posts during Scandinavian elections", "content_text": "Social media platforms offer political actors a wide range of opportunities to present their main issues, engage with their constituencies and mobilize voters. Research has shown that parties striv...", "date_published": "2025-12-05T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "https://doi.org/10.1080/1369118x.2025.2595670", "external_url": "https://doi.org/10.1080/1369118x.2025.2595670", "authors": [ { "name": "Bente Kalsnes" }, { "name": "Anders Olof Larsson" } ], "tags": [ "Article", "Information, Communication & Society" ], "content_html": "

Abstract

Social media platforms offer political actors a wide range of opportunities to present their main issues, engage with their constituencies and mobilize voters. Research has shown that parties striv...

Details

Links

DOI

", "_academic": { "doi": "10.1080/1369118x.2025.2595670", "citation_count": 0, "reference_count": 65, "type": "article", "publisher": "Informa UK Limited", "pages": "1--20", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Lieu2025-nl", "title": "Testing the impact of fallacies and contrarian claims in climate change misinformation", "content_text": "Abstract Climate misinformation reduces public acceptance of climate change and undermines support for mitigation policies. This study explored the impact of different types of climate misinformation, examining through content‐based and logic‐based frameworks. The content‐based framework was based on a taxonomy of contrarian claims consisting of five categories—it's not real, it's not us, it's not bad, climate solutions won't work and scientists are not reliable. The logic‐based framework examin...", "date_published": "2025-12-29T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "https://doi.org/10.1111/bjop.70049", "external_url": "https://doi.org/10.1111/bjop.70049", "authors": [ { "name": "Renee Lieu" }, { "name": "Oliver R. Hayes" }, { "name": "John Cook" } ], "tags": [ "Article", "misinformation", "British Journal of Psychology", "virality", "veracity", "social media", "reasoning fallacies" ], "content_html": "

Abstract

Climate misinformation reduces public acceptance of climate change and undermines support for mitigation policies. This study explored the impact of different types of climate misinformation, examining it through content‐based and logic‐based frameworks. The content‐based framework was based on a taxonomy of contrarian claims consisting of five categories—it's not real, it's not us, it's not bad, climate solutions won't work and scientists are not reliable. The logic‐based framework examined six rhetorical techniques used in science denial arguments—misrepresentation, false equivalence, oversimplification, red herring, cherry picking and slothful induction. We experimentally tested 30 misinformation examples, crossing five content categories with six fallacies. Participants rated the perceived veracity of misinformation as well as the likelihood of interacting with it. We found no main effect of fallacy on perceived veracity or likelihood to interact but did find a main effect of content category, with the fourth category (climate solutions won't work) perceived as most veracious. We also found that content categories interacted with political ideology, replicating past research into the polarizing effect of climate misinformation. Specifically, the most polarizing categories of misinformation were those targeting climate solutions or attacking climate scientists. Our results highlight the need to prioritize combatting misinformation that targets solutions and scientists.

Details

Links

DOI

", "_academic": { "doi": "10.1111/bjop.70049", "citation_count": 0, "reference_count": 77, "type": "article", "publisher": "Wiley", "metadata_source": "crossref", "confidence_score": 0.8374999999999999, "quality_score": 100 } }, { "id": "bibtex:Copland2025-em", "title": "Sky News Australia as network propaganda: how a niche cable channel became an international right-wing propaganda machine", "content_text": "In the past decade or so, the right-wing news channel, Sky News Australia , has pivoted to publishing online content that speaks to broadly right-wing audiences both in Australia and internationally. While the channel has long been regarded as comparatively unsuccessful, with low TV ratings, such unimpressive pay-TV audience ratings obscure a considerably more significant development elsewhere: Sky News Australia's content is shared and consumed increasingly widely in digital form, via social me...", "date_published": "2025-12-24T00:00:00Z", "_discovery_date": "2025-12-15T00:00:00Z", "url": "https://doi.org/10.1177/1329878x251406899", "external_url": "https://doi.org/10.1177/1329878x251406899", "authors": [ { "name": "Simon Copland" }, { "name": "Axel Bruns" }, { "name": "Timothy Graham" } ], "tags": [ "Article", "Media International Australia" ], "content_html": "

Abstract

In the past decade or so, the right-wing news channel, Sky News Australia, has pivoted to publishing online content that speaks to broadly right-wing audiences both in Australia and internationally. While the channel has long been regarded as comparatively unsuccessful, with low TV ratings, such unimpressive pay-TV audience ratings obscure a considerably more significant development elsewhere: Sky News Australia's content is shared and consumed increasingly widely in digital form, via social media. Positioning Sky News Australia as an example of a global system of network propaganda, this article examines how the channel has spread its content online. Studying a core period of Sky News Australia's digital growth during the 2020 COVID pandemic and US presidential race, we use a unique combination of digital methods to document how Sky News Australia has transitioned from a niche cable station to an international right-wing propaganda machine.

Details

Links

DOI

", "_academic": { "doi": "10.1177/1329878x251406899", "citation_count": 0, "reference_count": 26, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:Triedman2025-uy", "title": "What did Elon change? A comprehensive analysis of Grokipedia", "content_text": "Elon Musk released Grokipedia on 27 October 2025 to provide an alternative to Wikipedia, the crowdsourced online encyclopedia. In this paper, we provide the first comprehensive analysis of Grokipedia and compare it to a dump of Wikipedia, with a focus on article similarity and citation practices. Although Grokipedia articles are much longer than their corresponding English Wikipedia articles, we find that much of Grokipedia's content (including both articles with and without Creative Commons lic...", "date_published": "2025-11-12T00:00:00Z", "_discovery_date": "2025-11-15T00:00:00Z", "url": "http://arxiv.org/abs/2511.09685v1", "external_url": "http://arxiv.org/abs/2511.09685v1", "authors": [ { "name": "Harold Triedman" }, { "name": "Alexios Mantzarlis" } ], "tags": [ "cs.SI", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

Elon Musk released Grokipedia on 27 October 2025 to provide an alternative to Wikipedia, the crowdsourced online encyclopedia. In this paper, we provide the first comprehensive analysis of Grokipedia and compare it to a dump of Wikipedia, with a focus on article similarity and citation practices. Although Grokipedia articles are much longer than their corresponding English Wikipedia articles, we find that much of Grokipedia's content (including both articles with and without Creative Commons licenses) is highly derivative of Wikipedia. Nevertheless, citation practices between the sites differ greatly, with Grokipedia citing many more sources deemed "generally unreliable" or "blacklisted" by the English Wikipedia community and low quality by external scholars, including dozens of citations to sites like Stormfront and Infowars. We then analyze article subsets: one about elected officials, one about controversial topics, and one random subset for which we derive article quality and topic. We find that the elected official and controversial article subsets showed less similarity between their Wikipedia version and Grokipedia version than other pages. The random subset illustrates that Grokipedia focused on rewriting the highest quality articles on Wikipedia, with a bias towards biographies, politics, society, and history. Finally, we publicly release our nearly-full scrape of Grokipedia, as well as embeddings of the entire Grokipedia corpus.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.76, "quality_score": 100 } }, { "id": "bibtex:Song2025-yh", "title": "The spread of pro- and anti-vaccine views by coordinated communities on facebook during COVID-19 pandemic", "content_text": "Abstract The widespread dissemination of problematic vaccine-related content during the COVID-19 pandemic has posed serious challenges to public health and eroded institutional trust. This study investigates the interplay between manipulative social media actors, coordinated behaviors, and misleading information by analyzing coordinated link sharing behavior (CLSB) on Facebook in the United Kingdom and the United States. Drawing on a dataset of 3,469,719 public Facebook posts, we examine whether...", "date_published": "2025-11-15T00:00:00Z", "_discovery_date": "2025-11-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1007/s42001-025-00401-y", "external_url": "https://doi.org/10.1007/s42001-025-00401-y", "authors": [ { "name": "Yunya Song" }, { "name": "Yin Zhang" }, { "name": "Sheng Zou" }, { "name": "Xian Yang" }, { "name": "Qintao Huang" } ], "tags": [ "Article", "Journal of Computational Social Science" ], "content_html": "

Abstract

The widespread dissemination of problematic vaccine-related content during the COVID-19 pandemic has posed serious challenges to public health and eroded institutional trust. This study investigates the interplay between manipulative social media actors, coordinated behaviors, and misleading information by analyzing coordinated link sharing behavior (CLSB) on Facebook in the United Kingdom and the United States. Drawing on a dataset of 3,469,719 public Facebook posts, we examine whether anti-vaccine content was disseminated more systematically and inauthentically than pro-vaccine content. We also trace the evolution of coordinated narratives over time and their cross-national variations. Methodologically, we apply computational techniques, including transfer learning for sentiment classification and structural topic modeling, to detect pro- and anti-vaccine stances and to identify thematic patterns within coordinated networks. Our findings reveal that in the UK, anti-vaccine entities exhibited denser CLSB networks and greater engagement than their pro-vaccine counterparts, whereas the opposite trend was observed in the U.S. Furthermore, we identify key differences in vaccine discourse: UK anti-vaccine communities predominantly emphasized vaccine safety concerns, while U.S. communities focused more on individual freedom. These cross-national comparisons highlight how political and cultural contexts shape the structure and rhetoric of vaccine-related coordination online.

Details

Links

DOI

", "_academic": { "doi": "10.1007/s42001-025-00401-y", "citation_count": 0, "reference_count": 80, "type": "article", "publisher": "Springer Science and Business Media LLC", "volume": "8", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Allen2025-ot", "title": "Platform-independent experiments on social media", "content_text": "Changing algorithms with artificial intelligence tools can influence partisan animosity", "date_published": "2025-11-27T00:00:00Z", "_discovery_date": "2025-11-15T00:00:00Z", "url": "https://doi.org/10.1126/science.aec7388", "external_url": "https://doi.org/10.1126/science.aec7388", "authors": [ { "name": "Jennifer Allen" }, { "name": "Joshua A. Tucker" } ], "tags": [ "Science", "Article" ], "content_html": "

Abstract

Changing algorithms with artificial intelligence tools can influence partisan animosity

Details

Links

DOI

", "_academic": { "doi": "10.1126/science.aec7388", "citation_count": 2, "reference_count": 14, "type": "article", "publisher": "American Association for the Advancement of Science (AAAS)", "volume": "390", "pages": "883--884", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 100 } }, { "id": "bibtex:FitzGerald2025-nv", "title": "The persistence of informational manipulation and the appropriation of emerging events", "content_text": "Year: 2025 | Authors: FitzGerald, Katherine M, Whelan-Shamy, Daniel, Graham, Timothy", "date_published": "2025-11-05T00:00:00Z", "_discovery_date": "2025-11-15T00:00:00Z", "url": "https://doi.org/10.4324/9781003628088-6", "external_url": "https://doi.org/10.4324/9781003628088-6", "authors": [ { "name": "Katherine M. FitzGerald" }, { "name": "Daniel Whelan-Shamy" }, { "name": "Timothy Graham" } ], "tags": [ "Authoritarian Actors and Strategic Digital Information Operations", "Incollection" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.4324/9781003628088-6", "citation_count": 0, "reference_count": 0, "type": "incollection", "publisher": "Routledge", "pages": "65--85", "metadata_source": "crossref", "confidence_score": 0.8374999999999999, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Murtfeldt2025-wu", "title": "RIP Twitter API: A eulogy to its vast research contributions", "content_text": "Since 2006, Twitter's APIs have been rich sources of data for researchers studying social phenomena such as misinformation, public communication, crisis response, and political behavior. However, in 2023, Twitter began heavily restricting data access, dismantling its academic access program, and setting the Enterprise API price at $42,000 per month. Lacking funds to pay this fee, academics are scrambling to continue their research. This study systematically tabulates the number of studies, citat...", "date_published": "2024-04-10T00:00:00Z", "_discovery_date": "2025-10-15T00:00:00Z", "url": "http://arxiv.org/abs/2404.07340v2", "external_url": "http://arxiv.org/abs/2404.07340v2", "authors": [ { "name": "Ryan Murtfeldt" }, { "name": "Sejin Paik" }, { "name": "Naomi Alterman" }, { "name": "Ihsan Kahveci" }, { "name": "Jevin D. West" } ], "tags": [ "cs.CY", "Article", "arXiv [cs.CY]" ], "content_html": "

Abstract

Since 2006, Twitter's APIs have been rich sources of data for researchers studying social phenomena such as misinformation, public communication, crisis response, and political behavior. However, in 2023, Twitter began heavily restricting data access, dismantling its academic access program, and setting the Enterprise API price at $42,000 per month. Lacking funds to pay this fee, academics are scrambling to continue their research. This study systematically tabulates the number of studies, citations, publication dates, disciplines, and major topics of research using Twitter data between 2006 and 2024. While we cannot know exactly what will be lost now that Twitter data is cost-prohibitive, we can illustrate its research value during the years it was available. A search of eight databases found that between 2006 and 2024, a total of 33,306 studies were published in 8,914 venues, with 610,738 citations across 16 disciplines. Major disciplines include social science, engineering, data science, and public health. Major topics include information dissemination, tweet credibility, research methodologies, event detection, and human behavior. Twitter-based studies increased by a median of 25% annually from 2006 to 2023, but following Twitter's decision to charge for data, the number of studies dropped by 13%. Much of the 2024 research likely used data collected before the API shutdown, suggesting further decline ahead. This trend highlights a growing loss of empirical insight and access to real-time, public communication, raising concerns about the long-term consequences for studying society, technology, and global events in an era increasingly connected by social media.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CY" ], "metadata_source": "arxiv", "confidence_score": 0.7230769230769231, "quality_score": 100 } }, { "id": "bibtex:Bak-Coleman2025-pm", "title": "The risks of industry influence in tech research", "content_text": "Emerging information technologies like social media, search engines, and AI can have a broad impact on public health, political institutions, social dynamics, and the natural world. It is critical to develop a scientific understanding of these impacts to inform evidence-based technology policy that minimizes harm and maximizes benefits. Unlike most other global-scale scientific challenges, however, the data necessary for scientific progress are generated and controlled by the same industry that ...", "date_published": "2025-10-22T00:00:00Z", "_discovery_date": "2025-10-15T00:00:00Z", "url": "http://arxiv.org/abs/2510.19894v2", "external_url": "http://arxiv.org/abs/2510.19894v2", "authors": [ { "name": "Joseph Bak-Coleman" }, { "name": "Cailin O'Connor" }, { "name": "Carl Bergstrom" }, { "name": "Jevin West" } ], "tags": [ "cs.SI", "cs.HC", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

Emerging information technologies like social media, search engines, and AI can have a broad impact on public health, political institutions, social dynamics, and the natural world. It is critical to develop a scientific understanding of these impacts to inform evidence-based technology policy that minimizes harm and maximizes benefits. Unlike most other global-scale scientific challenges, however, the data necessary for scientific progress are generated and controlled by the same industry that might be subject to evidence-based regulation. Moreover, technology companies historically have been, and continue to be, a major source of funding for this field. These asymmetries in information and funding raise significant concerns about the potential for undue industry influence on the scientific record. In this Perspective, we explore how technology companies can influence our scientific understanding of their products. We argue that science faces unique challenges in the context of technology research that will require strengthening existing safeguards and constructing wholly new ones.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI", "cs.HC" ], "metadata_source": "arxiv", "confidence_score": 0.7333333333333333, "quality_score": 100 } }, { "id": "bibtex:Orlando2025-ul", "title": "Emergent coordinated behaviors in networked LLM agents: Modeling the strategic dynamics of information operations", "content_text": "Generative agents are rapidly advancing in sophistication, raising urgent questions about how they might coordinate when deployed in online ecosystems. This is particularly consequential in information operations (IOs), influence campaigns that aim to manipulate public opinion on social media. While traditional IOs have been orchestrated by human operators and relied on manually crafted tactics, agentic AI promises to make campaigns more automated, adaptive, and difficult to detect. This work pr...", "date_published": "2025-10-28T00:00:00Z", "_discovery_date": "2025-10-15T00:00:00Z", "url": "http://arxiv.org/abs/2510.25003v1", "external_url": "http://arxiv.org/abs/2510.25003v1", "authors": [ { "name": "Gian Marco Orlando" }, { "name": "Jinyi Ye" }, { "name": "Valerio La Gatta" }, { "name": "Mahdi Saeedi" }, { "name": "Vincenzo Moscato" }, { "name": "Emilio Ferrara" }, { "name": "Luca Luceri" } ], "tags": [ "arXiv [cs.MA]", "Article", "cs.MA" ], "content_html": "

Abstract

Generative agents are rapidly advancing in sophistication, raising urgent questions about how they might coordinate when deployed in online ecosystems. This is particularly consequential in information operations (IOs), influence campaigns that aim to manipulate public opinion on social media. While traditional IOs have been orchestrated by human operators and relied on manually crafted tactics, agentic AI promises to make campaigns more automated, adaptive, and difficult to detect. This work presents the first systematic study of emergent coordination among generative agents in simulated IO campaigns. Using generative agent-based modeling, we instantiate IO and organic agents in a simulated environment and evaluate coordination across operational regimes, from simple goal alignment to team knowledge and collective decision-making. As operational regimes become more structured, IO networks become denser and more clustered, interactions more reciprocal and positive, narratives more homogeneous, amplification more synchronized, and hashtag adoption faster and more sustained. Remarkably, simply revealing to agents which other agents share their goals can produce coordination levels nearly equivalent to those achieved through explicit deliberation and collective voting. Overall, we show that generative agents, even without human guidance, can reproduce coordination strategies characteristic of real-world IOs, underscoring the societal risks posed by increasingly automated, self-organizing IOs.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.MA" ], "metadata_source": "arxiv", "confidence_score": 0.7176470588235294, "quality_score": 100 } }, { "id": "bibtex:Frischlich2025-vn", "title": "The complexity of misinformation extends beyond virus and warfare analogies", "content_text": "Abstract Debates about misinformation and countermeasures are often driven by dramatic analogies, such as “infodemic” or “information warfare”. While useful shortcuts to interference, these analogies obscure the complex system through which misinformation propagates, leaving perceptual gaps where solutions lie unseen. We present a new framework of the complex multilevel system through which misinformation propagates and show how popular analogies fail to account for this complexity. We discuss i...", "date_published": "2025-10-01T00:00:00Z", "_discovery_date": "2025-10-15T00:00:00Z", "url": "https://doi.org/10.1038/s44260-025-00053-z", "external_url": "https://doi.org/10.1038/s44260-025-00053-z", "authors": [ { "name": "Lena Frischlich" }, { "name": "Henrik Olsson" }, { "name": "Abhishek Roy" }, { "name": "Heidi Schulze" }, { "name": "Stan Rhodes" }, { "name": "Alison Mansheim" } ], "tags": [ "Article", "npj Complexity" ], "content_html": "

Abstract

Debates about misinformation and countermeasures are often driven by dramatic analogies, such as “infodemic” or “information warfare”. While useful shortcuts to interference, these analogies obscure the complex system through which misinformation propagates, leaving perceptual gaps where solutions lie unseen. We present a new framework of the complex multilevel system through which misinformation propagates and show how popular analogies fail to account for this complexity. We discuss implications for policy making and future research.

Details

Links

DOI

", "_academic": { "doi": "10.1038/s44260-025-00053-z", "citation_count": 0, "reference_count": 99, "type": "article", "publisher": "Springer Science and Business Media LLC", "volume": "2", "pages": "1--8", "metadata_source": "crossref", "confidence_score": 0.823076923076923, "quality_score": 100 } }, { "id": "bibtex:Starbird2025-jj", "title": "What is going on? An evidence-frame framework for analyzing online rumors about election integrity", "content_text": "Pervasive falsehoods that erode trust in election processes are of increasing concern to democracies around the world. Misleading claims like these are often understood as simply ''getting the facts wrong''. Using a grounded, interpretative, mixed-method approach to study Twitter activity during the 2022 U.S. Midterm Election in Arizona, our work paints a more nuanced picture. We adapt Klein's data-frame theory of collective sensemaking to online rumors, demonstrating how misleading claims about...", "date_published": "2025-10-18T00:00:00Z", "_discovery_date": "2025-10-15T00:00:00Z", "url": "https://doi.org/10.1145/3757522", "external_url": "https://doi.org/10.1145/3757522", "authors": [ { "name": "Kate Starbird" }, { "name": "Stephen Prochaska" }, { "name": "Ben Yamron" } ], "tags": [ "Article", "Proceedings of the ACM on Human-Computer Interaction" ], "content_html": "

Abstract

Pervasive falsehoods that erode trust in election processes are of increasing concern to democracies around the world. Misleading claims like these are often understood as simply “getting the facts wrong”. Using a grounded, interpretative, mixed-method approach to study Twitter activity during the 2022 U.S. Midterm Election in Arizona, our work paints a more nuanced picture. We adapt Klein's data-frame theory of collective sensemaking to online rumors, demonstrating how misleading claims about election administration take shape online through interactions between (often factual) evidence and frames. We introduce a methodological approach for analyzing rumors through this evidence-frame lens and provide insights into the dynamics of online rumoring around claims of “rigged elections”. Our work highlights how rumors are as much about political framing as they are about faulty facts, and locates the crux of the problem of misinformation in the interactions with and between evidence and distorted political frames.

Details

Links

DOI

", "_academic": { "doi": "10.1145/3757522", "citation_count": 2, "reference_count": 49, "type": "article", "publisher": "Association for Computing Machinery (ACM)", "volume": "9", "pages": "1--37", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:Prochaska2025-ef", "title": "Deep storytelling: Collective sensemaking and layers of meaning in U.s. elections", "content_text": "Misinformation and disinformation about elections remain pressing concerns for researchers, policymakers, and the public. Critics, however, argue that fears surrounding these issues are exaggerated due to a lack of evidence of impact. This debate highlights the challenges inherent in assessing the impacts of misinformation, as the drivers of false and misleading content often exist in the context of a specific claim. To address this issue, we examined false and misleading information surrounding...", "date_published": "2025-10-18T00:00:00Z", "_discovery_date": "2025-10-15T00:00:00Z", "url": "https://doi.org/10.1145/3757576", "external_url": "https://doi.org/10.1145/3757576", "authors": [ { "name": "Stephen Prochaska" }, { "name": "Julie Vera" }, { "name": "Douglas Lew Tan" }, { "name": "Ben Yamron" }, { "name": "Sylvie Venuto" }, { "name": "Amaya Kejriwal" }, { "name": "Sarah Chu" }, { "name": "Kate Starbird" } ], "tags": [ "Article", "Proceedings of the ACM on Human-Computer Interaction" ], "content_html": "

Abstract

Misinformation and disinformation about elections remain pressing concerns for researchers, policymakers, and the public. Critics, however, argue that fears surrounding these issues are exaggerated due to a lack of evidence of impact. This debate highlights the challenges inherent in assessing the impacts of misinformation, as the drivers of false and misleading content often exist in the context of a specific claim. To address this issue, we examined false and misleading information surrounding the 2020 and 2022 U.S. national elections, focusing on the contextual features of online conversations that fueled various rumors. We developed two qualitative codebooks, creating the second after realizing that the first, which labeled individual tweets, failed to capture broader rumoring dynamics. By integrating multi-layered qualitative coding with thematic analysis and quantitative visualizations, we show how influencers, political elites, and audiences collaboratively told deep stories from 2020 through 2022. As these stories were told, audiences interpreted events in 2022 through the lens of the 2020 story, guided by influencers' cues, leading to an evolution in storytelling style between the two election cycles. This ongoing performance was tailored to align with the incentive structures, affordances, and attention economy of social media. We combine deep stories with theories of collective sensemaking and rumoring, creating a framework to better assess the contextual features surrounding false and misleading information.

Details

Links

DOI

", "_academic": { "doi": "10.1145/3757576", "citation_count": 2, "reference_count": 83, "type": "article", "publisher": "Association for Computing Machinery (ACM)", "volume": "9", "pages": "1--43", "metadata_source": "crossref", "confidence_score": 0.8166666666666667, "quality_score": 100 } }, { "id": "bibtex:Oswald2025-km", "title": "The tip of the iceberg: How the social media production-consumption gap distorts public opinion for citizens and researchers", "content_text": "The production–consumption gap on social media is a consistent finding across time, platforms, and cultural contexts: A small minority of highly active users produce the majority of online political content, while the majority of users consume content passively and remainlargely silent. Online content thus reveals only the tip of an iceberg, from which citizens and scholars alike are apt to draw incorrect inferences regarding the submerged mass of public opinion. This has substantive as well as ...", "date_published": "2025-10-15T00:00:00Z", "_discovery_date": "2025-10-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.31235/osf.io/frcv5_v1", "external_url": "https://doi.org/10.31235/osf.io/frcv5_v1", "authors": [ { "name": "Lisa Oswald" }, { "name": "Will Schulz" }, { "name": "Ralph Hertwig" }, { "name": "David Lazer" }, { "name": "Sebastian Stier" } ], "tags": [ "Article", "SocArXiv" ], "content_html": "

Abstract

The production–consumption gap on social media is a consistent finding across time, platforms, and cultural contexts: A small minority of highly active users produce the majority of online political content, while the majority of users consume content passively and remain largely silent. Online content thus reveals only the tip of an iceberg, from which citizens and scholars alike are apt to draw incorrect inferences regarding the submerged mass of public opinion. This has substantive as well as methodological consequences for social media research, which must be taken into account when designing studies to describe and understand how social media use relates to content exposure, public opinion, and political behavior, and when designing and testing pro-democratic interventions.

Details

Links

DOI

", "_academic": { "doi": "10.31235/osf.io/frcv5_v1", "citation_count": 0, "reference_count": 0, "type": "article", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Donovan2025-ws", "title": "EXPRESS: A short history of misinformation-at-scale and efforts to mitigate it", "content_text": "This article traces the social construction of misinformation-at-scale by charting the diverse content moderation methods adopted by social media companies between 2016-2021, when perceptions about social media harming public health and political outcomes became a major public issue. The concept of “misinformation” refers to false or inaccurate information. Misinformation-at scale occurs when false information is amplified online and shared by politicians, journalists, public figures, or other d...", "date_published": "2025-09-18T00:00:00Z", "_discovery_date": "2025-09-15T00:00:00Z", "url": "https://doi.org/10.1177/07439156251384249", "external_url": "https://doi.org/10.1177/07439156251384249", "authors": [ { "name": "Joan Donovan" } ], "tags": [ "Journal of Public Policy & Marketing", "Article" ], "content_html": "

Abstract

This article traces the social construction of misinformation-at-scale by charting the diverse content moderation methods adopted by social media companies between 2016 and 2021, when perceptions about social media harming public health and political outcomes became a major public issue. The concept of “misinformation” refers to false or inaccurate information. Misinformation-at-scale occurs when false information is amplified online and shared by politicians, journalists, public figures, or other densely networked actors, whose engagement turns false claims into public controversies. This situation creates a paradox for social media companies and their users. On the one hand, the business model of social media companies is to increase engagement without consideration of the veracity of the content. As a result, content moderation methods have focused on reducing the speed and scale of misinformation, but not rooting it out entirely. On the other hand, users increasingly depend on social media companies for access to TALK (timely, accurate, local knowledge), which is difficult to discover, especially during breaking news or on contested subjects. This paper documents this history and discusses ways to address this challenge posed by misinformation-at-scale.

Details

Links

DOI

", "_academic": { "doi": "10.1177/07439156251384249", "citation_count": 0, "reference_count": 0, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Arminio2025-tw", "title": "Leveraging VLLMs for visual clustering: Image-to-text mapping shows increased semantic capabilities and interpretability", "content_text": "Automated image categorization is vital for computational social science, particularly considering the rise of visual content on social media, as it helps the identification of emerging visual narratives in online debates. However, the methods currently used in the field to represent images numerically are unable to fully capture their connotative meaning and do not produce interpretable clusters. In response to these challenges, we evaluate an approach based on the automated generation of inter...", "date_published": "2025-09-15T00:00:00Z", "_discovery_date": "2025-09-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.31235/osf.io/bf459", "external_url": "https://doi.org/10.31235/osf.io/bf459", "authors": [ { "name": "Luigi Arminio" }, { "name": "Matteo Magnani" }, { "name": "Matias Piqueras" }, { "name": "Luca Rossi" }, { "name": "Alexandra Segerberg" } ], "tags": [ "Article", "Soc. Sci. Comput. Rev." ], "content_html": "

Abstract

Automated image categorization is vital for computational social science, particularly considering the rise of visual content on social media, as it helps the identification of emerging visual narratives in online debates. However, the methods currently used in the field to represent images numerically are unable to fully capture their connotative meaning and do not produce interpretable clusters. In response to these challenges, we evaluate an approach based on the automated generation of intermediate textual descriptions of the input images with respect to the connotative semantic validity of the generated clusters and their interpretability. We show that both aspects are improved over the currently typical clustering approach based on convolutional neural networks.

Details

Links

DOI

", "_academic": { "doi": "10.31235/osf.io/bf459", "citation_count": 1, "reference_count": 0, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Oprea2025-lf", "title": "Behind the screen: The use of Facebook accounts with inauthentic behavior during European elections", "content_text": "Technology has reshaped political communication, allowing fake engagement to drive real influence in the democratic process. Hyperactive social media users, who are over-proportionally active in relation to the mean, are the new political activists, spreading partisan content at scale on social media platforms. Using The Authenticity Matrix tool, this study revealed Facebook accounts of hyperactive users exhibiting inauthentic behaviour that were used during the electoral campaign (May 10, 2024,...", "date_published": "2025-09-04T00:00:00Z", "_discovery_date": "2025-09-15T00:00:00Z", "url": "https://doi.org/10.17645/mac.10733", "external_url": "https://doi.org/10.17645/mac.10733", "authors": [ { "name": "Bogdan Oprea" }, { "name": "Paula Pașnicu" }, { "name": "Alexandru-Ninel Niculae" }, { "name": "Constantin-Cozmin Bonciu" }, { "name": "Dragoș Tudorașcu-Dobre" } ], "tags": [ "fake accounts", "Meta", "Article", "election campaign", "Facebook", "inauthentic behavior", "manipulation", "Media and Communication", "political communication", "social media" ], "content_html": "

Abstract

Technology has reshaped political communication, allowing fake engagement to drive real influence in the democratic process. Hyperactive social media users, who are over-proportionally active in relation to the mean, are the new political activists, spreading partisan content at scale on social media platforms. Using The Authenticity Matrix tool, this study revealed Facebook accounts of hyperactive users exhibiting inauthentic behaviour that were used during the electoral campaign (May 10, 2024, to June 8, 2024) for the 2024 election of Romanian members of the European Parliament. The results indicate that, for some posts, up to 45% of shares were made by hyperactive users (four or more shares per post by the same account) and 33.9% by super-active users (10 or more times). This type of online behavior is considered by Meta as manipulation of “public opinion,” “political discussion,” and “public debate,” and Meta’s Community Standards is committed to preventing such behavior in the context of elections. Another key contribution of this research is the identification of dominant characteristics of hyperactive user accounts, using information publicly available on their social media profile, which provides insights into their specific features and helps users better identify them on social media. The article highlights that online social network platforms condemn these manipulative practices in theory, but they don’t take sufficient measures to effectively reduce them in order to limit their impact on our societies.

Details

Links

DOI

", "_academic": { "doi": "10.17645/mac.10733", "citation_count": 1, "reference_count": 89, "type": "article", "publisher": "Cogitatio", "volume": "13", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Rodriguez_Farres2025-sg", "title": "Real time detection of coordinated bots on Bluesky", "content_text": "The growing popularity of social media platforms such as X, Instagram, TikTok, and, more recently, Bluesky, has led to an increase in fake data and misinformation. Although fact checking is essential, misinformation is often perceived as true when a post goes viral. Viralization is frequently driven by coordinated activity. While organic coordination can exist, the presence of coordinated bot networks poses a significant threat to information integrity. This paper focuses on real time detection ...", "date_published": "2025-09-22T00:00:00Z", "_discovery_date": "2025-09-15T00:00:00Z", "url": "https://doi.org/10.3233/faia250576", "external_url": "https://doi.org/10.3233/faia250576", "authors": [ { "name": "Pol Rodríguez Farrés" }, { "name": "Athina Masali" }, { "name": "Jesus Cerquides" } ], "tags": [ "Frontiers in Artificial Intelligence and Applications", "Incollection" ], "content_html": "

Abstract

The growing popularity of social media platforms such as X, Instagram, TikTok, and, more recently, Bluesky, has led to an increase in fake data and misinformation. Although fact checking is essential, misinformation is often perceived as true when a post goes viral. Viralization is frequently driven by coordinated activity. While organic coordination can exist, the presence of coordinated bot networks poses a significant threat to information integrity. This paper focuses on real time detection of coordinated bots on Bluesky by identifying common and synchronous behaviour activity. A mixture model is defined in order to validate the identification of coordinated users that reposted the same posts. The model is constructed using a baseline scenario comprising exclusively human activity, serving as a reference for validation.

Details

Links

DOI

", "_academic": { "doi": "10.3233/faia250576", "citation_count": 0, "reference_count": 0, "type": "incollection", "publisher": "IOS Press", "pages": "57--68", "metadata_source": "crossref", "confidence_score": 0.8374999999999999, "quality_score": 100 } }, { "id": "bibtex:Tonneau2025-bv", "title": "Language Disparities in Moderation Workforce Allocation by Social Media Platforms", "content_text": "Social media platforms operate globally, yet whether they invest adequately in content moderation to protect users across languages remains unclear. Leveraging newly mandated transparency data under the European Union’s Digital Services Act, we uncover substantial cross-lingual disparities in moderation workforce allocation across platforms, both in language coverage and the number of moderators relative to the volume of user-generated content in each language. While larger platforms such as You...", "date_published": "2025-08-15T00:00:00Z", "_discovery_date": "2025-08-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.31235/osf.io/amfws_v1", "external_url": "https://doi.org/10.31235/osf.io/amfws_v1", "authors": [ { "name": "Manuel Tonneau" }, { "name": "Diyi Liu" }, { "name": "Ryan McGrady" }, { "name": "Kevin Zheng" }, { "name": "Ralph Schroeder" }, { "name": "Ethan Zuckerman" }, { "name": "Scott Hale" } ], "tags": [ "Article" ], "content_html": "

Abstract

Social media platforms operate globally, yet whether they invest adequately in content moderation to protect users across languages remains unclear. Leveraging newly mandated transparency data under the European Union’s Digital Services Act, we uncover substantial cross-lingual disparities in moderation workforce allocation across platforms, both in language coverage and the number of moderators relative to the volume of user-generated content in each language. While larger platforms such as YouTube and Meta have moderators in many languages, we find that millions of EU-based users on smaller platforms, including Twitter/X, post in languages without any human oversight. Even when languages have at least one moderator, moderator allocation varies widely and disproportionately to content volume, with Twitter/X mainly prioritizing English while YouTube invests proportionally more in other European languages. Across platforms, languages primarily spoken in the ‘Global South’—such as Spanish, Portuguese, and Arabic—consistently receive proportionally fewer moderators than English, ranging from an average of 55% of English’s allocation on YouTube to only 7.5% on Twitter/X. These findings highlight the need for more meaningful and globally inclusive transparency in platform moderation, to ensure that social media users everywhere receive equitable protection from online harms.

Details

Links

DOI

", "_academic": { "doi": "10.31235/osf.io/amfws_v1", "citation_count": 0, "reference_count": 0, "type": "article", "metadata_source": "crossref", "confidence_score": 0.82, "quality_score": 100 } }, { "id": "bibtex:Fan2025-ut", "title": "The medium is not the message: Deconfounding text embeddings via linear concept erasure", "content_text": "Embedding-based similarity metrics between text sequences can be influenced not just by the content dimensions we most care about, but can also be biased by spurious attributes like the text's source or language. These document confounders cause problems for many applications, but especially those that need to pool texts from different corpora. This paper shows that a debiasing algorithm that removes information about observed confounders from the encoder representations substantially reduces th...", "date_published": "2025-07-01T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "url": "http://arxiv.org/abs/2507.01234v3", "external_url": "http://arxiv.org/abs/2507.01234v3", "authors": [ { "name": "Yu Fan" }, { "name": "Yang Tian" }, { "name": "Shauli Ravfogel" }, { "name": "Mrinmaya Sachan" }, { "name": "Elliott Ash" }, { "name": "Alexander Hoyle" } ], "tags": [ "arXiv [cs.CL]", "Article", "cs.CL" ], "content_html": "

Abstract

Embedding-based similarity metrics between text sequences can be influenced not just by the content dimensions we most care about, but can also be biased by spurious attributes like the text's source or language. These document confounders cause problems for many applications, but especially those that need to pool texts from different corpora. This paper shows that a debiasing algorithm that removes information about observed confounders from the encoder representations substantially reduces these biases at a minimal computational cost. Document similarity and clustering metrics improve across every embedding variant and task we evaluate -- often dramatically. Interestingly, performance on out-of-distribution benchmarks is not impacted, indicating that the embeddings are not otherwise degraded.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CL" ], "metadata_source": "arxiv", "confidence_score": 0.6064102564102565, "quality_score": 100 } }, { "id": "bibtex:Mannocci2025-ig", "title": "Multimodal coordinated online behavior: Trade-offs and strategies", "content_text": "Coordinated online behavior, which spans from beneficial collective actions to harmful manipulation such as disinformation campaigns, has become a key focus in digital ecosystem analysis. Traditional methods often rely on monomodal approaches, focusing on single types of interactions like co-retweets or co-hashtags, or consider multiple modalities independently of each other. However, these approaches may overlook the complex dynamics inherent in multimodal coordination. This study compares diff...", "date_published": "2025-07-16T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "url": "http://arxiv.org/abs/2507.12108v2", "external_url": "http://arxiv.org/abs/2507.12108v2", "authors": [ { "name": "Lorenzo Mannocci" }, { "name": "Stefano Cresci" }, { "name": "Matteo Magnani" }, { "name": "Anna Monreale" }, { "name": "Maurizio Tesconi" } ], "tags": [ "Article", "arXiv [cs.SI]", "cs.CY", "cs.AI", "cs.HC", "cs.LG", "cs.SI" ], "content_html": "

Abstract

Coordinated online behavior, which spans from beneficial collective actions to harmful manipulation such as disinformation campaigns, has become a key focus in digital ecosystem analysis. Traditional methods often rely on monomodal approaches, focusing on single types of interactions like co-retweets or co-hashtags, or consider multiple modalities independently of each other. However, these approaches may overlook the complex dynamics inherent in multimodal coordination. This study compares different ways of operationalizing the detection of multimodal coordinated behavior. It examines the trade-off between weakly and strongly integrated multimodal models, highlighting the balance between capturing broader coordination patterns and identifying tightly coordinated behavior. By comparing monomodal and multimodal approaches, we assess the unique contributions of different data modalities and explore how varying implementations of multimodality impact detection outcomes. Our findings reveal that not all the modalities provide distinct insights, but that with a multimodal approach we can get a more comprehensive understanding of coordination dynamics. This work enhances the ability to detect and analyze coordinated online behavior, offering new perspectives for safeguarding the integrity of digital platforms.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI", "cs.AI", "cs.CY", "cs.HC", "cs.LG" ], "metadata_source": "arxiv", "confidence_score": 0.7272727272727272, "quality_score": 100 } }, { "id": "bibtex:Zhao2025-ny", "title": "Demystifying hashtag hijacking in the public opinion game: attention, narratives, and social bots", "content_text": "Published in Inf. Commun. Soc. | Year: 2025 | Authors: Zhao, Bei, Ren, Wujiong, He, Yuan, Zhang, Hongzhong", "date_published": "2025-07-15T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "url": "https://doi.org/10.1080/1369118x.2025.2531130", "external_url": "https://doi.org/10.1080/1369118x.2025.2531130", "authors": [ { "name": "Bei Zhao" }, { "name": "Wujiong Ren" }, { "name": "Yuan He" }, { "name": "Hongzhong Zhang" } ], "tags": [ "Article", "Information, Communication & Society" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/1369118x.2025.2531130", "citation_count": 0, "reference_count": 66, "type": "article", "publisher": "Informa UK Limited", "pages": "1--23", "metadata_source": "crossref", "confidence_score": 0.8333333333333333, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Kuznetsova2025-nu", "title": "Amplifying the regime: identifying coordinated activity of pro-government Telegram channels in Russia and Belarus", "content_text": "Published in J. Inf. Technol. Politics | Year: 2025 | Authors: Kuznetsova, Daria", "date_published": "2025-07-29T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "url": "https://doi.org/10.1080/19331681.2025.2540822", "external_url": "https://doi.org/10.1080/19331681.2025.2540822", "authors": [ { "name": "Daria Kuznetsova" } ], "tags": [ "Article", "Journal of Information Technology & Politics" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/19331681.2025.2540822", "citation_count": 1, "reference_count": 47, "type": "article", "publisher": "Informa UK Limited", "pages": "1--17", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Sadler2025-vu", "title": "Suspicious stories: taking narrative seriously in disinformation research", "content_text": "Abstract The concept of narrative has been widely employed in the disinformation literature. Nonetheless, its use has been dominated by loose intuitions of what narratives are and do alongside vague accounts of specific stories. Theorized more carefully, thinking in terms of narrative has much to offer disinformation studies by helping to better define what constitutes disinformation and providing sophisticated frameworks for assessing specific content. To enable a nuanced view of narrative trut...", "date_published": "2025-07-09T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "url": "https://doi.org/10.1093/ct/qtaf013", "external_url": "https://doi.org/10.1093/ct/qtaf013", "authors": [ { "name": "Neil Sadler" } ], "tags": [ "Communication Theory", "Article" ], "content_html": "

Abstract

The concept of narrative has been widely employed in the disinformation literature. Nonetheless, its use has been dominated by loose intuitions of what narratives are and do alongside vague accounts of specific stories. Theorized more carefully, thinking in terms of narrative has much to offer disinformation studies by helping to better define what constitutes disinformation and providing sophisticated frameworks for assessing specific content. To enable a nuanced view of narrative truth, I propose a “hermeneutic realist” approach in which stories “disclose” wider realities rather than constructing or simply reflecting them, supplemented with Phelan’s “narrative ethics” to facilitate inquiry into how far stories are morally legitimate. I apply these ideas in a case study of Twitter content posted by a Venezuelan political influencer blaming Russia’s 2022 invasion of Ukraine on NATO enlargement. In so doing, I show that content may be referentially and ethically problematic without necessarily being false.

Details

Links

DOI

", "_academic": { "doi": "10.1093/ct/qtaf013", "citation_count": 0, "reference_count": 110, "type": "article", "publisher": "Oxford University Press (OUP)", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Thiele2025-ol", "title": "Attributing coordinated social media manipulation: A theoretical model and typology", "content_text": "Social media are key arenas for public opinion formation, but are susceptible to coordinated social media manipulation (CSMM), that is, the orchestrated activity of multiple accounts to increase content visibility and deceive audiences. Despite advances in detecting and characterizing CSMM, the attribution problem—identifying the principals behind CSMM campaigns—has received little scholarly attention. In this article, we address this gap by synthesizing existing research and developing a theore...", "date_published": "2025-07-29T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "url": "https://doi.org/10.1177/14614448251350100", "external_url": "https://doi.org/10.1177/14614448251350100", "authors": [ { "name": "Daniel Thiele" }, { "name": "Miriam Milzner" }, { "name": "Annett Heft" }, { "name": "Baoning Gong" }, { "name": "Barbara Pfetsch" } ], "tags": [ "Article", "New Media & Society" ], "content_html": "

Abstract

Social media are key arenas for public opinion formation, but are susceptible to coordinated social media manipulation (CSMM), that is, the orchestrated activity of multiple accounts to increase content visibility and deceive audiences. Despite advances in detecting and characterizing CSMM, the attribution problem—identifying the principals behind CSMM campaigns—has received little scholarly attention. In this article, we address this gap by synthesizing existing research and developing a theoretical model for understanding CSMM. We propose a consolidated definition of CSMM, identify its key observable and hidden characteristics, and present a rational choice model for inferring principals’ strategic decisions from campaign features. In addition, we present a typology of CSMM campaigns, linking variations in scale, elaborateness, and disguise to principals’ resources, stakes, and influence strategies. Our contribution provides researchers with conceptual and heuristic tools for attribution and invites interdisciplinary and comparative research on CSMM campaigns.

Details

Links

DOI

", "_academic": { "doi": "10.1177/14614448251350100", "citation_count": 2, "reference_count": 67, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Meher2025-qb", "title": "ConflLlama: Domain-specific adaptation of large language models for conflict event classification", "content_text": "We present ConflLlama, demonstrating how efficient fine-tuning of large language models can advance automated classification tasks in political science research. While classification of political events has traditionally relied on manual coding or rigid rule-based systems, modern language models offer the potential for more nuanced, context-aware analysis. However, deploying these models requires overcoming significant technical and resource barriers. We demonstrate how to adapt open-source lang...", "date_published": "2025-07-15T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/20531680251356282", "external_url": "https://doi.org/10.1177/20531680251356282", "authors": [ { "name": "Shreyas Meher" }, { "name": "Patrick T. Brandt" } ], "tags": [ "Article", "Research & Politics" ], "content_html": "

Abstract

We present ConflLlama, demonstrating how efficient fine-tuning of large language models can advance automated classification tasks in political science research. While classification of political events has traditionally relied on manual coding or rigid rule-based systems, modern language models offer the potential for more nuanced, context-aware analysis. However, deploying these models requires overcoming significant technical and resource barriers. We demonstrate how to adapt open-source language models to specialized political science tasks, using conflict event classification as our proof of concept. Through quantization and efficient fine-tuning techniques, we show state-of-the-art performance while minimizing computational requirements. Our approach achieves a macro-averaged AUC of 0.791 and a weighted F1-score of 0.753, representing a 37.6% improvement over the base model, with accuracy gains of up to 1463% in challenging classifications. We offer a roadmap for political scientists to adapt these methods to their own research domains, democratizing access to advanced NLP capabilities across the discipline. This work bridges the gap between cutting-edge AI developments and practical political science research needs, enabling broader adoption of these powerful analytical tools.

Details

Links

DOI

", "_academic": { "doi": "10.1177/20531680251356282", "citation_count": 1, "reference_count": 24, "type": "article", "publisher": "SAGE Publications", "volume": "12", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:Marwick2025-ov", "title": "Shapeshifters and starseeds: Populist knowledge production, generous epistemology, and disinformation on U.s. conspiracy TikTok", "content_text": "This article investigates the intersection of identity, power, and knowledge production on U.S. ConspiracyTok, a genre of TikTok videos promoting conspiracy theories ranging from harmless speculation to harmful disinformation. Drawing on qualitative content analysis of 202 highly viewed videos, we examine how identity markers such as race and gender shape who is empowered or undermined in conspiratorial narratives, and how creators construct and circulate “evidence” to support their claims. We f...", "date_published": "2025-07-15T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/20563051251357483", "external_url": "https://doi.org/10.1177/20563051251357483", "authors": [ { "name": "Alice Marwick" }, { "name": "Courtlyn Pippert" }, { "name": "Katherine Furl" }, { "name": "Elaine Schnabel" } ], "tags": [ "Article", "Social Media + Society" ], "content_html": "

Abstract

This article investigates the intersection of identity, power, and knowledge production on U.S. ConspiracyTok, a genre of TikTok videos promoting conspiracy theories ranging from harmless speculation to harmful disinformation. Drawing on qualitative content analysis of 202 highly viewed videos, we examine how identity markers such as race and gender shape who is empowered or undermined in conspiratorial narratives, and how creators construct and circulate “evidence” to support their claims. We find that American ConspiracyTok is populated largely by young, non-White, and/or female creators who challenge the stereotype of the White, male conspiracy theorist. These creators interpellate audiences through visible identity markers, fostering a sense of intimacy and trust. Marginalized groups are often cast as victims, while institutions like science, government, and media are portrayed as villains. Creators construct legitimacy through visual media, personal anecdotes, deep lore, and remixing fictional and mainstream texts—engaging in a form of populist knowledge production within a generous epistemology that welcomes divergent truths and alternative worldviews. These practices blur the lines between entertainment and ideology, often mimicking academic or journalistic knowledge production while rejecting institutional authority. While ConspiracyTok can serve as a form of standpoint epistemology that empowers minoritized creators and critiques systemic injustice, it can just as easily reinforce bias and spread disinformation. ConspiracyTok is a site of vernacular theorizing where epistemology and identity are deeply entangled, offering both a critique of mainstream power and a cautionary tale about the populist appeal of conspiratorial thinking.

Details

Links

DOI

", "_academic": { "doi": "10.1177/20563051251357483", "citation_count": 1, "reference_count": 72, "type": "article", "publisher": "SAGE Publications", "volume": "11", "metadata_source": "crossref", "confidence_score": 0.8333333333333333, "quality_score": 100 } }, { "id": "bibtex:Jurg2025-ur", "title": "Ranking authority: A critical audit of YouTube’s content moderation", "content_text": "This chapter examines YouTube’s content moderation practices during the 2024 European Parliamentary elections. Using search results from the Netherlands, Germany, and France, plus an API sample, it explores YouTube’s pledge to raise authoritative sources and remove harmful content. Findings suggest the search algorithm favors legacy media and Public Service Media (PSM). While many PSM carry a publisher context label, deployment seems patchy and absent in European languages such as Basque, Catala...", "date_published": "2025-07-15T00:00:00Z", "_discovery_date": "2025-07-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.31219/osf.io/j3cn5_v1", "external_url": "https://doi.org/10.31219/osf.io/j3cn5_v1", "authors": [ { "name": "Daniel Jurg" }, { "name": "Salvatore Romano" }, { "name": "Bernhard Rieder" } ], "tags": [ "Article" ], "content_html": "

Abstract

This chapter examines YouTube’s content moderation practices during the 2024 European Parliamentary elections. Using search results from the Netherlands, Germany, and France, plus an API sample, it explores YouTube’s pledge to raise authoritative sources and remove harmful content. Findings suggest the search algorithm favors legacy media and Public Service Media (PSM). While many PSM carry a publisher context label, deployment seems patchy and absent in European languages such as Basque, Catalan, Danish, Finnish, Galician, Greek, and Portuguese. The study logs 486 election videos that became unavailable. However, sparse information problematizes assessing the enforcement of Terms of Service. The chapter concludes with a call for increased data access via the YouTube Research Program to scale content moderation studies on the platform.

Details

Links

DOI

", "_academic": { "doi": "10.31219/osf.io/j3cn5_v1", "citation_count": 0, "reference_count": 0, "type": "article", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:Paci2025-ag", "title": "They want to pretend not to understand: The Limits of Current LLMs in Interpreting Implicit Content of Political Discourse", "content_text": "Implicit content plays a crucial role in political discourse, where speakers systematically employ pragmatic strategies such as implicatures and presuppositions to influence their audiences. Large Language Models (LLMs) have demonstrated strong performance in tasks requiring complex semantic and pragmatic understanding, highlighting their potential for detecting and explaining the meaning of implicit content. However, their ability to do this within political discourse remains largely underexplo...", "date_published": "2025-06-07T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "url": "http://arxiv.org/abs/2506.06775v1", "external_url": "http://arxiv.org/abs/2506.06775v1", "authors": [ { "name": "Walter Paci" }, { "name": "Alessandro Panunzi" }, { "name": "Sandro Pezzelle" } ], "tags": [ "arXiv [cs.CL]", "Article", "cs.CL" ], "content_html": "

Abstract

Implicit content plays a crucial role in political discourse, where speakers systematically employ pragmatic strategies such as implicatures and presuppositions to influence their audiences. Large Language Models (LLMs) have demonstrated strong performance in tasks requiring complex semantic and pragmatic understanding, highlighting their potential for detecting and explaining the meaning of implicit content. However, their ability to do this within political discourse remains largely underexplored. Leveraging, for the first time, the large IMPAQTS corpus, which comprises Italian political speeches with the annotation of manipulative implicit content, we propose methods to test the effectiveness of LLMs in this challenging problem. Through a multiple-choice task and an open-ended generation task, we demonstrate that all tested models struggle to interpret presuppositions and implicatures. We conclude that current LLMs lack the key pragmatic capabilities necessary for accurately interpreting highly implicit language, such as that found in political discourse. At the same time, we highlight promising trends and future directions for enhancing model performance. We release our data and code at https://github.com/WalterPaci/IMPAQTS-PID

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CL" ], "metadata_source": "arxiv", "confidence_score": 0.7428571428571428, "quality_score": 100 } }, { "id": "bibtex:Entrena-Serrano2025-gw", "title": "TikTok's Research API: Problems Without Explanations", "content_text": "Following the Digital Services Act of 2023, which requires Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to facilitate data accessibility for independent research, TikTok augmented its Research API access within Europe in July 2023. This action was intended to ensure compliance with the DSA, bolster transparency, and address systemic risks. Nonetheless, research findings reveal that despite this expansion, notable limitations and inconsistencies persist within...", "date_published": "2025-06-11T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "url": "http://arxiv.org/abs/2506.09746v2", "external_url": "http://arxiv.org/abs/2506.09746v2", "authors": [ { "name": "Carlos Entrena-Serrano" }, { "name": "Martin Degeling" }, { "name": "Salvatore Romano" }, { "name": "Raziye Buse Çetin" } ], "tags": [ "cs.CY", "Article", "arXiv [cs.CY]" ], "content_html": "

Abstract

Following the Digital Services Act of 2023, which requires Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to facilitate data accessibility for independent research, TikTok augmented its Research API access within Europe in July 2023. This action was intended to ensure compliance with the DSA, bolster transparency, and address systemic risks. Nonetheless, research findings reveal that despite this expansion, notable limitations and inconsistencies persist within the data provided. Our experiment reveals that the API fails to provide metadata for one in eight videos provided through data donations, including official TikTok videos, advertisements, and content from specific accounts, without an apparent reason. The API data is incomplete, making it unreliable when working with data donations, a prominent methodology for algorithm audits and research on platform accountability. To monitor the functionality of the API and eventual fixes implemented by TikTok, we publish a dashboard with a daily check of the availability of 10 videos that were not retrievable in the last month. The video list includes very well-known accounts, notably that of Taylor Swift. The current API lacks the necessary capabilities for thorough independent research and scrutiny. It is crucial to support and safeguard researchers who utilize data scraping to independently validate the platform's data quality.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CY" ], "metadata_source": "arxiv", "confidence_score": 0.76, "quality_score": 100 } }, { "id": "bibtex:Rieder2025-ju", "title": "Forgetful by design? A critical audit of YouTube's search API for academic research", "content_text": "This paper critically audits the search endpoint of YouTube's Data API (v3), a common tool for academic research. Through systematic weekly searches over six months using eleven queries, we identify major limitations regarding completeness, representativeness, consistency, and bias. Our findings reveal substantial differences between ranking parameters like relevance and date in terms of video recall and precision, with relevance often retrieving numerous off-topic videos. We also observe severe...", "date_published": "2025-06-13T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "url": "https://doi.org/10.1080/1369118X.2025.2591767", "external_url": "https://doi.org/10.1080/1369118X.2025.2591767", "authors": [ { "name": "Bernhard Rieder" }, { "name": "Adrian Padilla" }, { "name": "Oscar Coromina" } ], "tags": [ "Information, Communication and Society, 1-20", "Article", "cs.IR", "cs.HC", "cs.SI" ], "content_html": "

Abstract

This paper critically audits the search endpoint of YouTube's Data API (v3), a common tool for academic research. Through systematic weekly searches over six months using eleven queries, we identify major limitations regarding completeness, representativeness, consistency, and bias. Our findings reveal substantial differences between ranking parameters like relevance and date in terms of video recall and precision, with relevance often retrieving numerous off-topic videos. We also observe severe temporal decay in video discoverability: the number of retrievable videos for a given period drops dramatically within just 20-60 days of publication, even though these videos remain on the platform. This potentially undermines research designs that rely on systematic data collection. Furthermore, search results lack consistency, with identical queries yielding different video sets over time, compromising replicability. A case study on the European Parliament elections highlights how these issues impact research outcomes. While the paper offers several mitigation strategies, it concludes that the API's search function, potentially prioritizing 'freshness' over comprehensive retrieval, is not adequate for robust academic research, especially concerning Digital Services Act requirements.

Details

Links

DOI | arXiv | PDF

", "_academic": { "doi": "10.1080/1369118X.2025.2591767", "open_access": true, "type": "article", "subjects": [ "cs.IR", "cs.HC", "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.7428571428571428, "quality_score": 100 } }, { "id": "bibtex:Goel2025-iq", "title": "Using co-sharing to identify use of mainstream news for promoting potentially misleading narratives", "content_text": "Much of the research quantifying volume and spread of online misinformation measures the construct at the source level, identifying a set of specific unreliable domains that account for a relatively small share of news consumption. This source-level dichotomy obscures the potential for users to repurpose factually true information from reliable sources to advance misleading narratives. We demonstrate this potentially far more prevalent form of misinformation by identifying articles from reliable...", "date_published": "2025-06-10T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "url": "https://doi.org/10.1038/s41562-025-02223-4", "external_url": "https://doi.org/10.1038/s41562-025-02223-4", "authors": [ { "name": "Pranav Goel" }, { "name": "Jon Green" }, { "name": "David Lazer" }, { "name": "Philip S. Resnik" } ], "tags": [ "Nature Human Behaviour", "Article" ], "content_html": "

Abstract

Much of the research quantifying volume and spread of online misinformation measures the construct at the source level, identifying a set of specific unreliable domains that account for a relatively small share of news consumption. This source-level dichotomy obscures the potential for users to repurpose factually true information from reliable sources to advance misleading narratives. We demonstrate this potentially far more prevalent form of misinformation by identifying articles from reliable sources that are frequently co-shared with (shared by users who also shared) 'fake' news on social media, and concurrently extracting narratives present in fake news content and claims fact checked as false. Specifically in this study, we use Twitter/X data from May 2018 to November 2021 matched to a US voter file. We find that narratives present in misinformation content are significantly more likely to occur in co-shared articles than in articles from the same reliable sources that are not co-shared, consistent with users using information from mainstream sources to enhance the credibility and reach of potentially misleading claims.

Details

Links

DOI

", "_academic": { "doi": "10.1038/s41562-025-02223-4", "citation_count": 0, "reference_count": 74, "type": "article", "publisher": "Nature Publishing Group", "pages": "1--18", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Renault2025-uh", "title": "Republicans are flagged more often than Democrats for sharing misinformation on X’s Community Notes", "content_text": "We use crowd-sourced assessments from X’s Community Notes program to examine whether there are partisan differences in the sharing of misleading information. Unlike previous studies, misleadingness here is determined by agreement across a diverse community of platform users, rather than by fact-checkers. We find that 67% more posts by Republicans are flagged as misleading compared to posts by Democrats. These results are not base rate artifacts, as we find no meaningful over-representation of Re...", "date_published": "2025-06-15T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.31234/osf.io/vk5yj_v1", "external_url": "https://doi.org/10.31234/osf.io/vk5yj_v1", "authors": [ { "name": "Thomas Renault" }, { "name": "Mohsen Mosleh" }, { "name": "David Gertler Rand" } ], "tags": [ "Article", "Proc. Natl. Acad. Sci. U. S. A." ], "content_html": "

Abstract

We use crowd-sourced assessments from X’s Community Notes program to examine whether there are partisan differences in the sharing of misleading information. Unlike previous studies, misleadingness here is determined by agreement across a diverse community of platform users, rather than by fact-checkers. We find that 67% more posts by Republicans are flagged as misleading compared to posts by Democrats. These results are not base rate artifacts, as we find no meaningful over-representation of Republicans among X users. Our findings provide strong evidence of a partisan asymmetry in misinformation sharing which cannot be attributed to political bias on the part of raters, and indicate that Republicans will be sanctioned more than Democrats even if platforms transition from professional fact-checking to Community Notes.

Details

Links

DOI

", "_academic": { "doi": "10.31234/osf.io/vk5yj_v1", "citation_count": 0, "reference_count": 0, "type": "article", "publisher": "Proceedings of the National Academy of Sciences", "volume": "122", "metadata_source": "crossref", "confidence_score": 0.8333333333333333, "quality_score": 100 } }, { "id": "bibtex:Ventura2025-sw", "title": "Misinformation beyond traditional feeds: Evidence from a WhatsApp deactivation experiment in Brazil", "content_text": "Published in J. Polit. | Year: 2025 | Authors: Ventura, Tiago, Majumdar, Rajeshwari, Nagler, Jonathan, Tucker, Joshua", "date_published": "2025-06-12T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "url": "https://doi.org/10.1086/737172", "external_url": "https://doi.org/10.1086/737172", "authors": [ { "name": "Tiago Ventura" }, { "name": "Rajeshwari Majumdar" }, { "name": "Jonathan Nagler" }, { "name": "Joshua Tucker" } ], "tags": [ "Article", "The Journal of Politics" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1086/737172", "citation_count": 0, "reference_count": 73, "type": "article", "publisher": "University of Chicago Press", "metadata_source": "crossref", "confidence_score": 0.8333333333333333, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Waight2025-al", "title": "Quantifying narrative similarity across languages", "content_text": "How can one understand the spread of ideas across text data? This is a key measurement problem in sociological inquiry, from the study of how interest groups shape media discourse, to the spread of policy across institutions, to the diffusion of organizational structures and institution themselves. To study how ideas and narratives diffuse across text, we must first develop a method to identify whether texts share the same information and narratives, rather than the same broad themes or exact fe...", "date_published": "2025-06-15T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/00491241251340080", "external_url": "https://doi.org/10.1177/00491241251340080", "authors": [ { "name": "Hannah Waight" }, { "name": "Solomon Messing" }, { "name": "Anton Shirikov" }, { "name": "Margaret E. Roberts" }, { "name": "Jonathan Nagler" }, { "name": "Jason Greenfield" }, { "name": "Megan A. Brown" }, { "name": "Kevin Aslett" }, { "name": "Joshua A. Tucker" } ], "tags": [ "Article", "Sociological Methods & Research" ], "content_html": "

Abstract

How can one understand the spread of ideas across text data? This is a key measurement problem in sociological inquiry, from the study of how interest groups shape media discourse, to the spread of policy across institutions, to the diffusion of organizational structures and institutions themselves. To study how ideas and narratives diffuse across text, we must first develop a method to identify whether texts share the same information and narratives, rather than the same broad themes or exact features. We propose a novel approach to measure this quantity of interest, which we call “narrative similarity,” by using large language models to distill texts to their core ideas and then compare the similarity of claims rather than of words, phrases, or sentences. The result is an estimand much closer to narrative similarity than what is possible with past relevant alternatives, including exact text reuse, which returns lexically similar documents; topic modeling, which returns topically similar documents; or an array of alternative approaches. We devise an approach to providing out-of-sample measures of performance (precision, recall, F1) and show that our approach outperforms relevant alternatives by a large margin. We apply our approach to an important case study: the spread of Russian claims about the development of a Ukrainian bioweapons program in U.S. mainstream and fringe news websites. While we focus on news in this application, our approach can be applied more broadly to the study of propaganda, misinformation, diffusion of policy and cultural objects, among other topics.

Details

Links

DOI

", "_academic": { "doi": "10.1177/00491241251340080", "citation_count": 1, "reference_count": 109, "type": "article", "publisher": "SAGE Publications", "metadata_source": "crossref", "confidence_score": 0.8136363636363636, "quality_score": 100 } }, { "id": "bibtex:Kansaon2025-id", "title": "From fake news to real protests: WhatsApp’s role in Brazilian political coordination", "content_text": "The growth of social networks has raised concerns about the misuse of these platforms by disinformation campaigns, social bots, and coordinated activities. Among these platforms, WhatsApp has become a focal point for this abuse, particularly in Brazil, one of the countries with the highest use of the platform. Despite acknowledging the presence of coordinated campaigns and implementing restrictions on the number of messages forwarded per user, the platform continues to be abused. Due to its priv...", "date_published": "2025-06-07T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "url": "https://doi.org/10.1609/icwsm.v19i1.35857", "external_url": "https://doi.org/10.1609/icwsm.v19i1.35857", "authors": [ { "name": "Daniel Kansaon" }, { "name": "Philipe de Freitas Melo" }, { "name": "Savvas Zannettou" }, { "name": "Fabricio Benevenuto" } ], "tags": [ "Article", "Proceedings of the International AAAI Conference on Web and Social Media" ], "content_html": "

Abstract

The growth of social networks has raised concerns about the misuse of these platforms by disinformation campaigns, social bots, and coordinated activities. Among these platforms, WhatsApp has become a focal point for this abuse, particularly in Brazil, one of the countries with the highest use of the platform. Despite acknowledging the presence of coordinated campaigns and implementing restrictions on the number of messages forwarded per user, the platform continues to be abused. Due to its private nature and the difficulty of collecting information, little is known about these campaigns and the messages they disseminate. Given this context, our study investigates the presence of coordinated activities on WhatsApp in Brazil, identifying their content and purpose, especially how these messages relate to recent Brazilian political events. To answer these questions, we analyzed 13 million messages from 1,444 political groups over seven months from July 2022 to January 2023. Using network analysis, our findings suggest a significant prevalence of coordinated activity in the propagation of news messages, 26% of which originate from misinformation sites. Furthermore, we found that images play a key role in coordinated activity, accounting for 15% of messages, which are also used to mislead. Finally, coordinated accounts were used to organize collective actions, including attacks and protests against election results.

Details

Links

DOI

", "_academic": { "doi": "10.1609/icwsm.v19i1.35857", "citation_count": 0, "reference_count": 0, "type": "article", "publisher": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "19", "pages": "1007--1020", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 100 } }, { "id": "bibtex:Marwick2025-vx", "title": "True costs of misinformation| mountains of evidence: Processual “redpilling” as a Socio-technical effect of disinformation", "content_text": "How do people come to believe Far-Right, extremist, and conspiratorial ideas they encounter online? This article examines how participants in primarily U.S.-based Far-Right online communities describe their adoption of “redpill” beliefs and the role of disinformation in these accounts. Applying the socio-technical theory of media effects, we conduct a qualitative content analysis of “redpilling narratives” gathered from Reddit, Gab, and Discord. While many users frame redpilling as a moment of c...", "date_published": "2025-06-15T00:00:00Z", "_discovery_date": "2025-06-15T00:00:00Z", "_date_estimated": true, "url": "https://scholar.google.com/scholar?q=True%20costs%20of%20misinformation%7C%20mountains%20of%20evidence%3A%20Processual%20%E2%80%9Credpilling%E2%80%9D%20as%20a%20Socio-technical%20effect%20of%20disinformation", "external_url": "https://scholar.google.com/scholar?q=True%20costs%20of%20misinformation%7C%20mountains%20of%20evidence%3A%20Processual%20%E2%80%9Credpilling%E2%80%9D%20as%20a%20Socio-technical%20effect%20of%20disinformation", "authors": [ { "name": "Marwick, Alice E" }, { "name": "Furl, Katherine" } ], "tags": [ "disinformation", "Article", "online radicalization", "Int. J. Commun.", "extremism", "evidence", "Far-Right", "redpill" ], "content_html": "

Abstract

How do people come to believe Far-Right, extremist, and conspiratorial ideas they encounter online? This article examines how participants in primarily U.S.-based Far-Right online communities describe their adoption of “redpill” beliefs and the role of disinformation in these accounts. Applying the socio-technical theory of media effects, we conduct a qualitative content analysis of “redpilling narratives” gathered from Reddit, Gab, and Discord. While many users frame redpilling as a moment of conversion, others portray redpilling as a process, something achieved incrementally through years of community participation and “doing your own research.” In both cases, disinformation presented as evidence and the capacity to determine the veracity of presented evidence play important roles in redpilling oneself and others. By framing their beliefs as the rational and logical results of fully considering a plethora of evidence, redpill adherents can justify holding and promoting otherwise indefensible prejudices. The community’s creation, promotion, and repetition of Far-Right disinformation, much of which is historical or “scientific” in nature, play a crucial role in the adoption of Far-Right beliefs.

Details

", "_academic": { "type": "article", "volume": "19", "pages": "26", "quality_score": 80, "quality_issues": [ "missing_link", "not_enriched" ] } }, { "id": "bibtex:Gaisbauer2025-by", "title": "A political cartography of news sharing: Capturing story, outlet and content level of news circulation on Twitter", "content_text": "News sharing on digital platforms shapes the digital spaces millions of users navigate. Trace data from these platforms also enables researchers to study online news circulation. In this context, research on the types of news shared by users of differential political leaning has received considerable attention. We argue that most existing approaches (i) rely on an overly simplified measurement of political leaning, (ii) consider only the outlet level in their analyses, and/or (iii) study news ci...", "date_published": "2025-05-13T00:00:00Z", "_discovery_date": "2025-05-15T00:00:00Z", "url": "http://arxiv.org/abs/2505.08359v1", "external_url": "http://arxiv.org/abs/2505.08359v1", "authors": [ { "name": "Felix Gaisbauer" }, { "name": "Armin Pournaki" }, { "name": "Jakob Ohme" } ], "tags": [ "cs.SI", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

News sharing on digital platforms shapes the digital spaces millions of users navigate. Trace data from these platforms also enables researchers to study online news circulation. In this context, research on the types of news shared by users of differential political leaning has received considerable attention. We argue that most existing approaches (i) rely on an overly simplified measurement of political leaning, (ii) consider only the outlet level in their analyses, and/or (iii) study news circulation among partisans by making ex-ante distinctions between partisan and non-partisan news. In this methodological contribution, we introduce a research pipeline that allows a systematic mapping of news sharing both with respect to source and content. As a proof of concept, we demonstrate insights that otherwise remain unnoticed: Diversification of news sharing along the second political dimension; topic-dependent sharing of outlets; some outlets catering different items to different audiences.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.7428571428571428, "quality_score": 100 } }, { "id": "bibtex:Luceri2025-tr", "title": "Coordinated inauthentic behavior on TikTok: Challenges and opportunities for detection in a video-first ecosystem", "content_text": "Detecting coordinated inauthentic behavior (CIB) is central to the study of online influence operations. However, most methods focus on text-centric platforms, leaving video-first ecosystems like TikTok largely unexplored. To address this gap, we develop and evaluate a computational framework for detecting CIB on TikTok, leveraging a network-based approach adapted to the platform's unique content and interaction structures. Building on existing approaches, we construct user similarity networks b...", "date_published": "2025-05-16T00:00:00Z", "_discovery_date": "2025-05-15T00:00:00Z", "url": "http://arxiv.org/abs/2505.10867v2", "external_url": "http://arxiv.org/abs/2505.10867v2", "authors": [ { "name": "Luca Luceri" }, { "name": "Tanishq Vijay Salkar" }, { "name": "Ashwin Balasubramanian" }, { "name": "Gabriela Pinto" }, { "name": "Chenning Sun" }, { "name": "Emilio Ferrara" } ], "tags": [ "cs.SI", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

Detecting coordinated inauthentic behavior (CIB) is central to the study of online influence operations. However, most methods focus on text-centric platforms, leaving video-first ecosystems like TikTok largely unexplored. To address this gap, we develop and evaluate a computational framework for detecting CIB on TikTok, leveraging a network-based approach adapted to the platform's unique content and interaction structures. Building on existing approaches, we construct user similarity networks based on shared behaviors, including synchronized posting, repeated use of similar captions, multimedia content reuse, and hashtag sequence overlap, and apply graph pruning techniques to identify dense networks of likely coordinated accounts. Analyzing a dataset of 793K TikTok videos related to the 2024 U.S. Presidential Election, we uncover a range of coordinated activities, from synchronized amplification of political narratives to semi-automated content replication using AI-generated voiceovers and split-screen video formats. Our findings show that while traditional coordination indicators generalize well to TikTok, other signals, such as those based on textual similarity of video transcripts or Duet and Stitch interactions, prove ineffective, highlighting the platform's distinct content norms and interaction mechanics. This work provides the first empirical foundation for studying and detecting CIB on TikTok, paving the way for future research into influence operations in short-form video platforms.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.7214285714285714, "quality_score": 100 } }, { "id": "bibtex:Gerard2025-br", "title": "Bridging the narrative divide: Cross-platform discourse networks in fragmented ecosystems", "content_text": "Political discourse has grown increasingly fragmented across different social platforms, making it challenging to trace how narratives spread and evolve within such a fragmented information ecosystem. Reconstructing social graphs and information diffusion networks is challenging, and available strategies typically depend on platform-specific features and behavioral signals which are often incompatible across systems and increasingly restricted. To address these challenges, we present a platform-...", "date_published": "2025-05-22T00:00:00Z", "_discovery_date": "2025-05-15T00:00:00Z", "url": "http://arxiv.org/abs/2505.21729v1", "external_url": "http://arxiv.org/abs/2505.21729v1", "authors": [ { "name": "Patrick Gerard" }, { "name": "Hans W. A. Hanley" }, { "name": "Luca Luceri" }, { "name": "Emilio Ferrara" } ], "tags": [ "cs.SI", "cs.CY", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

Political discourse has grown increasingly fragmented across different social platforms, making it challenging to trace how narratives spread and evolve within such a fragmented information ecosystem. Reconstructing social graphs and information diffusion networks is challenging, and available strategies typically depend on platform-specific features and behavioral signals which are often incompatible across systems and increasingly restricted. To address these challenges, we present a platform-agnostic framework that allows us to accurately and efficiently reconstruct the underlying social graph of users' cross-platform interactions, based on discovering latent narratives and users' participation therein. Our method achieves state-of-the-art performance in key network-based tasks: information operation detection, ideological stance prediction, and cross-platform engagement prediction, while requiring significantly less data than existing alternatives and capturing a broader set of users. When applied to cross-platform information dynamics between Truth Social and X (formerly Twitter), our framework reveals a small, mixed-platform group of bridge users, comprising just 0.33% of users and 2.14% of posts, who introduce nearly 70% of migrating narratives to the receiving platform. These findings offer a structural lens for anticipating how narratives traverse fragmented information ecosystems, with implications for cross-platform governance, content moderation, and policy interventions.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI", "cs.CY" ], "metadata_source": "arxiv", "confidence_score": 0.7272727272727272, "quality_score": 100 } }, { "id": "bibtex:Hartmann2025-px", "title": "A systematic review of echo chamber research: comparative analysis of conceptualizations, operationalizations, and varying outcomes", "content_text": "Abstract This systematic review synthesizes research on echo chambers and filter bubbles to explore the reasons behind dissent regarding their existence, antecedents, and effects. It provides a taxonomy of conceptualizations and operationalizations, analyzing how measurement approaches and contextual factors influence outcomes. The review of 129 studies identifies variations in measurement approaches, as well as regional, political, cultural, and platform-specific biases, as key factors contribu...", "date_published": "2025-05-15T00:00:00Z", "_discovery_date": "2025-05-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1007/s42001-025-00381-z", "external_url": "https://doi.org/10.1007/s42001-025-00381-z", "authors": [ { "name": "David Hartmann" }, { "name": "Sonja Mei Wang" }, { "name": "Lena Pohlmann" }, { "name": "Bettina Berendt" } ], "tags": [ "Article", "Journal of Computational Social Science" ], "content_html": "

Abstract

This systematic review synthesizes research on echo chambers and filter bubbles to explore the reasons behind dissent regarding their existence, antecedents, and effects. It provides a taxonomy of conceptualizations and operationalizations, analyzing how measurement approaches and contextual factors influence outcomes. The review of 129 studies identifies variations in measurement approaches, as well as regional, political, cultural, and platform-specific biases, as key factors contributing to the lack of consensus. Studies based on homophily and computational social science methods often support the echo chamber hypothesis, while research on content exposure and broader media environments, such as surveys, tends to challenge it. Group behavior, cultural influences, instant messaging platforms, and short video platforms remain underexplored. The strong geographic focus on the United States further highlights the need for studies in multi-party systems and regions beyond the Global North. Future research should prioritize cross-platform studies, continuous algorithmic audits, and investigations into the causal links between polarization, fragmentation, and echo chambers to advance the field. This review also provides recommendations for using the EU’s Digital Services Act to enhance research in this area and conduct studies outside the US in multi-party systems. By addressing these gaps, this review contributes to a more comprehensive understanding of echo chambers, their measurement, and their societal impacts.

Details

Links

DOI

", "_academic": { "doi": "10.1007/s42001-025-00381-z", "citation_count": 8, "reference_count": 188, "type": "article", "publisher": "Springer Science and Business Media LLC", "volume": "8", "pages": "52", "metadata_source": "crossref", "confidence_score": 0.83, "quality_score": 100 } }, { "id": "bibtex:Yang2025-iv", "title": "Coordinated link sharing on Facebook", "content_text": "Malicious actors regularly attempt to manipulate social media using coordinated posting. Many existing methods for detecting this coordination, though, have relied primarily on post-timing, which is trivially easy to change. In this paper, we make a significant methodological advancement in coordination detection, leveraging highly regular statistical patterns in the speed and frequency of sharing. We apply and validate this approach on Facebook, using 11.2 million link posts from a list of 16,1...", "date_published": "2025-05-05T00:00:00Z", "_discovery_date": "2025-05-15T00:00:00Z", "url": "https://doi.org/10.1038/s41598-025-00233-w", "external_url": "https://doi.org/10.1038/s41598-025-00233-w", "authors": [ { "name": "Yunkang Yang" }, { "name": "Ramesh Paudel" }, { "name": "Jordan McShan" }, { "name": "Matthew Hindman" }, { "name": "H. Howie Huang" }, { "name": "David Broniatowski" } ], "tags": [ "Article", "Facebook", "Coordination", "Scientific Reports", "Social media" ], "content_html": "

Abstract

Malicious actors regularly attempt to manipulate social media using coordinated posting. Many existing methods for detecting this coordination, though, have relied primarily on post-timing, which is trivially easy to change. In this paper, we make a significant methodological advancement in coordination detection, leveraging highly regular statistical patterns in the speed and frequency of sharing. We apply and validate this approach on Facebook, using 11.2 million link posts from a list of the 16,169 most popular English-language Facebook pages that referenced at least one of the top eight US politicians in any post, a set of pages that produced more than 91% of all user engagement in this category during our collection period. Our approach can be calibrated and adapted across contexts, platforms, and times, allowing researchers to build valid, testable, but still human-interpretable models of platform manipulations.

Details

Links

DOI

", "_academic": { "doi": "10.1038/s41598-025-00233-w", "citation_count": 1, "reference_count": 33, "type": "article", "publisher": "Springer Science and Business Media LLC", "volume": "15", "pages": "15684", "metadata_source": "crossref", "confidence_score": 0.8214285714285714, "quality_score": 100 } }, { "id": "bibtex:Votta2025-xz", "title": "The cost of reach: Testing the role of ad delivery algorithms in online political campaigns", "content_text": "Published in Polit. Commun. | Year: 2025 | Authors: Votta, Fabio, Dobber, Tom, Guinaudeau, Benjamin, Helberger, Natali, de Vreese, Claes", "date_published": "2025-05-04T00:00:00Z", "_discovery_date": "2025-05-15T00:00:00Z", "url": "https://doi.org/10.1080/10584609.2024.2439317", "external_url": "https://doi.org/10.1080/10584609.2024.2439317", "authors": [ { "name": "Fabio Votta" }, { "name": "Tom Dobber" }, { "name": "Benjamin Guinaudeau" }, { "name": "Natali Helberger" }, { "name": "Claes de Vreese" } ], "tags": [ "Article", "Political Communication" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/10584609.2024.2439317", "citation_count": 0, "reference_count": 64, "type": "article", "publisher": "Informa UK Limited", "volume": "42", "pages": "476--508", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Di-Marco2025-aa", "title": "Post-hoc evaluation of nodes influence in information cascades: The case of coordinated accounts", "content_text": "In the last few years, social media has gained an unprecedented amount of attention, playing a pivotal role in shaping the contemporary landscape of communication and connection. However, Coordinated inauthentic Behaviour (CIB), defined as orchestrated efforts by entities to deceive or mislead users about their identity and intentions, has emerged as a tactic to exploit the online discourse. In this study, we quantify the efficacy of CIB tactics by defining a general framework for evaluating the...", "date_published": "2025-05-31T00:00:00Z", "_discovery_date": "2025-05-15T00:00:00Z", "url": "https://doi.org/10.1145/3700644", "external_url": "https://doi.org/10.1145/3700644", "authors": [ { "name": "Niccolò Di Marco" }, { "name": "Sara Brunetti" }, { "name": "Matteo Cinelli" }, { "name": "Walter Quattrociocchi" } ], "tags": [ "Article", "ACM Transactions on the Web" ], "content_html": "

Abstract

In the last few years, social media has gained an unprecedented amount of attention, playing a pivotal role in shaping the contemporary landscape of communication and connection. However, Coordinated Inauthentic Behaviour (CIB), defined as orchestrated efforts by entities to deceive or mislead users about their identity and intentions, has emerged as a tactic to exploit the online discourse. In this study, we quantify the efficacy of CIB tactics by defining a general framework for evaluating the influence of a subset of nodes in a directed tree. We design two algorithms that provide optimal and greedy post-hoc placement strategies that lead to maximising the configuration influence. We then consider cascades from information spreading on X (formerly known as Twitter) to compare the observed behaviour with our algorithms. The results show that, according to our model, coordinated accounts are quite inefficient in terms of their network influence, thus suggesting that they may play a less pivotal role than expected. Moreover, the causes of these poor results may be found in two separate aspects: a bad placement strategy and a scarcity of resources.

Details

Links

DOI

", "_academic": { "doi": "10.1145/3700644", "citation_count": 5, "reference_count": 78, "type": "article", "publisher": "Association for Computing Machinery (ACM)", "volume": "19", "pages": "1--19", "metadata_source": "crossref", "confidence_score": 0.83, "quality_score": 100 } }, { "id": "bibtex:Allcott2025-jb", "title": "The effects of political advertising on Facebook and Instagram before the 2020 US election", "content_text": "We study the effects of social media political advertising by randomizing subsets of 36,906 Facebook users and 25,925 Instagram users to have political ads removed from their news feeds for six weeks before the 2020 US presidential election. We show that most presidential ads were targeted toward parties’ own supporters and that fundraising ads were most common. On both Facebook and Instagram, we found no detectable effects of removing political ads on political knowledge, polarization, perceive...", "date_published": "2025-05-15T00:00:00Z", "_discovery_date": "2025-05-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.2139/ssrn.5259653", "external_url": "https://doi.org/10.2139/ssrn.5259653", "authors": [ { "name": "Hunt Allcott" }, { "name": "Matthew Gentzkow" }, { "name": "Ro’ee Levy" }, { "name": "Adriana Crespo-Tenorio" }, { "name": "Natasha Dumas" }, { "name": "Winter Mason" }, { "name": "Devra Moehler" }, { "name": "Pablo Barbera" }, { "name": "Taylor Brown" }, { "name": "Juan Carlos Cisneros" }, { "name": "Drew Dimmery" }, { "name": "Deen Freelon" }, { "name": "Sandra González-Bailón" }, { "name": "Andrew Guess" }, { "name": "Young Mie Kim" }, { "name": "David Lazer" }, { "name": "Neil A. Malhotra" }, { "name": "Sameer Nair-Desai" }, { "name": "Brendan Nyhan" }, { "name": "Ana Carolina Paixao de Queiroz" }, { "name": "Jennifer Pan" }, { "name": "Jaime Settle" }, { "name": "Emily Thorson" }, { "name": "Rebekah Tromble" }, { "name": "Carlos Velasco" }, { "name": "Benjamin Wittenbrink" }, { "name": "Magdalena Wojcieszak" }, { "name": "Shiqi Yang" }, { "name": "Saam Zahedian" }, { "name": "Annie Franco" }, { "name": "Chad Kiewiet de Jonge" }, { "name": "Talia Stroud" }, { "name": "Joshua Aaron Tucker" } ], "tags": [ "SSRN Electronic Journal", "Techreport" ], "content_html": "

Abstract

We study the effects of social media political advertising by randomizing subsets of 36,906 Facebook users and 25,925 Instagram users to have political ads removed from their news feeds for six weeks before the 2020 US presidential election. We show that most presidential ads were targeted toward parties’ own supporters and that fundraising ads were most common. On both Facebook and Instagram, we found no detectable effects of removing political ads on political knowledge, polarization, perceived legitimacy of the election, political participation (including campaign contributions), candidate favorability, and turnout. This was true overall and for both Democrats and Republicans separately.

Details

Links

DOI

", "_academic": { "doi": "10.2139/ssrn.5259653", "citation_count": 0, "reference_count": 350, "type": "techreport", "publisher": "National Bureau of Economic Research", "metadata_source": "crossref", "confidence_score": 0.864, "quality_score": 100 } }, { "id": "bibtex:Gattermann2025-yx", "title": "The role of far-right party performance in shaping disinformation concerns of European voters: evidence from the 2024 European Parliament elections", "content_text": "Published in J. Eur. Public Policy | Year: 2025 | Authors: Gattermann, Katjana, van den Hoogen, Elske, de Vreese, Claes", "date_published": "2025-04-15T00:00:00Z", "_discovery_date": "2025-04-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1080/13501763.2025.2489088", "external_url": "https://doi.org/10.1080/13501763.2025.2489088", "authors": [ { "name": "Katjana Gattermann" }, { "name": "Elske van den Hoogen" }, { "name": "Claes de Vreese" } ], "tags": [ "Article", "Journal of European Public Policy" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/13501763.2025.2489088", "citation_count": 2, "reference_count": 64, "type": "article", "publisher": "Informa UK Limited", "pages": "1--26", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Smith2025-kc", "title": "Emergent structures of attention on social media are driven by amplification and triad transitivity", "content_text": "Abstract As they evolve, social networks tend to form transitive triads more often than random chance and structural constraints would suggest. However, the mechanisms by which triads in these networks become transitive are largely unexplored. We leverage a unique combination of data and methods to demonstrate a causal link between amplification and triad transitivity in a directed social network. Additionally, we develop the concept of the “attention broker,” an extension of the previously theo...", "date_published": "2025-03-27T00:00:00Z", "_discovery_date": "2025-04-15T00:00:00Z", "url": "https://doi.org/10.1093/pnasnexus/pgaf106", "external_url": "https://doi.org/10.1093/pnasnexus/pgaf106", "authors": [ { "name": "Alyssa H Smith" }, { "name": "Jon Green" }, { "name": "Brooke F. Welles" }, { "name": "David Lazer" } ], "tags": [ "Article", "PNAS Nexus", "social media", "social networks", "triad transitivity", "tertius iungens", "amplification" ], "content_html": "

Abstract

As they evolve, social networks tend to form transitive triads more often than random chance and structural constraints would suggest. However, the mechanisms by which triads in these networks become transitive are largely unexplored. We leverage a unique combination of data and methods to demonstrate a causal link between amplification and triad transitivity in a directed social network. Additionally, we develop the concept of the “attention broker,” an extension of the previously theorized tertius iungens (or “third who joins”). We use an innovative technique to identify time-bounded Twitter/X following events, and then use difference-in-differences to show that attention brokers cause triad transitivity by amplifying content. Attention brokers intervene in the evolution of any sociotechnical system where individuals can amplify content while referencing its originator.

Details

Links

DOI

", "_academic": { "doi": "10.1093/pnasnexus/pgaf106", "citation_count": 0, "reference_count": 82, "type": "article", "publisher": "Oxford University Press (OUP)", "volume": "4", "pages": "gaf106", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Moran2025-qn", "title": "The end of trust and safety?: Examining the future of content moderation and upheavals in professional online safety efforts", "content_text": "Year: 2025 | Authors: Moran, Rachel Elizabeth, Schafer, Joseph, Bayar, Mert, Starbird, Kate", "date_published": "2025-04-26T00:00:00Z", "_discovery_date": "2025-04-15T00:00:00Z", "url": "https://doi.org/10.1145/3706598.3713662", "external_url": "https://doi.org/10.1145/3706598.3713662", "authors": [ { "name": "Rachel Elizabeth Moran" }, { "name": "Joseph Schafer" }, { "name": "Mert Bayar" }, { "name": "Kate Starbird" } ], "tags": [ "Inproceedings", "Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1145/3706598.3713662", "citation_count": 6, "reference_count": 72, "type": "inproceedings", "publisher": "ACM", "pages": "1--14", "metadata_source": "crossref", "confidence_score": 0.83, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Bruns2025-fz", "title": "Untangling the furball: A practice mapping approach to the analysis of multimodal interactions in social networks", "content_text": "This article introduces the analytical approach of practice mapping , using vector embeddings of network actions and interactions to map commonalities and disjunctures in the practices of social media users, as a framework for methodological advancement beyond the limitations of conventional network analysis and visualization. In particular, the methodological framework we outline here has the potential to incorporate multiple distinct modes of interaction into a single practice map; can be furt...", "date_published": "2025-04-15T00:00:00Z", "_discovery_date": "2025-04-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/20563051251331748", "external_url": "https://doi.org/10.1177/20563051251331748", "authors": [ { "name": "Axel Bruns" }, { "name": "Kateryna Kasianenko" }, { "name": "Vish Padinjaredath Suresh" }, { "name": "Ehsan Dehghan" }, { "name": "Laura Vodden" } ], "tags": [ "Article", "Social Media + Society" ], "content_html": "

Abstract

This article introduces the analytical approach of practice mapping, using vector embeddings of network actions and interactions to map commonalities and disjunctures in the practices of social media users, as a framework for methodological advancement beyond the limitations of conventional network analysis and visualization. In particular, the methodological framework we outline here has the potential to incorporate multiple distinct modes of interaction into a single practice map; can be further enriched with account-level attributes such as information gleaned from textual analysis, profile information, available demographic details, and other features; and can be applied even to a cross-platform analysis of communicative patterns and practices. The article presents practice mapping as an analytical framework and outlines its key methodological considerations. Given its prominence in past social media research, we draw on examples and data from the platform formerly known as Twitter to enable experienced scholars to translate their approaches to a practice mapping paradigm more easily, but point out how data from other platforms may be used in equivalent ways in practice mapping studies. We illustrate the utility of the approach by applying it to a dataset where the application of conventional network analysis and visualization approaches has produced few meaningful insights.

Details

Links

DOI

", "_academic": { "doi": "10.1177/20563051251331748", "citation_count": 4, "reference_count": 50, "type": "article", "publisher": "SAGE Publications", "volume": "11", "pages": "20563051251331748", "metadata_source": "crossref", "confidence_score": 0.825, "quality_score": 100 } }, { "id": "bibtex:Esau2025-tf", "title": "The quality of connections: Deliberative reciprocity and inclusive listening as antidote to destructive polarization online", "content_text": "Conflict and disagreement are integral to healthy democracies, but the extreme polarization observed on many social media platforms poses a serious risk to the core functions of public communication. This theoretical article draws on the concept of connective democracy, further theorizing it to bridge the gap between empirical online deliberation and polarization research. It introduces and refines the concept of destructive polarization and its symptoms—manifested in user-generated content on s...", "date_published": "2025-04-15T00:00:00Z", "_discovery_date": "2025-04-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/20563051251332421", "external_url": "https://doi.org/10.1177/20563051251332421", "authors": [ { "name": "Katharina Esau" } ], "tags": [ "Article", "Social Media + Society" ], "content_html": "

Abstract

Conflict and disagreement are integral to healthy democracies, but the extreme polarization observed on many social media platforms poses a serious risk to the core functions of public communication. This theoretical article draws on the concept of connective democracy, further theorizing it to bridge the gap between empirical online deliberation and polarization research. It introduces and refines the concept of destructive polarization and its symptoms—manifested in user-generated content on social media platforms—and applies connective democracy theory to examine these symptoms’ underlying causes. The framework shifts from the dominant focus on the quality of individual communication acts to a focus on the quality of connections, particularly within dyadic communication. Through this relational perspective, the article explores how reciprocity and listening can serve as remedies to destructive polarization, fostering high-quality connections between citizens online. Reciprocity and listening are discussed as communicative mechanisms that should be nurtured as part of depolarization strategies. Finally, the article offers insights into what platform providers and community managers can learn from this theoretical exercise to promote democratic discourse online.

Details

Links

DOI

", "_academic": { "doi": "10.1177/20563051251332421", "citation_count": 1, "reference_count": 88, "type": "article", "publisher": "SAGE Publications", "volume": "11", "pages": "20563051251332421", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Bastos2025-ya", "title": "So long twitter, and thanks for all the tweets", "content_text": "This chapter reviews the historical contribution of Twitter before it was rebranded as X in July 2023. Twitter was an open platform for social sciences research, particularly political communication, a source of social data so prevalent in the early 21st century that researchers referred to this scholarship as ‘Twitter studies.’ We revisit the many Application Programming Interfaces that Twitter offered to developers and researchers, including the REST, Search, Streaming, Academic, and Complianc...", "date_published": "2025-04-15T00:00:00Z", "_discovery_date": "2025-04-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.2139/ssrn.5206365", "external_url": "https://doi.org/10.2139/ssrn.5206365", "authors": [ { "name": "Marco T. Bastos" } ], "tags": [ "SSRN Electronic Journal", "Article" ], "content_html": "

Abstract

This chapter reviews the historical contribution of Twitter before it was rebranded as X in July 2023. Twitter was an open platform for social sciences research, particularly political communication, a source of social data so prevalent in the early 21st century that researchers referred to this scholarship as ‘Twitter studies.’ We revisit the many Application Programming Interfaces that Twitter offered to developers and researchers, including the REST, Search, Streaming, Academic, and Compliance APIs in addition to databases of political communication the company curated and shared with the research community before its contentious acquisition by Elon Musk in late 2022. The chapter concludes with an assessment of the research approaches developed for ‘Twitter research’ and the extent to which they are transferable to the ‘post-API era.’

Details

Links

DOI

", "_academic": { "doi": "10.2139/ssrn.5206365", "citation_count": 0, "reference_count": 50, "type": "article", "publisher": "Routledge", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Arora2025-tx", "title": "Multi-Modal Framing Analysis of News", "content_text": "Automated frame analysis of political communication is a popular task in computational social science that is used to study how authors select aspects of a topic to frame its reception. So far, such studies have been narrow, in that they use a fixed set of pre-defined frames and focus only on the text, ignoring the visual contexts in which those texts appear. Especially for framing in the news, this leaves out valuable information about editorial choices, which include not just the written artic...", "date_published": "2025-03-26T00:00:00Z", "_discovery_date": "2025-03-15T00:00:00Z", "url": "http://arxiv.org/abs/2503.20960v3", "external_url": "http://arxiv.org/abs/2503.20960v3", "authors": [ { "name": "Arnav Arora" }, { "name": "Srishti Yadav" }, { "name": "Maria Antoniak" }, { "name": "Serge Belongie" }, { "name": "Isabelle Augenstein" } ], "tags": [ "cs.CL", "FOS: Computer and information sciences", "cs.CY", "Computers and Society (cs.CY)", "Machine Learning (cs.LG)", "Computation and Language (cs.CL)", "arXiv [cs.CL]", "cs.LG", "Misc" ], "content_html": "

Abstract

Automated frame analysis of political communication is a popular task in computational social science that is used to study how authors select aspects of a topic to frame its reception. So far, such studies have been narrow, in that they use a fixed set of pre-defined frames and focus only on the text, ignoring the visual contexts in which those texts appear. Especially for framing in the news, this leaves out valuable information about editorial choices, which include not just the written article but also accompanying photographs. To overcome such limitations, we present a method for conducting multi-modal, multi-label framing analysis at scale using large (vision-) language models. Grounding our work in framing theory, we extract latent meaning embedded in images used to convey a certain point and contrast that to the text by comparing the respective frames used. We also identify highly partisan framing of topics with issue-specific frame analysis found in prior qualitative work. We demonstrate a method for doing scalable integrative framing analysis of both text and image in news, providing a more complete picture for understanding media bias.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "misc", "publisher": "arXiv", "subjects": [ "cs.CL", "cs.CY", "cs.LG" ], "metadata_source": "arxiv", "confidence_score": 0.7272727272727272, "quality_score": 100 } }, { "id": "bibtex:Brown2025-jk", "title": "Evaluating how LLM annotations represent diverse views on contentious topics", "content_text": "Researchers have proposed the use of generative large language models (LLMs) to label data for research and applied settings. This literature emphasizes the improved performance of these models relative to other natural language models, noting that generative LLMs typically outperform other models and even humans across several metrics. Previous literature has examined bias across many applications and contexts, but less work has focused specifically on bias in generative LLMs' responses to subj...", "date_published": "2025-03-29T00:00:00Z", "_discovery_date": "2025-03-15T00:00:00Z", "url": "http://arxiv.org/abs/2503.23243v2", "external_url": "http://arxiv.org/abs/2503.23243v2", "authors": [ { "name": "Megan A. Brown" }, { "name": "Shubham Atreja" }, { "name": "Libby Hemphill" }, { "name": "Patrick Y. Wu" } ], "tags": [ "Article", "cs.CL", "cs.CY", "cs.AI", "arXiv [cs.CL]" ], "content_html": "

Abstract

Researchers have proposed the use of generative large language models (LLMs) to label data for research and applied settings. This literature emphasizes the improved performance of these models relative to other natural language models, noting that generative LLMs typically outperform other models and even humans across several metrics. Previous literature has examined bias across many applications and contexts, but less work has focused specifically on bias in generative LLMs' responses to subjective annotation tasks. This bias could result in labels applied by LLMs that disproportionately align with majority groups over a more diverse set of viewpoints. In this paper, we evaluate how LLMs represent diverse viewpoints on these contentious tasks. Across four annotation tasks on four datasets, we show that LLMs do not show systematic substantial disagreement with annotators on the basis of demographics. Rather, we find that multiple LLMs tend to be biased in the same directions on the same demographic categories within the same datasets. Moreover, the disagreement between human annotators on the labeling task -- a measure of item difficulty -- is far more predictive of LLM agreement with human annotators. We conclude with a discussion of the implications for researchers and practitioners using LLMs for automated data annotation tasks. Specifically, we emphasize that fairness evaluations must be contextual, model choice alone will not solve potential issues of bias, and item difficulty must be integrated into bias assessments.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CL", "cs.AI", "cs.CY" ], "metadata_source": "arxiv", "confidence_score": 0.725, "quality_score": 100 } }, { "id": "bibtex:Bennett2025-xs", "title": "Platforms, politics, and the crisis of democracy: Connective action and the rise of illiberalism", "content_text": "Democratic backsliding, the slow erosion of institutions, processes, and norms, has become more pronounced in many nations. Most scholars point to the role of parties, leaders, and institutional changes, along with the pursuit of voters through what Daniel Ziblatt has characterized as alliances with more extremist party surrogate organizations. Although insightful, the institutionalist literature offers little reflection about the growing role of social technologies in organizing and mobilizing ...", "date_published": "2025-03-15T00:00:00Z", "_discovery_date": "2025-03-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1017/s1537592724002123", "external_url": "https://doi.org/10.1017/s1537592724002123", "authors": [ { "name": "W. Lance Bennett" }, { "name": "Steven Livingston" } ], "tags": [ "Perspectives on Politics", "Article" ], "content_html": "

Abstract

Democratic backsliding, the slow erosion of institutions, processes, and norms, has become more pronounced in many nations. Most scholars point to the role of parties, leaders, and institutional changes, along with the pursuit of voters through what Daniel Ziblatt has characterized as alliances with more extremist party surrogate organizations. Although insightful, the institutionalist literature offers little reflection about the growing role of social technologies in organizing and mobilizing extremist networks in ways that present many challenges to traditional party gatekeeping, institutional integrity, and other democratic principles. We present a more integrated framework that explains how digitally networked publics interact with more traditional party surrogates and electoral processes to bring once-scattered extremist factions into conservative parties. When increasingly reactionary parties gain power, they may push both institutions and communication processes in illiberal directions. We develop a model of communication as networked organization to explain how Donald Trump and the Make America Great Again (MAGA) movement rapidly transformed the Republican Party in the United States, and we point to parallel developments in other nations.

Details

Links

DOI

", "_academic": { "doi": "10.1017/s1537592724002123", "citation_count": 9, "reference_count": 164, "type": "article", "publisher": "Cambridge University Press (CUP)", "pages": "1--20", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 100 } }, { "id": "bibtex:Humprecht2025-ml", "title": "Advancing the study of political misinformation across countries and platforms—introduction to the special issue", "content_text": "The global spread of political misinformation poses serious challenges to democracies, eroding trust and distorting public discourse. However, research has largely focused on WEIRD countries—Western, Educated, Industrialized, Rich, and Democratic—limiting our understanding of how misinformation operates across diverse political, cultural, and technological contexts. This special issue addresses these gaps through comparative, cross-platform, and interdisciplinary perspectives. The articles explo...", "date_published": "2025-03-22T00:00:00Z", "_discovery_date": "2025-03-15T00:00:00Z", "url": "https://doi.org/10.1177/19401612251327530", "external_url": "https://doi.org/10.1177/19401612251327530", "authors": [ { "name": "Edda Humprecht" }, { "name": "Sebastián Valenzuela" }, { "name": "Frank Esser" }, { "name": "Edson Tandoc" } ], "tags": [ "Article", "The International Journal of Press/Politics" ], "content_html": "

Abstract

The global spread of political misinformation poses serious challenges to democracies, eroding trust and distorting public discourse. However, research has largely focused on WEIRD countries—Western, Educated, Industrialized, Rich, and Democratic—limiting our understanding of how misinformation operates across diverse political, cultural, and technological contexts. This special issue addresses these gaps through comparative, cross-platform, and interdisciplinary perspectives. The articles explore how political and media systems shape misinformation, the role of individual resilience, and how platform-specific features—across social media, messaging apps, and traditional media—affect the spread of false information. Studies from non-WEIRD regions offer insights into distinct vulnerabilities, emphasizing the need for context-sensitive approaches. Together, these contributions advance our understanding of misinformation as a global challenge and offer guidance for strengthening democratic resilience in varied information environments.

Details

Links

DOI

", "_academic": { "doi": "10.1177/19401612251327530", "citation_count": 3, "reference_count": 24, "type": "article", "publisher": "SAGE Publications", "pages": "19401612251327530", "metadata_source": "crossref", "confidence_score": 0.8333333333333333, "quality_score": 100 } }, { "id": "bibtex:Gaw2025-ru", "title": "Influence operations as brokerage: Political-economic infrastructures of manipulation in the 2022 Philippine elections", "content_text": "This study conceptualizes influence operations (IOs), an enterprise that orchestrates manipulative and inauthentic activities to achieve political advantage, as a contemporary form of brokerage during elections. It investigates the empirical case of IOs engaged in covert political campaigning in the 2022 Philippine General Elections through qualitative field research. Drawing from 22 in-depth interviews with IO leads and staff, we define IOs’ broker attributes, their brokerage processes, and the...", "date_published": "2025-03-15T00:00:00Z", "_discovery_date": "2025-03-15T00:00:00Z", "_date_estimated": true, "url": "https://scholar.google.com/scholar?q=Influence%20operations%20as%20brokerage%3A%20Political-economic%20infrastructures%20of%20manipulation%20in%20the%202022%20Philippine%20elections", "external_url": "https://scholar.google.com/scholar?q=Influence%20operations%20as%20brokerage%3A%20Political-economic%20infrastructures%20of%20manipulation%20in%20the%202022%20Philippine%20elections", "authors": [ { "name": "Gaw, Fatima" }, { "name": "Agonos, Mariam Jayne" }, { "name": "Ruijgrok, Kris" }, { "name": "Suarez, Gerard Martin" } ], "tags": [ "disinformation", "Philippines", "Article", "trolls", "brokerage", "computational propaganda", "Int. J. Commun.", "influence operations", "elections" ], "content_html": "

Abstract

This study conceptualizes influence operations (IOs), an enterprise that orchestrates manipulative and inauthentic activities to achieve political advantage, as a contemporary form of brokerage during elections. It investigates the empirical case of IOs engaged in covert political campaigning in the 2022 Philippine General Elections through qualitative field research. Drawing from 22 in-depth interviews with IO leads and staff, we define IOs’ broker attributes, their brokerage processes, and the capital and value they generate through brokerage. We identify four mechanisms of brokerage by IOs: infrastructural capacity, reputation manipulation, relationship building at scale, and obscured accountability. These mechanisms complement the brokerage work by aboveboard campaigns and other brokers by compensating for their limitations and innovating campaign strategies. We argue that IOs are not extraneous deviations but are logical extensions of existing political infrastructures and should be understood as operating with other normative forms of political campaigning.

Details

", "_academic": { "type": "article", "volume": "19", "pages": "21", "quality_score": 80, "quality_issues": [ "missing_link", "not_enriched" ] } }, { "id": "bibtex:Pante2025-pq", "title": "Beyond interaction patterns: Assessing claims of coordinated inter-state information operations on twitter/X", "content_text": "Social media platforms have become key tools for coordinated influence operations, enabling state actors to manipulate public opinion through strategic, collective actions. While previous research has suggested collaboration between states, such research failed to leverage state-of-the-art coordination indicators or control datasets. In this study, we investigate inter-state coordination by analyzing multiple online behavioral traces and using sophisticated coordination detection models. By inco...", "date_published": "2025-02-24T00:00:00Z", "_discovery_date": "2025-02-15T00:00:00Z", "url": "http://arxiv.org/abs/2502.17344v1", "external_url": "http://arxiv.org/abs/2502.17344v1", "authors": [ { "name": "Valeria Pantè" }, { "name": "David Axelrod" }, { "name": "Alessandro Flammini" }, { "name": "Filippo Menczer" }, { "name": "Emilio Ferrara" }, { "name": "Luca Luceri" } ], "tags": [ "cs.SI", "Article", "arXiv [cs.SI]" ], "content_html": "

Abstract

Social media platforms have become key tools for coordinated influence operations, enabling state actors to manipulate public opinion through strategic, collective actions. While previous research has suggested collaboration between states, such research failed to leverage state-of-the-art coordination indicators or control datasets. In this study, we investigate inter-state coordination by analyzing multiple online behavioral traces and using sophisticated coordination detection models. By incorporating a control dataset to differentiate organic user activity from coordinated efforts, our findings reveal no evidence of inter-state coordination. These results challenge earlier claims and underscore the importance of robust methodologies and control datasets in accurately detecting online coordination.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI" ], "metadata_source": "arxiv", "confidence_score": 0.7230769230769231, "quality_score": 100 } }, { "id": "bibtex:Hurcombe2025-cs", "title": "The discursive function of Meta’s Newsroom: How Meta frames the problem of problematic online content", "content_text": "This article examines the social technology company Meta’s public communication on problematic content, via their official ‘Meta Newsroom’, within the context of growing regulatory scrutiny. For nearly a decade, the Meta Newsroom has been a major outlet for Meta company announcements, and since 2016, the Newsroom has increasingly become a key source for company responses to concerns regarding mis/disinformation and other kinds of problematic content on Meta’s platforms. Using a mixed-methods app...", "date_published": "2025-02-15T00:00:00Z", "_discovery_date": "2025-02-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/13548565251315521", "external_url": "https://doi.org/10.1177/13548565251315521", "authors": [ { "name": "Edward Hurcombe" }, { "name": "Ehsan Dehghan" }, { "name": "Laura Vodden" }, { "name": "Daniel Angus" } ], "tags": [ "Article", "Convergence: The International Journal of Research into New Media Technologies" ], "content_html": "

Abstract

This article examines the social technology company Meta’s public communication on problematic content, via their official ‘Meta Newsroom’, within the context of growing regulatory scrutiny. For nearly a decade, the Meta Newsroom has been a major outlet for Meta company announcements, and since 2016, the Newsroom has increasingly become a key source for company responses to concerns regarding mis/disinformation and other kinds of problematic content on Meta’s platforms. Using a mixed-methods approach informed by discourse analysis, this article critically examines Newsroom posts from 2016 to early 2021. It asks: how is Meta framing ‘problems’ on its platforms? How is Meta identifying ‘solutions’ to those problems? And is Meta ‘nudging’ policymakers in specific conceptual directions? Overall, we find that Meta is framing content moderation issues through four key frames – ‘authenticity’, ‘political advertising’, ‘technological solutions’, and ‘enforcement’ – that benefit Meta, as they shift responsibility while also demonstrating that Meta is an active and capable problem-solver.

Details

Links

DOI

", "_academic": { "doi": "10.1177/13548565251315521", "citation_count": 1, "reference_count": 81, "type": "article", "publisher": "SAGE Publications", "pages": "13548565251315521", "metadata_source": "crossref", "confidence_score": 0.8333333333333333, "quality_score": 100 } }, { "id": "bibtex:Le-Mens2025-qz", "title": "Positioning political texts with large language models by asking and averaging", "content_text": "Abstract We use instruction-tuned large language models (LLMs) like GPT-4, Llama 3, MiXtral, or Aya to position political texts within policy and ideological spaces. We ask an LLM where a tweet or a sentence of a political text stands on the focal dimension and take the average of the LLM responses to position political actors such as US Senators, or longer texts such as UK party manifestos or EU policy speeches given in 10 different languages. The correlations between the position estimates obt...", "date_published": "2025-01-15T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1017/pan.2024.29", "external_url": "https://doi.org/10.1017/pan.2024.29", "authors": [ { "name": "Gaël Le Mens" }, { "name": "Aina Gallego" } ], "tags": [ "Political Analysis", "Article" ], "content_html": "

Abstract

We use instruction-tuned large language models (LLMs) like GPT-4, Llama 3, MiXtral, or Aya to position political texts within policy and ideological spaces. We ask an LLM where a tweet or a sentence of a political text stands on the focal dimension and take the average of the LLM responses to position political actors such as US Senators, or longer texts such as UK party manifestos or EU policy speeches given in 10 different languages. The correlations between the position estimates obtained with the best LLMs and benchmarks based on text coding by experts, crowdworkers, or roll call votes exceed .90. This approach is generally more accurate than the positions obtained with supervised classifiers trained on large amounts of research data. Using instruction-tuned LLMs to position texts in policy and ideological spaces is fast, cost-efficient, reliable, and reproducible (in the case of open LLMs) even if the texts are short and written in different languages. We conclude with cautionary notes about the need for empirical validation.

Details

Links

DOI

", "_academic": { "doi": "10.1017/pan.2024.29", "citation_count": 12, "reference_count": 19, "type": "article", "publisher": "Cambridge University Press (CUP)", "pages": "1--9", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 100 } }, { "id": "bibtex:Green2025-ap", "title": "Curation bubbles", "content_text": "Information on social media is characterized by networked curation processes in which users select other users from whom to receive information, and those users in turn share information that promotes their identities and interests. We argue this allows for partisan “curation bubbles” of users who share and consume content with consistent appeal drawn from a variety of sources. Yet, research concerning the extent of filter bubbles, echo chambers, or other forms of politically segregated informat...", "date_published": "2025-01-15T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1017/s0003055424000984", "external_url": "https://doi.org/10.1017/s0003055424000984", "authors": [ { "name": "JON GREEN" }, { "name": "STEFAN MCCABE" }, { "name": "SARAH SHUGARS" }, { "name": "HANYU CHWE" }, { "name": "LUKE HORGAN" }, { "name": "SHUYANG CAO" }, { "name": "DAVID LAZER" } ], "tags": [ "Article", "American Political Science Review" ], "content_html": "

Abstract

Information on social media is characterized by networked curation processes in which users select other users from whom to receive information, and those users in turn share information that promotes their identities and interests. We argue this allows for partisan “curation bubbles” of users who share and consume content with consistent appeal drawn from a variety of sources. Yet, research concerning the extent of filter bubbles, echo chambers, or other forms of politically segregated information consumption typically conceptualizes information’s partisan valence at the source level as opposed to the story level. This can lead domain-level measures of audience partisanship to mischaracterize the partisan appeal of sources’ constituent stories—especially for sources estimated to be more moderate. Accounting for networked curation aligns theory and measurement of political information consumption on social media.

Details

Links

DOI

", "_academic": { "doi": "10.1017/s0003055424000984", "citation_count": 5, "reference_count": 79, "type": "article", "publisher": "Cambridge University Press (CUP)", "pages": "1--19", "metadata_source": "crossref", "confidence_score": 0.82, "quality_score": 100 } }, { "id": "bibtex:Munger2025-cz", "title": "What did we learn about political communication from the Meta2020 partnership?", "content_text": "Published in Polit. Commun. | Year: 2025 | Authors: Munger, Kevin", "date_published": "2025-01-02T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "url": "https://doi.org/10.1080/10584609.2024.2446351", "external_url": "https://doi.org/10.1080/10584609.2024.2446351", "authors": [ { "name": "Kevin Munger" } ], "tags": [ "Article", "Political Communication" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/10584609.2024.2446351", "citation_count": 1, "reference_count": 10, "type": "article", "publisher": "Informa UK Limited", "volume": "42", "pages": "201--207", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:Graham2025-gp", "title": "How propaganda exploits the infrastructure of truth: A case study of \\#IStandWithPutin", "content_text": "Published in Crit. Stud. Media Commun. | Year: 2025 | Authors: Graham, Timothy", "date_published": "2025-01-15T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1080/15295036.2025.2473002", "external_url": "https://doi.org/10.1080/15295036.2025.2473002", "authors": [ { "name": "Timothy Graham" } ], "tags": [ "Article", "Critical Studies in Media Communication" ], "content_html": "

Details

Links

DOI

", "_academic": { "doi": "10.1080/15295036.2025.2473002", "citation_count": 1, "reference_count": 19, "type": "article", "publisher": "Informa UK Limited", "volume": "42", "pages": "75--82", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 80, "quality_issues": [ "missing_abstract" ] } }, { "id": "bibtex:McNally2025-dn", "title": "The news feed is not a black box: A longitudinal study of Facebook’s algorithmic treatment of news", "content_text": "This study examines the effects of a series of significant algorithm changes within Facebook’s News Feed on user engagement with news content on the platform between 2011-2020. By tracking public announcements, industry research, and leaks to the press, we constructed a timeline of algorithm changes and collected data on 1 million news articles from The Guardian over the 10-year period, alongside their associated Facebook engagement metrics (likes, comments, shares, etc.) using the CrowdTangle A...", "date_published": "2025-08-09T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "url": "https://doi.org/10.1080/21670811.2025.2450623", "external_url": "https://doi.org/10.1080/21670811.2025.2450623", "authors": [ { "name": "Naoise McNally" }, { "name": "Marco Bastos" } ], "tags": [ "Article", "Digital Journalism" ], "content_html": "

Abstract

This study examines the effects of a series of significant algorithm changes within Facebook’s News Feed on user engagement with news content on the platform between 2011-2020. By tracking public announcements, industry research, and leaks to the press, we constructed a timeline of algorithm changes and collected data on 1 million news articles from The Guardian over the 10-year period, alongside their associated Facebook engagement metrics (likes, comments, shares, etc.) using the CrowdTangle API. Using time series analysis techniques including cross-correlation, Granger causality, and anomaly detection, we modeled this data to test for the relationship between significant algorithmic ranking updates to Facebook’s News Feed algorithms and user engagement with Guardian articles on the platform. Our results show that strategic interventions to the News Feed algorithm significantly impacted engagement with hard news items, whereas opinion, lifestyle, sports, and arts content were less affected. This study challenges the notion of algorithms as ‘black boxes’ by demonstrating how Facebook’s deliberate adjustments influence user engagement with news content. We conclude by outlining the limitations and challenges for systemic auditing of social media algorithms, advocating for greater data access, and discussing the opportunities afforded by the EU’s Digital Services Act to advance this research agenda.

Details

Links

DOI

", "_academic": { "doi": "10.1080/21670811.2025.2450623", "citation_count": 4, "reference_count": 81, "type": "article", "publisher": "Informa UK Limited", "pages": "1--20", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Sarmiento2025-as", "title": "Unsupervised framing analysis for social media discourse in polarizing events", "content_text": "This study investigates the concept of frames in the realm of online polarization, with a focus on social media platforms. The research extends the understanding of how frames–emerging, complex, and often subtle concepts–become prominent in online conversations that are polarized. The study proposes a comprehensive methodology for identifying and characterizing these frames, integrating machine learning techniques, network analysis algorithms, and natural language processing tools. This method a...", "date_published": "2025-11-30T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "url": "https://doi.org/10.1145/3711912", "external_url": "https://doi.org/10.1145/3711912", "authors": [ { "name": "Hernan Sarmiento" }, { "name": "Ricardo Córdova" }, { "name": "Jorge Ortiz" }, { "name": "Felipe Bravo-Marquez" }, { "name": "Marcelo Santos" }, { "name": "Sebastián Valenzuela" } ], "tags": [ "Article", "ACM Transactions on the Web" ], "content_html": "

Abstract

This study investigates the concept of frames in the realm of online polarization, with a focus on social media platforms. The research extends the understanding of how frames (emerging, complex, and often subtle concepts) become prominent in online conversations that are polarized. The study proposes a comprehensive methodology for identifying and characterizing these frames, integrating machine learning techniques, network analysis algorithms, and natural language processing tools. This method aims for generalizability across multiple platforms and types of user engagement. Two novel metrics, homogeneity and relevancy, are introduced for the rigorous evaluation of identified frame candidates. Grounded in several foundational presumptions, including the role of topics and multi-word expressions in framing, the study sheds light on how frames emerge and gain significance within digital communities. The research questions explored include the methods for identifying frames, the variability and significance of these frames, and the effectiveness of different computational techniques in this context. To validate the approach, we present a case study of the 2021 Chilean presidential election, using data from both X (formerly known as Twitter) and WhatsApp. This real-world application allows for the examination of how frames fluctuate in response to events and the specific mechanisms of platforms. Overall, the study makes several key contributions to the field, offering new insights and methodologies for analyzing the complexities of online polarization. It serves as groundwork for future research on the dynamics of online communities, especially those associated with distinctly polarized events.

Details

Links

DOI

", "_academic": { "doi": "10.1145/3711912", "citation_count": 0, "reference_count": 104, "type": "article", "publisher": "Association for Computing Machinery (ACM)", "pages": "3711912", "metadata_source": "crossref", "confidence_score": 0.823076923076923, "quality_score": 100 } }, { "id": "bibtex:Tornberg2025-ir", "title": "When do parties lie? Misinformation and radical-right populism across 26 countries", "content_text": "The spread of misinformation has emerged as a global concern. Academic attention has recently shifted to emphasize the role of political elites as drivers of misinformation. Yet, little is known of the relationship between party politics and the spread of misinformation—in part due to a dearth of cross-national empirical data needed for comparative study. This article examines which parties are more likely to spread misinformation, by drawing on a comprehensive database of 32M tweets from parlia...", "date_published": "2025-01-13T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "url": "https://doi.org/10.1177/19401612241311886", "external_url": "https://doi.org/10.1177/19401612241311886", "authors": [ { "name": "Petter Törnberg" }, { "name": "Juliana Chueri" } ], "tags": [ "Article", "The International Journal of Press/Politics" ], "content_html": "

Abstract

The spread of misinformation has emerged as a global concern. Academic attention has recently shifted to emphasize the role of political elites as drivers of misinformation. Yet, little is known of the relationship between party politics and the spread of misinformation—in part due to a dearth of cross-national empirical data needed for comparative study. This article examines which parties are more likely to spread misinformation, by drawing on a comprehensive database of 32M tweets from parliamentarians in 26 countries, spanning 6 years and several election periods. The dataset is combined with external databases such as Parlgov and V-Dem, linking the spread of misinformation to detailed information about political parties and cabinets, thus enabling a comparative politics approach to misinformation. Using multilevel analysis with random country intercepts, we find that radical-right populism is the strongest determinant for the propensity to spread misinformation. Populism, left-wing populism, and right-wing politics are not linked to the spread of misinformation. These results suggest that political misinformation should be understood as part and parcel of the current wave of radical-right populism, and its opposition to liberal democratic institutions.

Details

Links

DOI

", "_academic": { "doi": "10.1177/19401612241311886", "citation_count": 17, "reference_count": 69, "type": "article", "publisher": "SAGE Publications", "pages": "19401612241311886", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Nenno2025-xa", "title": "All the (fake) news that’s fit to share? News values in perceived misinformation across twenty-four countries", "content_text": "While there is a strong scholarly interest surrounding the content of political misinformation online, much of this research concerns misinformation in Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries. Although such research has investigated the topical and stylistic characteristics of misinformation, its findings are frequently not interpreted systematically in relation to properties that journalists rely on to capture the attention of audiences, that is, in relation to ...", "date_published": "2025-01-23T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "url": "https://doi.org/10.1177/19401612241311893", "external_url": "https://doi.org/10.1177/19401612241311893", "authors": [ { "name": "Sami Nenno" }, { "name": "Cornelius Puschmann" } ], "tags": [ "Article", "The International Journal of Press/Politics" ], "content_html": "

Abstract

While there is a strong scholarly interest surrounding the content of political misinformation online, much of this research concerns misinformation in Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries. Although such research has investigated the topical and stylistic characteristics of misinformation, its findings are frequently not interpreted systematically in relation to properties that journalists rely on to capture the attention of audiences, that is, in relation to news values. We address this gap in comparative studies of news values in misinformation with a perspective that emphasizes non-WEIRD countries. Relying on a dataset of URLs that were shared on Facebook in twenty-four countries and reported by users as containing false news, we compile a large corpus of online news items and use an array of computational tools to analyze its content with respect to a set of five news values (conflict, negativity, proximity, individualization, and informativeness). We find salient differences for almost all news values and regarding the WEIRD/non-WEIRD and flagged/unflagged distinctions. Moreover, the prevalence of individual news values differs strongly for individual countries. However, while almost all differences are significant, the effects we encounter are mostly small.

Details

Links

DOI

", "_academic": { "doi": "10.1177/19401612241311893", "citation_count": 2, "reference_count": 65, "type": "article", "publisher": "SAGE Publications", "pages": "19401612241311893", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Xue2025-bp", "title": "Facts or feelings? Leveraging emotionality as a fact-checking strategy on social media in the United States", "content_text": "Emotionality is a well-established strategy for boosting audience engagement on social media. While fact-checking is positioned to provide objective information, fact-checking posts on social media often involve heightened emotionality. How much emotionality is present and how emotionality influences audience engagement and public sentiment toward fact-checked targets remain largely understudied. Informed by social psychological frameworks explicating message-level factors influencing public eng...", "date_published": "2025-01-15T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/20563051251318172", "external_url": "https://doi.org/10.1177/20563051251318172", "authors": [ { "name": "Haoning Xue" }, { "name": "Jingwen Zhang" }, { "name": "Xinzhi Zhang" } ], "tags": [ "Article", "Social Media + Society" ], "content_html": "

Abstract

Emotionality is a well-established strategy for boosting audience engagement on social media. While fact-checking is positioned to provide objective information, fact-checking posts on social media often involve heightened emotionality. How much emotionality is present and how emotionality influences audience engagement and public sentiment toward fact-checked targets remain largely understudied. Informed by social psychological frameworks explicating message-level factors influencing public engagement and sentiment, the present study examines emotionality in 49,270 fact-checking posts created by 10 United States fact-checking organizations on Facebook from 2017 to 2022. Results showed that emotionality in fact-checking posts significantly increased by 13.5% over the years. Editorial fact-checkers (e.g., Washington Post) used higher levels of emotionality than independent fact-checkers (e.g., snopes.com). As predicted, emotionality was positively associated with public engagement. However, in both fact-checked true and false information, emotionality was negatively associated with the public’s sentiment toward fact-checked targets, suggesting a potential spillover effect on stories verified to be true. This study reveals that emotionality in fact-checking posts boosts social media engagement while potentially compromising fact-checking effectiveness.

Details

Links

DOI

", "_academic": { "doi": "10.1177/20563051251318172", "citation_count": 0, "reference_count": 75, "type": "article", "publisher": "SAGE Publications", "volume": "11", "pages": "20563051251318172", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 100 } }, { "id": "bibtex:Bastos2025-ol", "title": "Visual identities in troll farms: The Twitter Moderation Research Consortium", "content_text": "The Twitter Moderation Research Consortium is a database of network propaganda and influence operations that includes 115,474 unique Twitter accounts, millions of tweets, and over one terabyte of media removed from the platform between 2017 and 2022. We probe this database using Google’s Vision API and Keras with TensorFlow to test whether foreign influence operations can be identified based on the visual presentation of fake user profiles emphasizing gender, race, camera angle, sensuality, and ...", "date_published": "2025-01-15T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/20563051251323652", "external_url": "https://doi.org/10.1177/20563051251323652", "authors": [ { "name": "Marco Bastos" } ], "tags": [ "Article", "Social Media + Society" ], "content_html": "

Abstract

The Twitter Moderation Research Consortium is a database of network propaganda and influence operations that includes 115,474 unique Twitter accounts, millions of tweets, and over one terabyte of media removed from the platform between 2017 and 2022. We probe this database using Google’s Vision API and Keras with TensorFlow to test whether foreign influence operations can be identified based on the visual presentation of fake user profiles emphasizing gender, race, camera angle, sensuality, and emotion. Our results show that sensuality is a variable associated with operations that replicate the Kremlin-linked Internet Research Agency campaign, being particularly prevalent in influence operations that targeted communities in North and South America, but also in Indonesia, Turkey, and Pakistan. Our results also show that the visual identities of fake social media profiles are predictive of influence operations given their reliance on selfies, sensual young women, K-pop aesthetics, or alternatively nationalistic iconography overlaid with text to convey ideological positioning.

Details

Links

DOI

", "_academic": { "doi": "10.1177/20563051251323652", "citation_count": 0, "reference_count": 58, "type": "article", "publisher": "SAGE Publications", "volume": "11", "pages": "20563051251323652", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Simeone2025-vo", "title": "Network ripple effects: How Twitter deplatforming flipped authority structure and discourse of the Arizona Election Review community", "content_text": "Content moderation decisions can have variable impacts on the events and discourses they aim to regulate. This study analyzes Twitter data from before and after the removal of key Arizona Election Audit Twitter accounts in March of 2021. After collecting tweets that refer to the election audit in Arizona in this designated timeframe, a before/after comparison examines the structure of the networks, the volume of the participating population, and the themes of their discourse. Several significant...", "date_published": "2025-01-15T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/21582440251314538", "external_url": "https://doi.org/10.1177/21582440251314538", "authors": [ { "name": "Michael Simeone" }, { "name": "Steven R. Corman" } ], "tags": [ "Article", "Sage Open" ], "content_html": "

Abstract

Content moderation decisions can have variable impacts on the events and discourses they aim to regulate. This study analyzes Twitter data from before and after the removal of key Arizona Election Audit Twitter accounts in March of 2021. After collecting tweets that refer to the election audit in Arizona in this designated timeframe, a before/after comparison examines the structure of the networks, the volume of the participating population, and the themes of their discourse. Several significant changes are observed, including a drop in participation from accounts that were not deplatformed and a de-centralization of the Twitter network. Conspiracy theories remain in the discourse, but their themes become more diffuse, and their calls to action more abstract. Recruiting calls to join in on promoting and publicizing the audit mostly come to an end. The decision by Twitter to deplatform key election audit accounts appears to have greatly disrupted the hub structure at the center of the emergent network that formed as a response to the election audit. By intervening in the network, moderators successfully defused much of the Twitter-based participation in the Arizona Election Review of 2021. This instance demonstrates the efficacy of network-driven interventions in platform moderation, specifically for events or accounts that use social media to organize or encourage bad-faith attacks on civic institutions.

Details

Links

DOI

", "_academic": { "doi": "10.1177/21582440251314538", "citation_count": 0, "reference_count": 28, "type": "article", "publisher": "SAGE Publications", "volume": "15", "pages": "21582440251314538", "metadata_source": "crossref", "confidence_score": 0.8428571428571427, "quality_score": 100 } }, { "id": "bibtex:DiGiuseppe2025-es", "title": "Scaling open-ended survey responses using LLM-paired comparisons", "content_text": "Survey researchers rely heavily on closed-ended questions to measure latent respondent characteristics like knowledge, policy positions, emotions, ideology, and various other traits. While closed-ended questions ease analysis and data collection, they necessarily limit the depth and variability of responses. Open-ended responses allow for greater depth and variability in responses but are labor-intensive to code. Large Language Models (LLMs) can solve some of these problems, but existing approac...", "date_published": "2025-01-15T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.31235/osf.io/39ajg_v2", "external_url": "https://doi.org/10.31235/osf.io/39ajg_v2", "authors": [ { "name": "Matthew DiGiuseppe" }, { "name": "Michael E Flynn" } ], "tags": [ "Article", "SocArXiv" ], "content_html": "

Abstract

Survey researchers rely heavily on closed-ended questions to measure latent respondent characteristics like knowledge, policy positions, emotions, ideology, and various other traits. While closed-ended questions ease analysis and data collection, they necessarily limit the depth and variability of responses. Open-ended responses allow for greater depth and variability in responses but are labor-intensive to code. Large Language Models (LLMs) can solve some of these problems, but existing approaches to using LLMs have a number of limitations. In this paper, we propose and test a pairwise comparison method to scale open-ended survey responses on a continuous scale. The approach relies on LLMs to make pairwise comparisons of statements that identify which statement ‘wins’ and ‘loses’. With this information, we employ a Bayesian Bradley-Terry model to recover a ‘score’ on the relevant latent dimension for each statement. This approach allows for finer discrimination between items, better measures of uncertainty, reduces anchoring bias, and is more flexible than methods relying on Maximum Likelihood Estimation techniques. We demonstrate the utility of this approach on an open-ended question probing knowledge of interest rates in the US economy. A comparison of 6 LLMs of various sizes reveals that pairwise comparisons show greater consistency than zero-shot 0-10 ratings with larger models (> 9-billion parameters). Further, comparisons of pairwise decisions are consistent with high-knowledge crowdsource workers.

Details

Links

DOI

", "_academic": { "doi": "10.31235/osf.io/39ajg_v2", "citation_count": 0, "reference_count": 0, "type": "article", "metadata_source": "crossref", "confidence_score": 0.8999999999999999, "quality_score": 100 } }, { "id": "bibtex:Luhring2025-od", "title": "Best practices for source-based research on misinformation and news trustworthiness using NewsGuard", "content_text": "Researchers need reliable and valid tools to identify cases of untrustworthy information when studying the spread of misinformation on digital platforms. A common approach is to assess the trustworthiness of sources rather than individual pieces of content. One of the most widely used and comprehensive databases for source trustworthiness ratings is provided by NewsGuard. Since creating the database in 2019, NewsGuard has continually added new sources and reassessed existing ones. While NewsGuar...", "date_published": "2025-01-14T00:00:00Z", "_discovery_date": "2025-01-15T00:00:00Z", "url": "https://doi.org/10.51685/jqd.2025.003", "external_url": "https://doi.org/10.51685/jqd.2025.003", "authors": [ { "name": "Jula Lühring" }, { "name": "Hannah Metzler" }, { "name": "Ruggero Lazzaroni" }, { "name": "Apeksha Shetty" }, { "name": "Jana Lasser" } ], "tags": [ "Article", "Journal of Quantitative Description: Digital Media" ], "content_html": "

Abstract

Researchers need reliable and valid tools to identify cases of untrustworthy information when studying the spread of misinformation on digital platforms. A common approach is to assess the trustworthiness of sources rather than individual pieces of content. One of the most widely used and comprehensive databases for source trustworthiness ratings is provided by NewsGuard. Since creating the database in 2019, NewsGuard has continually added new sources and reassessed existing ones. While NewsGuard initially focused only on the US, the database has expanded to include sources from other countries. In addition to trustworthiness ratings, the NewsGuard database contains various contextual assessments of the sources, which are less often used in contemporary research on misinformation. In this work, we provide an analysis of the content of the NewsGuard database, focusing on the temporal stability and completeness of its ratings across countries, as well as the usefulness of information on political orientation and topics for misinformation studies. We find that trustworthiness ratings and source coverage have remained relatively stable since 2022, particularly for the US, France, Italy, Germany, and Canada, with US-based sources consistently scoring lower than those from other countries. Additional information on the political orientation and topics covered by sources is comprehensive and provides valuable assets for characterizing sources beyond trustworthiness. By evaluating the database over time and across countries, we identify potential pitfalls that compromise the validity of using NewsGuard as a tool for quantifying untrustworthy information, particularly if dichotomous "trustworthy"/"untrustworthy" labels are used. Lastly, we provide recommendations for digital media research on how to avoid these pitfalls and discuss appropriate use cases for the NewsGuard database and source-level approaches in general.

Details

Links

DOI

", "_academic": { "doi": "10.51685/jqd.2025.003", "citation_count": 4, "reference_count": 0, "type": "article", "publisher": "Journal of Quantitative Description: Digital Media", "volume": "5", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Unknown2025-qj", "title": "Red-Teaming in the Public Interest", "content_text": "This report offers a vision for red-teaming in the public interest: a process that goes beyond system-centric testing of already built systems to consider the full range of ways the public can be involved in evaluating genAI harms.", "date_published": "2025-01-01T00:00:00Z", "_discovery_date": "2025-01-01T00:00:00Z", "_date_estimated": true, "url": "https://datasociety.net/library/red-teaming-in-the-public-interest/", "external_url": "https://datasociety.net/library/red-teaming-in-the-public-interest/", "authors": [ { "name": "Ranjit Singh" }, { "name": "Borhane Blili-Hamelin" }, { "name": "Carol Anderson" }, { "name": "Emnet Tafesse" }, { "name": "Briana Vecchione" }, { "name": "Beth Duckles" }, { "name": "Jacob Metcalf" } ], "tags": [ "Techreport", "Data & Society" ], "content_html": "

Abstract

This report offers a vision for red-teaming in the public interest: a process that goes beyond system-centric testing of already built systems to consider the full range of ways the public can be involved in evaluating genAI harms.

Details

Links

PDF

", "_academic": { "open_access": true, "type": "techreport", "metadata_source": "datasociety", "quality_score": 70, "quality_issues": [ "missing_authors", "missing_link" ] } }, { "id": "bibtex:Kristensen2025-ni", "title": "Platform polarization. Do alternative platforms drive discursive polarization?", "content_text": "Published in Comun. Politica | Year: 2025 | Authors: Kristensen, Jakob Bæk, Kristensen, Jakob Bæk", "date_published": "2025-01-01T00:00:00Z", "_discovery_date": "2025-01-01T00:00:00Z", "_date_estimated": true, "url": "https://scholar.google.com/scholar?q=Platform%20polarization.%20Do%20alternative%20platforms%20drive%20discursive%20polarization%3F", "external_url": "https://scholar.google.com/scholar?q=Platform%20polarization.%20Do%20alternative%20platforms%20drive%20discursive%20polarization%3F", "authors": [ { "name": "Kristensen, Jakob Bæk" }, { "name": "Kristensen, Jakob Bæk" } ], "tags": [ "Article", "Comun. Politica" ], "content_html": "

Details

", "_academic": { "type": "article", "quality_score": 60, "quality_issues": [ "missing_abstract", "missing_link", "not_enriched" ] } }, { "id": "bibtex:Minici2024-tf", "title": "IOHunter: Graph foundation model to uncover online information operations", "content_text": "Social media platforms have become vital spaces for public discourse, serving as modern agoràs where a wide range of voices influence societal narratives. However, their open nature also makes them vulnerable to exploitation by malicious actors, including state-sponsored entities, who can conduct information operations (IOs) to manipulate public opinion. The spread of misinformation, false news, and misleading claims threatens democratic processes and societal cohesion, making it crucial to deve...", "date_published": "2024-12-19T00:00:00Z", "_discovery_date": "2024-12-15T00:00:00Z", "url": "http://arxiv.org/abs/2412.14663v2", "external_url": "http://arxiv.org/abs/2412.14663v2", "authors": [ { "name": "Marco Minici" }, { "name": "Luca Luceri" }, { "name": "Francesco Fabbri" }, { "name": "Emilio Ferrara" } ], "tags": [ "Article", "arXiv [cs.SI]", "cs.AI", "cs.LG", "cs.SI" ], "content_html": "

Abstract

Social media platforms have become vital spaces for public discourse, serving as modern agoràs where a wide range of voices influence societal narratives. However, their open nature also makes them vulnerable to exploitation by malicious actors, including state-sponsored entities, who can conduct information operations (IOs) to manipulate public opinion. The spread of misinformation, false news, and misleading claims threatens democratic processes and societal cohesion, making it crucial to develop methods for the timely detection of inauthentic activity to protect the integrity of online discourse. In this work, we introduce a methodology designed to identify users orchestrating information operations, a.k.a. IO drivers, across various influence campaigns. Our framework, named IOHunter, leverages the combined strengths of Language Models and Graph Neural Networks to improve generalization in supervised, scarcely-supervised, and cross-IO contexts. Our approach achieves state-of-the-art performance across multiple sets of IOs originating from six countries, significantly surpassing existing approaches. This research marks a step toward developing Graph Foundation Models specifically tailored for the task of IO detection on social media platforms.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.SI", "cs.AI", "cs.LG" ], "metadata_source": "arxiv", "confidence_score": 0.7333333333333333, "quality_score": 100 } }, { "id": "bibtex:Cabbuag2024-me", "title": "TikTok ‘dogshows’ and the amplification of online incivility among Gen Z influencers in the Philippines", "content_text": "Studies on digital platforms and online incivility have established that uses of humour can lean towards cyberbullying and hate speech. Focusing on TikTok's affordances and cultures of online incivility, this paper studies how TikTok influencers and their audiences manoeuvre legal-but-harmful humour. Specifically, we study how online incivility has become an accepted and negotiated practice in the Filipino context through the phenomenon of ‘dogshows’, where users throw jabs at individuals using ...", "date_published": "2024-12-15T00:00:00Z", "_discovery_date": "2024-12-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/13678779241302826", "external_url": "https://doi.org/10.1177/13678779241302826", "authors": [ { "name": "Samuel I. Cabbuag" }, { "name": "Crystal Abidin" } ], "tags": [ "International Journal of Cultural Studies", "Article" ], "content_html": "

Abstract

Studies on digital platforms and online incivility have established that uses of humour can lean towards cyberbullying and hate speech. Focusing on TikTok's affordances and cultures of online incivility, this paper studies how TikTok influencers and their audiences manoeuvre legal-but-harmful humour. Specifically, we study how online incivility has become an accepted and negotiated practice in the Filipino context through the phenomenon of ‘dogshows’, where users throw jabs at individuals using derogatory humour and provocative memes. Through online observation and textual analysis of TikTok posts and their corresponding comment sections, we demonstrate how online incivility is subtly amplified through humour and play, and how Gen Z and young children became both objects and producers of these dogshows. We argue that while there is already peer surveillance at work on TikTok, there needs to be more deliberation between TikTok's policies and at-risk groups to make the platform a more civil space.

Details

Links

DOI

", "_academic": { "doi": "10.1177/13678779241302826", "citation_count": 1, "reference_count": 67, "type": "article", "publisher": "SAGE Publications", "pages": "13678779241302826", "metadata_source": "crossref", "confidence_score": 0.85, "quality_score": 100 } }, { "id": "bibtex:Gonzalez-Bailon2024-rq", "title": "The diffusion and reach of (mis)information on Facebook during the U.s. 2020 election", "content_text": "Social media creates the possibility for rapid, viral spread of content, but how many posts actually reach millions? And is misinformation special in how it propagates? We answer these questions by analyzing the virality of and exposure to information on Facebook during the U.S. 2020 presidential election. We examine the diffusion trees of the approximately 1 B posts that were re-shared at least once by U.S.-based adults from July 1, 2020, to February 1, 2021. We differentiate misinformation fro...", "date_published": "2024-12-15T00:00:00Z", "_discovery_date": "2024-12-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.15195/v11.a41", "external_url": "https://doi.org/10.15195/v11.a41", "authors": [ { "name": "Sandra González-Bailón" }, { "name": "David Lazer" }, { "name": "Pablo Barberá" }, { "name": "William Godel" }, { "name": "Hunt Allcott" }, { "name": "Taylor Brown" }, { "name": "Adriana Crespo-Tenorio" }, { "name": "Deen Freelon" }, { "name": "Matthew Gentzkow" }, { "name": "Andrew Guess" }, { "name": "Shanto Iyengar" }, { "name": "Young Kim" }, { "name": "Neil Malhotra" }, { "name": "Devra Moehler" }, { "name": "Brendan Nyhan" }, { "name": "Jennifer Pan" }, { "name": "Carlos Rivera" }, { "name": "Jaime Settle" }, { "name": "Emily Thorson" }, { "name": "Rebekah Tromble" }, { "name": "Arjun Wilkins" }, { "name": "Magdalena Wojcieszak" }, { "name": "Chad Kiewiet de Jonge" }, { "name": "Annie Franco" }, { "name": "Winter Mason" }, { "name": "Natalie Stroud" }, { "name": "Joshua Tucker" } ], "tags": [ "Article", "Sociological Science" 
], "content_html": "

Abstract

Social media creates the possibility for rapid, viral spread of content, but how many posts actually reach millions? And is misinformation special in how it propagates? We answer these questions by analyzing the virality of and exposure to information on Facebook during the U.S. 2020 presidential election. We examine the diffusion trees of the approximately 1 B posts that were re-shared at least once by U.S.-based adults from July 1, 2020, to February 1, 2021. We differentiate misinformation from non-misinformation posts to show that (1) misinformation diffused more slowly, relying on a small number of active users that spread misinformation via long chains of peer-to-peer diffusion that reached millions; non-misinformation spread primarily through one-to-many affordances (mainly, Pages); (2) the relative importance of peer-to-peer spread for misinformation was likely due to an enforcement gap in content moderation policies designed to target mostly Pages and Groups; and (3) periods of aggressive content moderation proximate to the election coincide with dramatic drops in the spread and reach of misinformation and (to a lesser extent) political content.

Details

Links

DOI

", "_academic": { "doi": "10.15195/v11.a41", "citation_count": 7, "reference_count": 22, "type": "article", "publisher": "Society for Sociological Science", "volume": "11", "pages": "1124--1146", "metadata_source": "crossref", "confidence_score": 0.8103448275862069, "quality_score": 100 } }, { "id": "bibtex:Mosleh2024-op", "title": "Divergent patterns of engagement with partisan and low-quality news across seven social media platforms", "content_text": "In recent years, social media has become increasingly fragmented, as platforms evolve and new alternatives emerge. Yet most research studies a single platform—typically Twitter/X, or occasionally Facebook—leaving little known about the broader social media landscape. Here we shed new light on patterns of cross-platform variation in the high-stakes context of news sharing. We examine the relationship between user engagement and news domains’ political orientation and quality across seven platform...", "date_published": "2024-12-15T00:00:00Z", "_discovery_date": "2024-12-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.31234/osf.io/9csy3_v4", "external_url": "https://doi.org/10.31234/osf.io/9csy3_v4", "authors": [ { "name": "Mohsen Mosleh" }, { "name": "Jennifer Nancy Lee Allen" }, { "name": "David Gertler Rand" } ], "tags": [ "PsyArXiv", "Article" ], "content_html": "

Abstract

In recent years, social media has become increasingly fragmented, as platforms evolve and new alternatives emerge. Yet most research studies a single platform—typically Twitter/X, or occasionally Facebook—leaving little known about the broader social media landscape. Here we shed new light on patterns of cross-platform variation in the high-stakes context of news sharing. We examine the relationship between user engagement and news domains’ political orientation and quality across seven platforms: Twitter/X, BlueSky, TruthSocial, Gab, GETTR, Mastodon, and LinkedIn. Using an exhaustive sampling strategy, we analyze all (over 10 million) posts containing links to news domains shared on these platforms during January 2024. We find that the news shared on platforms with more conservative user bases is significantly lower quality on average. Turning to patterns of engagement, we find—contrary to hypotheses of a consistent “right wing advantage” on social media—that the relationship between political lean and engagement is strongly heterogeneous across platforms. Conservative news posts receive more engagement on platforms where most content is conservative, and vice versa for liberal news posts, consistent with an “echo platform” perspective. In contrast, the relationship between news quality and engagement is strikingly consistent: across all platforms examined, lower-quality news posts received higher average engagement even though higher-quality news is substantially more prevalent and garners far more total engagement across posts. This pattern holds despite accounting for poster-level variation, and is observed even in the absence of ranking algorithms, suggesting user preferences – not algorithmic bias – may underlie the underperformance of higher-quality news.

Details

Links

DOI

", "_academic": { "doi": "10.31234/osf.io/9csy3_v4", "citation_count": 1, "reference_count": 0, "type": "article", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Ulloa2024-jm", "title": "Beyond time delays: How web scraping distorts measures of online news consumption", "content_text": "As the exploration of digital behavioral data revolutionizes communication research, understanding the nuances of data collection methodologies becomes increasingly pertinent. This study focuses on one prominent data collection approach, web scraping, and more specifically, its application in the growing field of research relying on web browsing data. We investigate discrepancies between content obtained directly during user interaction with a website (in-situ) and content scraped using the URLs...", "date_published": "2024-11-30T00:00:00Z", "_discovery_date": "2024-11-15T00:00:00Z", "url": "http://arxiv.org/abs/2412.00479v1", "external_url": "http://arxiv.org/abs/2412.00479v1", "authors": [ { "name": "Roberto Ulloa" }, { "name": "Frank Mangold" }, { "name": "Felix Schmidt" }, { "name": "Judith Gilsbach" }, { "name": "Sebastian Stier" } ], "tags": [ "cs.CY", "Article", "arXiv [cs.CY]" ], "content_html": "

Abstract

As the exploration of digital behavioral data revolutionizes communication research, understanding the nuances of data collection methodologies becomes increasingly pertinent. This study focuses on one prominent data collection approach, web scraping, and more specifically, its application in the growing field of research relying on web browsing data. We investigate discrepancies between content obtained directly during user interaction with a website (in-situ) and content scraped using the URLs of participants' logged visits (ex-situ) with various time delays (0, 30, 60, and 90 days). We find substantial disparities between the methodologies, uncovering that errors are not uniformly distributed across news categories regardless of classification method (domain, URL, or content analysis). These biases compromise the precision of measurements used in existing literature. The ex-situ collection environment is the primary source of the discrepancies (~33.8%), while the time delays in the scraping process play a smaller role (adding ~6.5 percentage points in 90 days). Our research emphasizes the need for data collection methods that capture web content directly in the user's environment. However, acknowledging its complexities, we further explore strategies to mitigate biases in web-scraped browsing histories, offering recommendations for researchers who rely on this method and laying the groundwork for developing error-correction frameworks.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CY" ], "metadata_source": "arxiv", "confidence_score": 0.7272727272727272, "quality_score": 100 } }, { "id": "bibtex:Gagrcin2024-dl", "title": "Algorithmic media use and algorithm literacy: An integrative literature review", "content_text": "Algorithms profoundly shape user experiences on digital platforms, raising concerns about their negative impacts and highlighting the importance of algorithm literacy. Research on individuals’ understanding of algorithms and their effects is expanding rapidly but lacks a cohesive framework. We conducted a systematic integrative literature review across social sciences and humanities (n = 169), addressing algorithm literacy in terms of its key conceptualizations and the endogenous, exogenous, and...", "date_published": "2024-11-15T00:00:00Z", "_discovery_date": "2024-11-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/14614448241291137", "external_url": "https://doi.org/10.1177/14614448241291137", "authors": [ { "name": "Emilija Gagrčin" }, { "name": "Teresa K. Naab" }, { "name": "Maria F. Grub" } ], "tags": [ "Article", "New Media & Society" ], "content_html": "

Abstract

Algorithms profoundly shape user experiences on digital platforms, raising concerns about their negative impacts and highlighting the importance of algorithm literacy. Research on individuals’ understanding of algorithms and their effects is expanding rapidly but lacks a cohesive framework. We conducted a systematic integrative literature review across social sciences and humanities (n = 169), addressing algorithm literacy in terms of its key conceptualizations and the endogenous, exogenous, and personal factors that influence it. We argue that existing research can be framed in terms of experiential learning cycles and outline how this approach can be beneficial for acquiring algorithm literacy. Finally, we propose a future research agenda that includes defining core competencies relevant to algorithm literacy, standardization of measures, integrating subjective and factual aspects of algorithm literacy, and task- and domain-specific approaches.

Details

Links

DOI

", "_academic": { "doi": "10.1177/14614448241291137", "citation_count": 7, "reference_count": 98, "type": "article", "publisher": "SAGE Publications", "pages": "14614448241291137", "metadata_source": "crossref", "confidence_score": 0.83, "quality_score": 100 } }, { "id": "bibtex:Costello2024-kp", "title": "Durably reducing conspiracy beliefs through dialogues with AI", "content_text": "Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The interven...", "date_published": "2024-09-13T00:00:00Z", "_discovery_date": "2024-09-15T00:00:00Z", "url": "https://doi.org/10.1126/science.adq1814", "external_url": "https://doi.org/10.1126/science.adq1814", "authors": [ { "name": "Thomas H. Costello" }, { "name": "Gordon Pennycook" }, { "name": "David G. Rand" } ], "tags": [ "Science", "Article" ], "content_html": "

Abstract

Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.

Details

Links

DOI

", "_academic": { "doi": "10.1126/science.adq1814", "citation_count": 158, "reference_count": 89, "type": "article", "publisher": "American Association for the Advancement of Science (AAAS)", "volume": "385", "pages": "eadq1814", "metadata_source": "crossref", "confidence_score": 0.83, "quality_score": 100 } }, { "id": "bibtex:Freelon2024-sc", "title": "The post-API age of social media data access: Past, present, and future", "content_text": "Social media data have become a mainstay of social science research since the first application programming interfaces (APIs) debuted in the mid-2000s. Over time, platforms have radically altered their data offerings, substantially determining the kinds of research that can be conducted. This article presents historical and normative analyses of the current state of platform data precarity, defined by Freelon (2018) as the post-API age . We recount a periodized history of social media data acces...", "date_published": "2024-09-15T00:00:00Z", "_discovery_date": "2024-09-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/00027162251372557", "external_url": "https://doi.org/10.1177/00027162251372557", "authors": [ { "name": "Deen Freelon" }, { "name": "Cristina Monzer" }, { "name": "Gayoung Jeon" }, { "name": "Cameron Moy" }, { "name": "Natasha Williams" } ], "tags": [ "Article", "The ANNALS of the American Academy of Political and Social Science" ], "content_html": "

Abstract

Social media data have become a mainstay of social science research since the first application programming interfaces (APIs) debuted in the mid-2000s. Over time, platforms have radically altered their data offerings, substantially determining the kinds of research that can be conducted. This article presents historical and normative analyses of the current state of platform data precarity, defined by Freelon (2018) as the post-API age. We recount a periodized history of social media data access spanning nearly 20 years, characterize the data access options currently offered by six prominent platforms, and make recommendations for improving platform data access. Our primary aim is to help social media researchers understand how access to social media data has evolved over the years and consider how platforms might help them conduct more rigorous research moving forward.

Details

Links

DOI

", "_academic": { "doi": "10.1177/00027162251372557", "citation_count": 2, "reference_count": 85, "type": "article", "publisher": "SAGE Publications", "volume": "715", "pages": "16--37", "metadata_source": "crossref", "confidence_score": 0.8272727272727272, "quality_score": 100 } }, { "id": "bibtex:Bosch2024-hj", "title": "The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine", "content_text": "TikTok has emerged as a powerful platform for the dissemination of mis- and disinformation about the war in Ukraine. During the initial three months after the Russian invasion in February 2022, videos under the hashtag #Ukraine garnered 36.9 billion views, with individual videos scaling up to 88 million views. Beyond the traditional methods of spreading misleading information through images and text, the medium of sound has emerged as a novel, platform-specific audiovisual technique. Our analysi...", "date_published": "2024-09-15T00:00:00Z", "_discovery_date": "2024-09-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.1177/14614448241251804", "external_url": "https://doi.org/10.1177/14614448241251804", "authors": [ { "name": "Marcus Bösch" }, { "name": "Tom Divon" } ], "tags": [ "Article", "New Media & Society" ], "content_html": "

Abstract

TikTok has emerged as a powerful platform for the dissemination of mis- and disinformation about the war in Ukraine. During the initial three months after the Russian invasion in February 2022, videos under the hashtag #Ukraine garnered 36.9 billion views, with individual videos scaling up to 88 million views. Beyond the traditional methods of spreading misleading information through images and text, the medium of sound has emerged as a novel, platform-specific audiovisual technique. Our analysis distinguishes various war-related sounds utilized by both Ukraine and Russia and classifies them into a mis- and disinformation typology. We use computational propaganda features—automation, scalability, and anonymity—to explore how TikTok’s auditory practices are exploited to exacerbate information disorders in the context of ongoing war events. These practices include reusing sounds for coordinated campaigns, creating audio meme templates for rapid amplification and distribution, and deleting the original sounds to conceal the orchestrators’ identities. We conclude that TikTok’s recommendation system (the “for you” page) acts as a sound space where exposure is strategically navigated through users’ intervention, enabling semi-automated “soft” propaganda to thrive by leveraging its audio features.

Details

Links

DOI

", "_academic": { "doi": "10.1177/14614448241251804", "citation_count": 22, "reference_count": 99, "type": "article", "publisher": "SAGE Publications", "volume": "26", "pages": "5081--5106", "metadata_source": "crossref", "confidence_score": 0.86, "quality_score": 100 } }, { "id": "bibtex:Budak2024-ef", "title": "Misunderstanding the harms of online misinformation", "content_text": "The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is ...", "date_published": "2024-06-06T00:00:00Z", "_discovery_date": "2024-06-15T00:00:00Z", "url": "https://doi.org/10.1038/s41586-024-07417-w", "external_url": "https://doi.org/10.1038/s41586-024-07417-w", "authors": [ { "name": "Ceren Budak" }, { "name": "Brendan Nyhan" }, { "name": "David M. Rothschild" }, { "name": "Emily Thorson" }, { "name": "Duncan J. Watts" } ], "tags": [ "Article", "Nature" ], "content_html": "

Abstract

The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.

Details

Links

DOI

", "_academic": { "doi": "10.1038/s41586-024-07417-w", "citation_count": 70, "reference_count": 154, "type": "article", "publisher": "Springer Science and Business Media LLC", "volume": "630", "pages": "45--53", "metadata_source": "crossref", "confidence_score": 0.8214285714285714, "quality_score": 100 } }, { "id": "bibtex:Tan2024-vl", "title": "Large Language Models for data annotation and synthesis: A survey", "content_text": "Data annotation and synthesis generally refers to the labeling or generating of raw data with relevant information, which could be used for improving the efficacy of machine learning models. The process, however, is labor-intensive and costly. The emergence of advanced Large Language Models (LLMs), exemplified by GPT-4, presents an unprecedented opportunity to automate the complicated process of data annotation and synthesis. While existing surveys have extensively covered LLM architecture, trai...", "date_published": "2024-02-21T00:00:00Z", "_discovery_date": "2024-02-15T00:00:00Z", "url": "http://arxiv.org/abs/2402.13446v3", "external_url": "http://arxiv.org/abs/2402.13446v3", "authors": [ { "name": "Zhen Tan" }, { "name": "Dawei Li" }, { "name": "Song Wang" }, { "name": "Alimohammad Beigi" }, { "name": "Bohan Jiang" }, { "name": "Amrita Bhattacharjee" }, { "name": "Mansooreh Karami" }, { "name": "Jundong Li" }, { "name": "Lu Cheng" }, { "name": "Huan Liu" } ], "tags": [ "arXiv [cs.CL]", "Article", "cs.CL" ], "content_html": "

Abstract

Data annotation and synthesis generally refers to the labeling or generating of raw data with relevant information, which could be used for improving the efficacy of machine learning models. The process, however, is labor-intensive and costly. The emergence of advanced Large Language Models (LLMs), exemplified by GPT-4, presents an unprecedented opportunity to automate the complicated process of data annotation and synthesis. While existing surveys have extensively covered LLM architecture, training, and general applications, we uniquely focus on their specific utility for data annotation. This survey contributes to three core aspects: LLM-Based Annotation Generation, LLM-Generated Annotations Assessment, and LLM-Generated Annotations Utilization. Furthermore, this survey includes an in-depth taxonomy of data types that LLMs can annotate, a comprehensive review of learning strategies for models utilizing LLM-generated annotations, and a detailed discussion of the primary challenges and limitations associated with using LLMs for data annotation and synthesis. Serving as a key guide, this survey aims to assist researchers and practitioners in exploring the potential of the latest LLMs for data annotation, thereby fostering future advancements in this critical field.

Details

Links

arXiv | PDF

", "_academic": { "open_access": true, "type": "article", "subjects": [ "cs.CL" ], "metadata_source": "arxiv", "confidence_score": 0.715, "quality_score": 100 } }, { "id": "bibtex:Lai2024-to", "title": "Estimating the ideology of political YouTube videos", "content_text": "Abstract We present a method for estimating the ideology of political YouTube videos. The subfield of estimating ideology as a latent variable has often focused on traditional actors such as legislators, while more recent work has used social media data to estimate the ideology of ordinary users, political elites, and media sources. We build on this work to estimate the ideology of a political YouTube video. First, we start with a matrix of political Reddit posts linking to YouTube videos and ap...", "date_published": "2024-02-15T00:00:00Z", "_discovery_date": "2024-02-15T00:00:00Z", "_date_estimated": true, "url": "https://doi.org/10.2139/ssrn.4088828", "external_url": "https://doi.org/10.2139/ssrn.4088828", "authors": [ { "name": "Angela Lai" }, { "name": "Megan Brown" }, { "name": "James Bisbee" }, { "name": "Richard Bonneau" }, { "name": "Joshua Aaron Tucker" }, { "name": "Jonathan Nagler" } ], "tags": [ "SSRN Electronic Journal", "Article" ], "content_html": "

Abstract

We present a method for estimating the ideology of political YouTube videos. The subfield of estimating ideology as a latent variable has often focused on traditional actors such as legislators, while more recent work has used social media data to estimate the ideology of ordinary users, political elites, and media sources. We build on this work to estimate the ideology of a political YouTube video. First, we start with a matrix of political Reddit posts linking to YouTube videos and apply correspondence analysis to place those videos in an ideological space. Second, we train a language model with those estimated ideologies as training labels, enabling us to estimate the ideologies of videos not posted on Reddit. These predicted ideologies are then validated against human labels. We demonstrate the utility of this method by applying it to the watch histories of survey respondents to evaluate the prevalence of echo chambers on YouTube in addition to the association between video ideology and viewer engagement. Our approach gives video-level scores based only on supplied text metadata, is scalable, and can be easily adjusted to account for changes in the ideological landscape.

Details

Links

DOI

", "_academic": { "doi": "10.2139/ssrn.4088828", "citation_count": 1, "reference_count": 29, "type": "article", "publisher": "Cambridge University Press (CUP)", "volume": "32", "pages": "1--16", "metadata_source": "crossref", "confidence_score": 0.8214285714285714, "quality_score": 100 } }, { "id": "bibtex:Bakshy2015-rn", "title": "Political science. Exposure to ideologically diverse news and opinion on Facebook", "content_text": "Not getting all sides of the news? People are increasingly turning away from mass media to social media as a way of learning news and civic information. Bakshyet al.examined the news that millions of Facebook users' peers shared, what information these users were presented with, and what they ultimately consumed (see the Perspective by Lazer). Friends shared substantially less cross-cutting news from sources aligned with an opposing ideology. People encountered roughly 15% less cross-cutting con...", "date_published": "2015-06-05T00:00:00Z", "_discovery_date": "2015-06-15T00:00:00Z", "url": "https://doi.org/10.1126/science.aaa1160", "external_url": "https://doi.org/10.1126/science.aaa1160", "authors": [ { "name": "Eytan Bakshy" }, { "name": "Solomon Messing" }, { "name": "Lada A. Adamic" } ], "tags": [ "Article", "newsfeed", "homophily", "facebook", "Science", "news" ], "content_html": "

Abstract

Not getting all sides of the news? People are increasingly turning away from mass media to social media as a way of learning news and civic information. Bakshy et al. examined the news that millions of Facebook users' peers shared, what information these users were presented with, and what they ultimately consumed (see the Perspective by Lazer). Friends shared substantially less cross-cutting news from sources aligned with an opposing ideology. People encountered roughly 15% less cross-cutting content in news feeds due to algorithmic ranking and clicked through to 70% less of this cross-cutting content. Within the domain of political news encountered in social media, selective exposure appears to drive attention. Science, this issue p. 1130; see also p. 1090

Details

Links

DOI

", "_academic": { "doi": "10.1126/science.aaa1160", "citation_count": 2012, "reference_count": 35, "type": "article", "publisher": "American Association for the Advancement of Science (AAAS)", "volume": "348", "pages": "1130--1132", "metadata_source": "crossref", "confidence_score": 0.8024999999999999, "quality_score": 100 } } ] }