De- and recoding algorithmic systems: The case of fact checkers and fact checked users

Research output: Contribution to journal › Journal article › Research › peer-review

With the recent development of debunking on social media into a dominant agenda, fact checkers have increasingly used machine learning (ML) to identify, verify and correct factual claims, as ML promises to scale fact checking practices. However, it also places a new actor between the fact checkers and the fact checked users. In this paper, we conduct a contrastive analysis of how fact checkers and fact checked users understand, evaluate and act towards the algorithmic systems and data flows in Meta’s Third-Party Fact-Checking Program. For both professional users and end users, the algorithmic system is experienced as a black box into which they have limited insight, and their sense-making practices are based on the data and metrics that are made visible to them. In the paper, we draw on and expand theory on decoding algorithms by exploring not only how the two user groups engage in decoding the algorithmic system, but also how they actively engage in forms of recoding by attempting to adapt or modify the algorithmic system to better fit their cultural and social context, which is characterised by both varying epistemic cultures and societal positions. While the fact checkers, from their hegemonic (sometimes negotiated) position, understand the program as a (sometimes stupid) tool and primarily engage in passive acts of recoding, fact checked users, from their oppositional position, understand the program as an unpredictable censoring machine and engage primarily in more active acts of recoding. Based on the analysis, we end the paper with a discussion in which we argue for understanding data reflexivity as highly relational and processual.
Original language: English
Journal: Convergence: The International Journal of Research into New Media Technologies
ISSN: 1354-8565
Publication status: Accepted/In press - 2024
