Applying Misinformation Interventions to Library Instruction and Outreach

This narrative review examines which misinformation interventions, applied either before a misconception takes hold (prebunking) or after (debunking), are effective, and how they can be applied to library information literacy instruction, outreach, and programming. To conduct this review, the researcher performed carefully considered searches in several library science, education, and psychology databases.


Introduction
Researchers in the fields of psychology and education have published dozens of studies exploring methods of combating misinformation, some with measurable success. The research in some cases shows that intuitive misinformation-countering strategies, such as providing people with correct information that is easy to access and based on scientific evidence, are not very effective. Other strategies, such as warning people how they might be manipulated in the future, thereby raising their skepticism, have proven to be more reliably successful. However, there is little evidence that libraries are employing research-based misinformation-adoption prevention methods in their interactions with patrons. This study reviews the literature regarding misinformation interventions and explores their potential application to library instruction, programming, and outreach.

Definitions
In this review, I use the following definition of misinformation: "information that is initially presented as true but later found to be false." Reliance on misinformation is different from ignorance, or lack of knowledge. Beliefs based on misinformation, unlike ignorance, are often strongly held and very difficult to correct. Reliance on misinformation cannot, unfortunately, be corrected simply by the introduction of more information.
The results of this study reveal two common approaches to addressing the problem of misinformation: one is to attempt to debunk the misinformation after it has already been accepted, and the other is to attempt to "prebunk" the misinformation before it is accepted and prevent its adherence in the first place. Both strategies employ misinformation interventions. In this study, I define an intervention as an activity that attempts to prevent the acceptance of misinformation and that can be implemented by a librarian or library worker through instruction, outreach, and/or programming efforts.
For the purposes of this study, I define instruction, outreach, and programming as shown in Table 1.

Methodology
I included works that met the following parameters:
• Works that were peer-reviewed
o The quality of the study construction was considered
o Both qualitative and quantitative data were considered
• Works written in English
• Meta-analyses and literature reviews were prioritized, due to their increased rigor
Sources about social media and algorithmic interventions were excluded due to their lack of applicability to library instruction, outreach, and programming.
I used the following combinations of keywords:
• misinformation AND subj: meta-analysis (all except Google Scholar)
• subj: misinformation AND meta-analysis OR systematic review OR literature review (all except Google Scholar)
• subj: misinformation AND interventions OR strategies OR best practices (all except Google Scholar)
• misinformation AND meta-analysis AND intervention (Google Scholar only)
Where possible, I searched the full text of the sources. Book chapters were also included, provided they met the initial inclusion parameters. Sources that did not meet these parameters were eliminated. The results of my search can be found in the Appendix. After the initial review using these parameters, I included additional sources from the reference lists of the selected resources if they met the initial study parameters and either consisted of a meta-analysis or literature review or were cited in at least two of the sources from the initial selection. The resulting list of sources for this narrative review, with duplicates eliminated, consisted of 58 sources, plus an additional 13 obtained through secondary searching. Most of the articles came from journals related to psychology and communication. None came from journals in the field of librarianship, underlining the need for more research and activity in this area among librarians.
After selecting the articles and books for inclusion, I coded them based on the intervention strategy used by each researcher. After an initial screening, I was able to identify two larger coding categories: interventions that occur before the research participant has encountered misinformation, and those that occur afterward and attempt to correct the misinformation. The specific strategies and the parameters of their success are explored in the Findings section below.

Findings
The codes from the articles reviewed generally fall into two categories: strategies undertaken after misinformation has been adopted to attempt to correct it (debunking) and strategies undertaken before misinformation has been adopted to prevent it from being accepted in the first place (prebunking). In addition to describing these strategies and the most successful parameters for their design and implementation, I also provide some general context about why misinformation correction is so challenging.

Context
While the primary goal of this study was to explore misinformation interventions, some information about the research context of misinformation adoption helps to frame the subject. This section briefly explores the psychology behind misinformed beliefs.
Intuitively, it would seem that combating misinformation simply requires telling those with misinformed beliefs that they are mistaken. However, correcting existing beliefs can be extremely challenging, and this simple strategy is largely ineffective. When individuals continue to believe misinformation despite being presented with evidence disproving their erroneous beliefs, it is called the continued influence effect. Researchers have identified at least four theories, some of which overlap, to explain the prevalence of the continued influence effect. Described in more detail below, these include mental model theory, memory-based judgment theory, motivated reasoning theory, and societal impacts, among other explanatory models.
One theory to explain the continued influence effect is mental model theory, which posits that individuals create a mental model of an event or situation and are reluctant to modify the model with new information when the existing belief has sufficient explanatory power. When misinformation is integrated into a person's mental model for a topic, it becomes very challenging to change that model later unless corrective information is presented at the same time; moreover, repetition of misinformation tends to increase its fluency (the familiarity and easy retrieval of an idea), while repetition of corrective information does not produce a comparable benefit (Walter & Tukachinsky, 2020).
The memory-based judgment theory, closely related to the mental model theory, proposes that people retrieve relevant information from memory when making judgments about a claim, and if the claim's message is consistent with references in memory, it seems true. Misinformation persists because it becomes familiar, and familiarity and fluency are mental heuristics that humans often rely upon. It is for this reason that misinformation corrections that increase exposure to the false claim can sometimes have a backfire effect and inadvertently increase confidence in a false belief.
Finally, even if someone encounters a corrective message about misinformation, they may fail to correctly retrieve that information when they remember it later. The "tag" that categorizes the information as untrue may "fall off," with only the misinformation itself being retrieved. Along the same lines, when someone does not sufficiently integrate a correction into memory, it is not co-activated along with the misinformation when that subject is called to mind, even if the correction exists in memory.
Motivated reasoning theory suggests that human minds also strive for consistency, and corrective information that challenges someone's worldview or tribal identity is often met with motivated reasoning to explain away the new evidence. In a way, political ideology can also be considered a mental model, and an ideological schema can be used to fill in information gaps or rationalize counterarguments to corrective information that conflicts with someone's worldview (Ecker & Ang, 2019; Walter & Tukachinsky, 2020). Along the same lines, misinformation with minimal or no evidence that aligns with someone's worldview may be adopted without question, even when experts or authorities insist it is false, although the role of worldview in misinformation detection is still unsettled in the literature. The continued influence effect can also arise because people reject new information due to psychological reactance, or resistance to being told what to do. Even if acknowledging the false nature of a misinformed belief is in the best interest of the believer, that person might be resistant to the feeling of being forced to admit that their belief was false.
In addition to relying on heuristics related to familiarity with, and negativity toward, information, people also rely on societal contexts to judge the credibility of a source and, by extension, the reliability of its message (Margolin et al., 2018). What people consider credible varies, but evidence shows that the claims of a credible source are more likely to be accepted (Nadaravic et al., 2020; Walter & Tukachinsky, 2020). Generally, experts and figures of authority are considered more trustworthy, and they have the power to mislead many when they make false claims. To some extent, whether the information source is an "in-group" or "out-group" member influences a person's likelihood of trusting the source's message. However, even sources considered untrustworthy can convey an influential message, if the message is memorable and if the recipient lacks the motive and/or expertise to evaluate it carefully themselves.
All of these contributors to the continued influence effect make correction of misinformation very challenging, and, even when correction is possible, the effects may not be long-lasting. However, knowledge of these effects can help scientists and instructors develop corrective messages that are more impactful. In fact, research has shown that, although difficult to construct, carefully crafted corrections can be successful.

Debunking Strategies
Debunking strategies involve targeting specific misinformation claims and taking steps to reduce a target population's belief in them, and research suggests that such strategies can be successful. The primary debunking strategy that researchers have explored is the use of corrective messaging. A corrective message is a statement, image, or other communication that attempts to correct a false belief after it has been adopted.
How the correction is presented may be key to its success. Walter and Tukachinsky (2020) found through a meta-analysis that corrections that were coherent, consistent with an audience's worldview, and delivered by the misinformation source itself were most successful. There is mixed evidence regarding whether a simple rebuttal or a factual elaboration is more effective. While there is some evidence that factual elaboration is more effective at times (van der Meer & Jin, 2020), its advantage may depend on the person's previous knowledge of the issue; to correct beliefs about which the person has relatively little previous knowledge, a simple rebuttal may be sufficient. Repeating and emphasizing the correction, while refraining from repeating the misinformation, can help to reinforce the correct information in the subject's memory.
The content of the correction itself is also important. There is disagreement in the literature about whether concurrently presenting both the misinformation and the correct information in a corrective message effectively combats the misinformation, or whether it actually strengthens adherence to the misinformation, a phenomenon called the backfire effect. The backfire effect was once thought to be very prevalent and an important factor when designing an effective correction. However, more recent studies have shown that it is much less common than once thought, and growing evidence shows that it is absent or rare in many cases (Swire-Thomson et al., 2021). Additional research shows that strategies that share the misinformation first, followed by the correction, or vice versa, were equally effective, as long as both elements were present in the corrective message. While evidence about the effectiveness of a simple rebuttal is mixed, research shows that adding an alternative narrative to a rebuttal can help fill the cognitive gap left when someone begins questioning a misinformed belief. For the alternative narrative to be successful, it should be plausible and, ideally, explain why the misinformation existed or was spread in the first place. The correction may also be more effective if it includes a claim that the spreader of the misinformation was doing so deliberately (Campos-Castillo & Shuster, 2021).
In addition, it helps if the correction fits into someone's worldview and psychological dispositions and avoids focusing on conflicts with that worldview (Walter & Tukachinsky, 2020). For example, corrective information about genetically modified food misinformation could emphasize benefits for farmers and the economy, rather than the foods' safety as established by science and engineering, a topic that can feed fears about "big Ag" companies (Lunz Trujillo et al., 2021; Walter & Tukachinsky, 2020). Unfortunately, evidence in line with motivated reasoning theory concludes that corrections can have limited effectiveness when they are in dissonance with partisan attitudes, especially for those on the political Right (Ecker & Ang, 2018).
A logic-based approach to correction uses instruction about the logical fallacies found in misinformation messages to undermine them. By teaching others about logical fallacies, logic-based corrections have the potential to apply more broadly to future messaging that uses the same tactics. However, this approach may have mixed effectiveness depending on the topic of the misinformation being corrected (Vraga, Kim, & Cook, 2019). There is some limited evidence that narrative corrective messages that rely on emotion, combined with data and facts, are effective in specific contexts (Lazic & Zezelj, 2021; Sangalang, Ophir, & Cappella, 2019). Other research shows that narrative elements of corrections do not impact their effectiveness.
There is evidence that corrections that emphasize social norms may have more success. For example, descriptive norms (which demonstrate that many others share the belief), injunctive norms (socially expected behavior), and consensus among experts all may contribute to a more impactful correction. Emphasizing social benefit may be one of many strategies that can improve the strength of a correction, depending on the motivations of the correction's audience. Attempting to customize the correction to the audience's false beliefs, sometimes called psychological microtargeting, while challenging, may be the most successful strategy of all (Lunz Trujillo et al., 2021).
The source of the correction also matters, although evidence about the most convincing sources is mixed. When it comes to health information, misinformation shared by peers is more difficult to correct than misinformation shared by news sources. This is true even though peers are often unlikely to possess medical expertise. People who have low trust in institutions are also generally more susceptible to misinformation; as one study put it, "Low trust in medical experts coincides with believing vaccine misinformation." For medical information, perceptions of severity and susceptibility can motivate someone to pay more attention to a message, regardless of its accuracy; however, people may also be more attentive to corrections for topics they care more about. If the claim may be shared with others, people also consider whether their audience will find it credible as part of their evaluation.
Conversely, more recent studies show that corrections from experts, including government agencies and news sources, are more effective than corrections from non-experts, despite recent fears about decreasing trust in expertise. In addition, studies that approach debunking by first establishing source credibility have shown success. Clearly, the authority of the source does play a role in someone's evaluation of a correction, even while social connections can interfere and complicate matters. More research about the role of a correction's source is needed.
Side-stepping the issue entirely, some researchers advise against relying on messaging about the credibility of experts in corrections and instead recommend criticizing the credibility of the misinformation source (Walter & Tukachinsky, 2020). Research has shown that the credibility of the source of corrective information is often ignored, whereas the credibility of a misinformation source is not. In addition, describing how a source is untrustworthy helps explain why the misinformation exists in the first place, making the correction more credible (Walter & Tukachinsky, 2020).

Prebunking Strategies
While debunking strategies remain an important focus of study, some researchers conclude that it is much more challenging to correct beliefs based on misinformation than it is to prevent people from adhering to those beliefs in the first place. It can be difficult to disentangle prebunking and debunking strategies, since someone receiving a corrective message may have already encountered the false belief or may be hearing about it for the first time. Here, I describe research that specifically addresses prebunking strategies.
Prebunking consists of warning people about how they might be manipulated by others in the future, rather than trying to correct someone's existing misinformed beliefs. Prebunking strategies can be simple. At their most basic, pre-exposure warnings can serve to heighten people's skepticism, slow down their thought processes to encourage more careful reasoning, and increase their ability to judge more accurately between truth and falsehood (Ecker et al., 2022). Even just encouraging reflection and the engagement of metacognitive skills before exposure to misinformation can improve source evaluation.
Misinformation "inoculation," an increasingly common approach to prebunking, uses vaccination as a metaphor for increasing misinformation resistance. An inoculation intervention usually begins with a warning that others may try to trick the participant, which can activate the person's "immune system defenses" against the misinformation to come. The warning may be particular to a specific kind of misinformation message or describe general misinformation-spreading tactics. This is followed by an explanation of the persuasion tactic in a "weakened" form, such as in the context of a study or classroom. The "immune" response of the participant's mind may help them prepare to avoid similar, genuine misinformation persuasion tactics they encounter in their everyday lives. Inoculation serves to raise skepticism about new information and generally makes people more alert to the quality of the information being shared. Some manipulation tactics that prebunking strategies can warn against include impersonation, emotion, polarization, conspiracy, discredit, and trolling (see Table 2, Roozenbeek, van der Linden, & Nygren, 2020).
Prebunking techniques can be categorized in several ways. One way is to consider active techniques, such as games, versus passive techniques, such as graphics. Evidence shows that both active and passive prebunking techniques can be effective. Another way to categorize prebunking strategies is in terms of issue-based interventions, which address a specific piece of misinformation, versus technique-based interventions (also called "broad-spectrum interventions"), which raise skepticism generally without addressing a specific misinformation claim. While both approaches can be effective, technique-based interventions work across a wide variety of existing and developing misinformation claims, making them generally more useful.
One passive prebunking technique is simply to raise someone's skepticism so that they are more careful in their judgment of information. Simple interventions that "nudge" people to consider the veracity of information can improve their identification of misinformation. This can be as simple as talking with people about the ways that sources can deceive. While it may seem logical that people are more susceptible to misinformation because of ideologically motivated reasoning, the research appears to show that it is a lack of reasoning that results in poor judgments of claims (Pennycook & Rand, 2019). For this reason, simply raising subjects' awareness can increase detection of misinformation.
One issue with nudges that encourage healthy skepticism is that, while they may raise awareness, they do not offer any training to help people make a good judgment once their skepticism has been activated. For this reason, some researchers advise pairing passive prebunking techniques with better education and activities that increase trust in the media.
One promising active inoculation approach is the use of games to raise skepticism among players. One well-researched example is the "Bad News" game developed by Roozenbeek and van der Linden (TILT & Cambridge Social Sciences Decision-Making Lab). This game was found to significantly improve participants' misinformation identification techniques, as well as increase their confidence in their evaluations (Basol, Roozenbeek, & van der Linden, 2020). Unlike an intervention that focuses on a specific issue or topic, the Bad News game is an active, technique-based inoculation strategy that teaches about misinformation-spreading techniques without being limited to specific topics. Studies of the game's effectiveness have found that a gamified approach such as this works across multiple misinformation examples, regardless of political ideology, gender, and age group. Research has even shown the game to be effective in other languages with participants from a variety of countries.
Several other researchers have tried game-based learning for inoculating against misinformation. Another game shown to both improve players' identification of misinformation and increase their confidence in spotting it is called "Go Viral". Playing the game increases players' overall skepticism about news, whether real or manipulative, though the skepticism about real news was found to fade after about a week. However, the researchers concluded that more rigorous research is needed to determine whether the observed changes are caused by item effects rather than real underlying skepticism. Yang et al. developed an online game called "Trustme!" that was found to have significant positive impacts on participants' ability to evaluate information, although it did not affect participants' skepticism toward online information in general. Some research has shown a relationship between information literacy skills and the ability to detect misleading information, as well as a decreased tendency to share misinformation (Jones-Jang, Mortensen, & Liu, 2019). Some of these same researchers call more generally for public education and widespread teaching initiatives that encourage "healthy skepticism". Teaching students how to identify misinformation is a promising intervention, but research shows that educational interventions that provide feedback and guidance have more success than those that simply teach students fact-checking skills.
One example of an intervention that can serve as either a debunking or prebunking technique, depending on the viewer, is the use of warnings on social media platforms. Warnings, in addition to social media platform policies and algorithms that discourage, rather than encourage, the spread of misinformation, are important strategies for fighting misinformation. However, these interventions are not explored in depth here because of their limited usefulness in designing library outreach, programming, and instruction interventions. There is widespread consensus that solutions targeting the supply of misinformation (such as improved social media platform algorithms) and solutions targeting the consumption of misinformation (such as information literacy education) must be employed simultaneously to have a hope of combating misinformation spread.
While effective prebunking interventions have been developed, some researchers contend that no single intervention is sufficient on its own to combat misinformation and encourage the use of multiple strategies. In addition, no inoculation strategy is a silver bullet that will work in all situations; the information landscape is complex, and human responses to information vary widely (Roozenbeek, van der Linden, & Nygren, 2020). Combining inoculation strategies with critical thinking and civic online reasoning interventions may have the best chance at success (Roozenbeek, van der Linden, & Nygren, 2020). Repeating these strategies will also be important to combat the fading of individual gains over time.

Discussion
Many factors contribute to the failure of interventions to correct beliefs based on misinformation: faulty retrieval of information (or of the "false" label attached to information), reliance on familiarity and fluency in making judgments, and psychological reactance, or the dislike of being told what to do. While research about misinformation interventions like prebunking is still relatively new, the existing literature provides evidence-based approaches to combating misinformation, both before and after it has been adopted. The following are some suggestions for how these approaches could be applied to instruction, outreach, and programming efforts in libraries of all types.
First, it is important to recognize that simply providing access to accurate information is not sufficient to help people avoid misinformation. The mental model and the memory-based judgment theories show that people are unlikely to modify their existing, strongly held beliefs, even if evidence and facts that contradict those beliefs exist. Even directly presenting people with evidence is not always effective, because it can be easily disregarded when mistakenly considered irrelevant.
At the same time, messages and programming from the library that raise healthy skepticism do have the potential to improve patrons' ability to detect misinformation. This is more likely to be successful when it encourages reflection, uses engaging techniques like gamification, and follows prebunking best practices. Misinformation inoculation strategies in particular show great potential, because they can present misinformation strategies in a vague enough way to raise skepticism without teaching people how to take part in the manipulation. Importantly, misinformation examples shared in the controlled, educational context of library outreach can provide a "vaccine" of sorts against real misinformation that patrons are likely to encounter in their lives. For example, programming or education that introduces patrons to ways that others might try to trick them into accepting and sharing misinformation can alert them to such manipulative techniques in the future. Some examples of how the takeaways from this review could be applied to library instruction, outreach, and programming are explored below.

Library Instruction
Academic librarians in particular may find themselves able to apply prebunking or debunking to information literacy instruction in a traditional classroom context. This might consist of incorporating these concepts into a regular library instruction session, or soliciting the opportunity to offer instruction that focuses specifically on misinformation detection and avoidance. In the former situation, a potent way to introduce debunking when conducting sample searches would be to use an example topic that is commonly the subject of misinformation messages, accompanied by an explanation of why the misinformation is so widespread. For example, when doing a demonstration search in the library catalog or a database, the librarian could use the topic of "vaccines and autism" and take the opportunity to explain why so much misinformation about the false link between vaccines and autism spectrum disorder exists. This approach is most likely to be effective if the claim that the librarian chooses to investigate is not so controversial that many of the students will already have strongly held and/or inaccurate beliefs about it. Considering what is currently a hot topic in the news may also help when selecting a topic.
If the librarian has the opportunity to devote an entire instruction session (or, even more fortuitously, an entire credit-bearing course) to the subject of countering misinformation, some or all of the content could consist of describing the persuasion techniques used by misinformation purveyors, along with real-world examples. Giving students opportunities to identify which persuasion tactic is being employed can give them valuable practice that they can apply in authentic encounters with misinformation online. Another approach that I have used with success is having the students play an existing or original misinformation identification game, such as the "Bad News" game (TILT & Cambridge Social Sciences Decision-Making Lab) or "Go Viral" (Cambridge Social Decision-Making Lab et al.). Students can be directed to play the game alone or in small groups, competing against one another. After they have had a chance to play, they can be encouraged to reflect on the experience and how it has affected their ability to recognize misinformation. When it comes to library instruction, there is considerable room for librarians to develop new and creative ways to "inoculate" students against misinformation.

Outreach
Librarians of all types often provide patron outreach that takes place informally and in non-classroom environments. For example, social media communication and other marketing outlets (e.g., flyers, monitor displays, signs, etc.) could be used to raise awareness about common misinformation persuasion techniques. Librarians can reach patron audiences where they encounter misinformation the most by focusing on popular social media platforms; for example, TikTok videos about how people have been tricked by influencers who are being paid; quick tips on Instagram for how to identify a fake Instagram account; or videos about how content creators on YouTube can develop a steady stream of misleading videos that the YouTube algorithm heavily promotes. This could be done in conjunction with a relevant library program (see below) or for a themed week or day, like U.S. Media Literacy Week, Digital Citizenship Week, or International Fact-Checking Day.
Librarians can also use displays to get patrons to think about why misinformation is widespread about certain topics, or to give them the skills to identify and avoid misinformation.

Programming
Librarians and library workers also often offer workshops and events that do not take place in a formal classroom but are scheduled and themed. These events could be educational and consist of activities similar to a formal information literacy instruction session (see above), or they could take a more creative approach. For example, the library could invite an author, journalist, or other expert (or panel of experts) to speak about misinformation persuasion tactics in general or as they relate to a specific issue. Other program types could be combined with a focus on misinformation detection, such as a Wikipedia Edit-a-Thon, a film showing, a trivia night, or any event related to health or financial literacy. Inoculation techniques could also be used to briefly preface existing programs about controversial subjects. For instance, attendees at a program on the topic of climate change could be warned about the ways in which misinformation is spread about the subject before the formal program begins.
Programming could also be asynchronous and virtual, such as a misinformation email challenge or a competitive game (see the discussion of misinformation games above). If the library interacts with teens or adults in online spaces, such as Facebook, Discord, or Reddit, it could host discussions about common misinformation persuasion tactics using real-world examples. Libraries could also hold Ask Me Anything events with fact-checkers, journalists, or librarians themselves, with an emphasis on how these experts regularly evaluate claims.

Other Applications
While librarians may hesitate to correct a patron in a one-on-one interaction (although correction should certainly be considered when the misinformation a patron is sharing is dangerous to themselves or others), these interactions may provide opportunities for sharing corrective statements with patrons as a debunking technique. Librarians should carefully consider the worldview of their audience and ask themselves: "Are there ways to frame this issue that will both explain a faulty belief that audience members might have and replace it with an accurate alternative?" Corrective statements should repeat the true claim and avoid overly emphasizing the misinformation; focus on potential reasons for the misinformation that align with the audience's worldview; point out flaws in the credibility of the source of the misinformation; explain why the misinformation exists in the first place; and avoid partisan wording and ideas when possible.
While one-time responses to misinformation, such as corrections or prebunking strategies, are useful and important approaches that librarians should consider, the work to combat misinformation is a long-term goal that will likely require multiple strategies, including the more complicated task of teaching patrons how various kinds of information are created. This can help patrons avoid the trap of becoming skeptical to the point of cynicism, which can be a gateway to conspiracy theory thinking (Hawley, 2019). Information literacy is essential to our long-term success in fighting misinformation, although its implementation is likely to require much more sustained effort over time than a quick misinformation inoculation tip or a corrective statement. Librarians can play an important role in this endeavor by partnering with educators, providing reliable educational resources, and learning more about media literacy themselves.

Limitations
There were several limitations to this study that impacted its reliability and validity. Because only one researcher selected and coded the literature that was reviewed, there is potential for bias in the articles that were included and in the interpretation of the research. In addition, the narrative review was necessarily limited in scope; as a topic of interest across many fields, a great deal of new work on misinformation interventions is published on a regular basis, making it challenging for a single review to capture trends in this area of study. Moreover, there is still considerable conflicting evidence about certain aspects of misinformation interventions, from how to construct an effective corrective message to the role that partisanship and ideology play in those interventions. The study of misinformation interventions is still young, and new conclusions about best practices will emerge over time. This review also focused largely on research conducted in the United States; future reviews could benefit from the deliberate inclusion of international perspectives.
Future, similar review articles could benefit from multiple coders who use inter-rater reliability measures, as well as an even more systematic approach to reviewing the literature (e.g., following a systematic review framework). Studies that explore the effect of culture, partisan views, and demographic features on misinformation interventions as they apply to library work would also be valuable. The concepts explored in the "Context" section of this study, such as motivated reasoning and the continued influence effect, also warrant further attention in the library science literature.

Conclusion
This study is meant to provide a preliminary synthesis for future research. There is much room for research related to librarians' role in promoting misinformation detection and avoidance; all of the strategies discussed here could be applied in a library environment and subjected to rigorous study. The role of librarians in promoting misinformation prebunking is especially promising and exciting, given its effectiveness and given librarians' identities as information literacy teachers. Misinformation is everywhere, but librarians have the opportunity to play an important societal role, lending their expertise, enthusiasm, and information literacy knowledge to addressing this important problem.