A spoonful of misinformation helps the medicine go viral. How misinformation spreads and who bears the consequences.
Editorial Assistants: Chenhao Zhou and Maren Giersiepen.
Note: An earlier version of this article has been published in the Dutch version of In-Mind.
Think back to January 2021: you are at home because of the COVID-19 pandemic and decide to scroll through social media for some much-needed distraction. Within seconds, you come across posts about microchips in vaccines, COVID spreading through 5G networks, and President Trump suggesting that injecting disinfectants could be a cure for COVID-19. Social media has become a hotbed of armchair experts, doom-mongers, and conspiracy theorists. You laugh off the misinformation easily; after all, nobody really believes this... right?
Let’s call it what it is. But what is ‘it’?
When we encounter falsehoods or inaccurate information, it is easy to label them as “misinformation” or “fake news.” But what do these terms actually mean? Misinformation is a broad term that needs to be distinguished from concepts like disinformation, malinformation, and fake news. These terms are often mixed up, both in everyday life and in the scientific literature. This muddling complicates the work of researchers who are trying to clarify the distinctions [1].
Disinformation refers to falsehoods that are deliberately created with the intent to mislead people and cause harm [2, 3]. The idea that injecting bleach or disinfectants could cure COVID-19 is a concrete example. Malinformation also involves information deliberately spread to harm people or organizations, but the key distinction is that the information itself is true: malinformation takes correct information and removes it from its original context to create a distorted or misleading image [4]. A well-known example is the leaked emails from Hillary Clinton’s campaign chairman during the 2016 presidential campaign. The information itself was accurate but was used strategically to damage Clinton’s reputation and influence the election.
Fake news is harder to define because it can involve both intentional and unintentional dissemination of false information. It is most commonly used in the context of media communication and political debates, and less in scientific publications. President Donald Trump is probably the best-known example of someone who frequently uses the term, helping to popularize it [5].
Then there is misinformation. Like disinformation, misinformation is based on falsehoods, but the distinction lies in intent: misinformation is created and spread without the explicit aim of misleading others [3, 5, 6]. Because a person’s intent is often difficult to determine, misinformation is more broadly defined as information that contradicts the best available evidence [7]. This term can therefore refer to factual inaccuracies, as well as misleading information that, without malicious intent, can influence people [8]. Building on the work of researchers active in this field, this article uses “misinformation” as an umbrella term for all forms of false or misleading information. Unless malicious intent can be established with certainty, we assume the false information was spread unintentionally [5, 9].
Misinformation, a problem of this generation?
Sharing information, and by extension, misinformation, is nothing new. The spread of misinformation played a key role in events such as the Salem witch trials in 1692, the mass hysteria during the 1918 Spanish flu, and the controversy surrounding the Measles, Mumps, and Rubella (MMR) vaccine around the turn of the century [10]. Misinformation has existed throughout history, but social media has amplified its reach and potential impact [11].
Social media connects people worldwide, allowing them to expand their social networks without ever leaving home. Whereas someone’s “in-group” used to consist mainly of those in close physical proximity, it can now include people located far away – simply because we share an online space with them [12]. This shift not only broadens our reach but also accelerates the spread of ideas and information. It is precisely this dynamic that makes social media fertile ground for the spread of misinformation. An example of this in the health domain is the spread of sociogenic illnesses. Sociogenic illnesses are a form of social contagion, where physical symptoms spread rapidly within a tight-knit group without there being any underlying medical cause [13]. While this phenomenon predates the internet, online platforms have amplified the reach and speed at which these symptoms can spread [14, 15, 16].
To illustrate, in 2019, Jan Zimmermann, a well-known German YouTuber, claimed to suffer from Tourette syndrome. Even though specialists considered his tics implausible and bizarre, Zimmermann’s popularity grew. As his following increased, German clinics saw a significant rise in young people believing they had Tourette syndrome, displaying unusual tic-like symptoms closely resembling – or even identical to – Zimmermann’s. In some cases, these youths fully recovered once doctors clarified that their symptoms did not meet the criteria for a Tourette diagnosis. This suggests that at least part of their symptoms was fuelled by the social media-sparked conviction that they had Tourette syndrome [16].
Doctor Google: Friend or foe?
The unbounded nature of the internet offers countless advantages, including large-scale cultural exchange, access to online education, and the rapid dissemination of accurate information. Unfortunately, it allows misleading information, pseudoscience, and so-called “alternative truths” to spread just as quickly. This is particularly true for health-related content, which nowadays is never more than a few clicks away.
The Netherlands ranks second only to Finland in Europe when it comes to searching for health information online [17]. As long as people rely on credible websites, the internet can help relieve some of the pressure on healthcare professionals [18, 19]. If users could access reliable first-line guidance online without having to schedule an appointment, physicians could dedicate more time to urgent cases – an appealing prospect in theory. However, most of the information available online is unregulated and unverified, leaving ample room for misleading, overly simplistic, or even false claims [20]. Moreover, online symptom searches, even when they concern harmless ailments, tend to spiral quickly toward catastrophic diseases [21]. A simple Google search for a headache is just one click away from brain tumours and two clicks from death. This overload of alarming content can easily trigger an escalation of perceived symptoms.
The lack of regulation surrounding health-related websites – and the risk of symptom escalation that comes with it – could end up burdening rather than relieving our healthcare system once people start accepting online misinformation as fact. On the one hand, people may postpone medical appointments, leaving them to cope with anxiety, stress, and potentially serious conditions [5]. On the other hand, it might push people to seek professional care for trivial symptoms or question their physician’s diagnosis when it does not match what they read online [5, 22]. A survey of roughly 700 physicians by the Royal Dutch Medical Association (KNMG) and the Dutch Broadcasting Foundation (NOS) found that misinformation about medical complaints is a major issue in Dutch consultation rooms. Over 80% of physicians reported encountering such misinformation regularly, and nearly half said it increases their workload because they must spend time countering inaccurate claims. It is therefore not surprising that physicians identified the internet and social media as two key sources of misinformation [23].
To make matters more complicated, this spread of misinformation is not limited to people who actively search for health information. Social media algorithms can shape exposure and reinforce existing vulnerabilities. For example, TikTok’s algorithms expose users with eating disorders to significantly more content promoting disordered eating, even when they never searched for or engaged with such content. Increased exposure to these videos correlated with greater symptom severity [24]. Algorithms do not just create echo chambers that validate maladaptive and harmful behaviours; they can also actively exacerbate underlying symptoms.
Given that the internet has become an integral part of daily life, and that many people – especially younger generations – spend a substantial amount of time in online environments, it is crucial to understand who is most susceptible to misinformation. Young people, for instance, spend an average of six hours a day behind a screen [25], suggesting that their health beliefs and potentially their health decisions are increasingly shaped by online environments.
Who is susceptible?
Several factors contribute to why some people are more susceptible to misinformation than others. These include age, media literacy, educational level, where information is obtained, how easily it is accessed, and certain personality traits. Each can influence the extent to which a person accepts misinformation. However, the full picture – what combination of factors carries the most weight for which groups – remains unclear. To date, there simply has not been enough research to draw strong conclusions, and susceptibility to misinformation may even depend on the specific content being shared, making generalisation difficult. Most studies, for instance, suggest that older adults are generally more vulnerable to misinformation. Some researchers link this to lower levels of media literacy among older populations [26, 27, 28]. Yet, these findings did not seem to hold up in the case of COVID-19 misinformation, where younger people were more likely to be misled than older adults [29]. So, how age shapes people’s acceptance of misinformation seems to depend on the context in which that information appears. Education and income add another layer to the picture: both are linked to stronger analytical thinking, which can help people spot misinformation more readily [28, 30].
When it comes to personality, there is little doubt that individual traits play a role in how people process and accept misinformation [30, 31, 32]. Conscientious individuals, who tend to be disciplined and enjoy careful, analytical reasoning, are less likely to believe or spread misinformation than those lower in conscientiousness [31]. Extroverts, by contrast, thrive on interaction and excitement and tend to be action-oriented and impulsive; they are among the heaviest social media users and appear more prone to accepting misinformation than those scoring lower on this trait [33, 34, 35]. Similarly, individuals high in neuroticism, who tend to worry more and experience stronger negative emotions, are more likely to accept misinformation, particularly when it concerns alarming or distressing topics [31, 36].
To get a better understanding of how various factors interact to shape susceptibility to misinformation, more extensive empirical research is necessary. It is particularly crucial to identify more potential factors, the mechanisms behind their impact, and how the acceptance of health-related misinformation affects people’s health perceptions and experiences. When it comes to health, these questions are not merely academic, as they affect people’s understanding of their bodies, their interpretation of symptoms, and their help-seeking behaviour.
What can we do?
Although much research is still needed to develop targeted and effective strategies against misinformation spread via social media, a few measures can already help. Making it standard practice to verify the accuracy of information before sharing it online is one step. Actively correcting misinformation with accurate information is another. Public campaigns that raise awareness about the dangers of misinformation also play an important role [37, 38]. Finally, holding websites and platforms accountable for moderating content is a critical step in curbing the spread of misinformation [39].
In line with the idea that prevention is better than cure, there is now a stronger focus on stopping misinformation before it takes hold. A common approach, known as ‘prebunking,’ draws on inoculation theory: warning people in advance about potential misinformation to make them more resilient to it [40, 41]. According to two recent meta-analyses, prebunking increases people’s ability to discern reliable from unreliable information – without making them universally sceptical towards all information – and decreases their intention to share misinformation [42, 43]. Several countries have already put preventive measures into action, though the scope varies widely. At the extreme end, Australia recently introduced a ban on social media use for anyone under the age of 16 [44]. Exactly how this will be enforced, which platforms will be affected, and whether such a drastic measure will work remains to be seen. In Europe, several countries are likewise considering raising the minimum age for social media use, and EU-wide regulations are under discussion [45].
While no solution will be foolproof, insights from this growing body of research can help shape more informed responses at both the individual and the societal level. Understanding how misinformation spreads, who is susceptible, and why certain narratives take hold provides a solid basis for navigating today’s online landscape. Research on this topic has yet to identify an ultimate cure, but it has already provided valuable tools for recognizing misleading content and, in doing so, has outlined prevention mechanisms that could help shield the online space from misinformation.
Bibliography
[1] S. van der Linden, “Misinformation: susceptibility, spread, and interventions to immunize the public,” Nature Medicine, vol. 28, no. 3, pp. 460–467, 2022, https://doi.org/10.1038/s41591-022-01713-6.
[2] E. Humprecht, F. Esser, and P. Van Aelst, “Resilience to online disinformation: A framework for cross-national comparative research,” The International Journal of Press/Politics, vol. 25, no. 3, pp. 493–516, 2020, https://doi.org/10.1177/1940161219900126.
[3] Q. E. Qinyu, O. Sakura, and G. Li, “Mapping the field of misinformation correction and its effects: A review of four decades of research,” Social Science Information, vol. 60, pp. 522–547, 2021, https://doi.org/10.1177/05390184211053759.
[4] I. K. El Mikati et al., “Defining misinformation and related terms in health-related literature: A scoping review,” Journal of Medical Internet Research, vol. 25, e45731, 2023, https://doi.org/10.2196/45731.
[5] Y. Wang, M. McKee, A. Torbica, and D. Stuckler, “Systematic literature review on the spread of health-related misinformation on social media,” Social Science & Medicine, vol. 240, p. 112552, 2019, https://doi.org/10.1016/j.socscimed.2019.112552.
[6] S. Lewandowsky, U. K. H. Ecker, C. M. Seifert, N. Schwarz, and J. Cook, “Misinformation and its correction: Continued influence and successful debiasing,” Psychological Science in the Public Interest, vol. 13, no. 3, pp. 106–131, 2012, https://doi.org/10.1177/1529100612451018.
[7] E. K. Vraga and L. Bode, “Defining misinformation and understanding its bounded nature,” Political Communication, vol. 37, no. 1, pp. 136–144, 2020, https://doi.org/10.1080/10584609.2020.1716500.
[8] S. van der Linden and Y. Kyrychenko, “A broader view of misinformation reveals potential for intervention,” Science, vol. 384, no. 6699, pp. 959–960, 2024.
[9] L. Wu, F. Morstatter, K. M. Carley, and H. Liu, “Misinformation in social media: Definition, manipulation, and detection,” ACM SIGKDD Explorations Newsletter, vol. 21, no. 2, pp. 80–90, 2019, https://doi.org/10.1145/3373464.3373475.
[10] F. DeStefano and R. T. Chen, “Negative association between MMR and autism,” The Lancet, vol. 353, no. 9169, pp. 1987–1988, 1999.
[11] E. Denniss and R. Lindberg, “Social media and the spread of misinformation: Infectious and a threat to public health,” Health Promotion International, vol. 40, no. 2, 2025, https://doi.org/10.1093/heapro/daaf023.
[12] C. Olvera, G. T. Stebbins, C. G. Goetz, and K. Kompoliti, “TikTok tics: A pandemic within a pandemic,” Movement Disorders Clinical Practice, vol. 8, no. 8, pp. 1200–1205, 2021, https://doi.org/10.1002/mdc3.13316.
[13] R. E. Bartholomew and S. Wessely, “Protean nature of mass sociogenic illness,” The British Journal of Psychiatry, vol. 180, no. 4, pp. 300–306, 2002, https://doi.org/10.1192/bjp.180.4.300.
[14] J. Frey, K. J. Black, and I. A. Malaty, “TikTok Tourette’s,” Psychology Research and Behavior Management, pp. 3575–3585, 2022, https://doi.org/10.2147/PRBM.S359977.
[15] A. Giedinghagen, “The tic in TikTok,” Clinical Child Psychology and Psychiatry, vol. 28, no. 1, pp. 270–278, 2023, https://doi.org/10.1177/13591045221098522.
[16] K. R. Müller-Vahl et al., “Stop that! It’s not Tourette’s,” Brain, vol. 145, no. 2, pp. 476–480, 2022, https://doi.org/10.1093/brain/awab316.
[17] Eurostat, “EU citizens: Over half seek health information online,” Apr. 6, 2022. [Online]. Available: https://ec.europa.eu/eurostat/en/web/products-eurostat-news/-/edn-202204...
[18] T. Hughes et al., “Medically unexplained symptoms in children,” Behavioural and Cognitive Psychotherapy, vol. 49, pp. 91–103, 2020, https://doi.org/10.1017/S1352465820000752.
[19] X. Jia, Y. Pang, and L. S. Liu, “Online health information seeking behavior,” Healthcare, vol. 9, no. 12, 2021.
[20] M. Faraon et al., “Fake news and aggregated credibility,” International Journal of Ambient Computing and Intelligence, vol. 11, no. 4, pp. 93–117, 2020, https://doi.org/10.4018/IJACI.20201001.oa1.
[21] R. W. White and E. Horvitz, “Cyberchondria,” ACM Transactions on Information Systems, vol. 27, no. 4, 2009, https://doi.org/10.1145/1629096.1629101.
[22] S. M. Jungmann, S. Brand, J. Kolb, and M. Witthöft, “Do Dr. Google and health apps have (comparable) side effects? An experimental study,” Clinical Psychological Science, vol. 8, no. 2, pp. 306–317, 2020, https://doi.org/10.1177/2167702619894904.
[23] Koninklijke Nederlandsche Maatschappij tot bevordering der Geneeskunst, “Medische desinformatie: een veelvoorkomend fenomeen in de spreekkamer” [Medical disinformation: a common phenomenon in the consultation room], May 20, 2024. [Online]. Available: https://www.knmg.nl/actueel/nieuws/nieuwsbericht/medische-desinformatie-...
[24] S. Griffiths et al., “Does TikTok contribute to eating disorders? A comparison of the TikTok algorithms belonging to individuals with eating disorders versus healthy controls,” Body Image, vol. 51, p. 101807, 2024, https://doi.org/10.1016/j.bodyim.2024.101807.
[25] Trimbos-instituut, “Hoeveel tijd brengen jongeren door achter een scherm? Wat is het sociale mediagebruik van kinderen? Alle feiten en cijfers over schermgebruik op een rij” [How much time do young people spend behind a screen? What is children’s social media use? All facts and figures on screen use at a glance], 2020. [Online]. Available: https://www.trimbos.nl/kennis/digitale-media-gokken/expertisecentrum-dig....
[26] N. M. Brashier and D. L. Schacter, “Aging in an era of fake news,” Current Directions in Psychological Science, vol. 29, no. 3, pp. 316–323, 2020, https://doi.org/10.1177/0963721420915872.
[27] A. Guess, J. Nagler, and J. Tucker, “Less than you think: Prevalence and predictors of fake news dissemination on Facebook,” Science Advances, vol. 5, no. 1, eaau4586, 2019, https://doi.org/10.1126/sciadv.aau4586.
[28] W. Pan, D. Liu, and J. Fang, “An examination of factors contributing to the acceptance of online health misinformation,” Frontiers in Psychology, vol. 12, p. 630268, 2021, https://doi.org/10.3389/fpsyg.2021.630268.
[29] J. Roozenbeek et al., “Susceptibility to misinformation about COVID-19 around the world,” Royal Society Open Science, vol. 7, no. 10, 201199, 2020, https://doi.org/10.1098/rsos.201199.
[30] C. Wolverton and D. Stevens, “The impact of personality in recognizing disinformation,” Online Information Review, vol. 44, no. 1, pp. 181–191, 2020, https://doi.org/10.1108/OIR-04-2019-0115.
[31] D. P. Calvillo, A. León, and A. M. Rutchick, “Personality and misinformation,” Current Opinion in Psychology, vol. 55, p. 101752, 2024, https://doi.org/10.1016/j.copsyc.2023.101752.
[32] A. Taurino et al., “To believe or not to believe: Personality, cognitive, and emotional factors involving fake news perceived accuracy,” Applied Cognitive Psychology, vol. 37, no. 6, pp. 1444–1454, 2023, https://doi.org/10.1002/acp.4136.
[33] A. Aluja, O. García, and L. F. García, “Relationships among extraversion, openness to experience, and sensation seeking,” Personality and Individual Differences, vol. 35, no. 3, pp. 671–680, 2003, https://doi.org/10.1016/S0191-8869(02)00244-1.
[34] C. Huang, “Social network site use and Big Five personality traits: A meta-analysis,” Computers in Human Behavior, vol. 97, pp. 280–290, 2019, https://doi.org/10.1016/j.chb.2019.03.009.
[35] L. Sharma, K. E. Markon, and L. A. Clark, “Toward a theory of distinct types of ‘impulsive’ behaviors: A meta-analysis of self-report and behavioral measures,” Psychological Bulletin, vol. 140, no. 2, pp. 374–408, 2014, https://doi.org/10.1037/a0034418.
[36] M. Aldinger et al., “Neuroticism developmental courses – implications for depression, anxiety and everyday emotional experience; a prospective study from adolescence to young adulthood,” BMC Psychiatry, vol. 14, p. 1, 2014, https://doi.org/10.1186/s12888-014-0210-2.
[37] J. Roozenbeek and S. van der Linden, “How to combat health misinformation: A psychological approach,” American Journal of Health Promotion, vol. 36, no. 3, pp. 569–575, 2022, https://doi.org/10.1177/08901171211070958.
[38] N. Walter and S. T. Murphy, “How to unring the bell: A meta-analytic approach to correction of misinformation,” Communication Monographs, vol. 85, no. 3, pp. 423–441, 2018, https://doi.org/10.1080/03637751.2018.1443483.
[39] K. Lerman, M. D. Chu, C. Bickham, L. Luceri, and E. Ferrara, “Safe spaces or toxic places? Content moderation and social dynamics of online eating disorder communities,” arXiv preprint arXiv:2412.15721, 2024.
[40] W. J. McGuire and D. Papageorgis, “The relative efficacy of various types of prior belief-defense in producing immunity against persuasion,” The Journal of Abnormal and Social Psychology, vol. 62, no. 2, p. 327, 1961.
[41] R. Maertens et al., “Psychological booster shots targeting memory increase long-term resistance against misinformation,” Nature Communications, vol. 16, no. 1, p. 2062, 2025, https://doi.org/10.1038/s41467-025-57205-x.
[42] C. Lu, B. Hu, Q. Li, C. Bi, and X. D. Ju, “Psychological inoculation for credibility assessment, sharing intention, and discernment of misinformation: Systematic review and meta-analysis,” Journal of Medical Internet Research, vol. 25, e49255, 2023, https://doi.org/10.2196/49255.
[43] A. Simchon, T. Zipori, L. Teitelbaum, S. Lewandowsky, and S. van der Linden, “A signal detection theory meta-analysis of psychological inoculation against misinformation,” Current Opinion in Psychology, 2025, art. no. 102194, https://doi.org/10.1016/j.copsyc.2025.102194.
[44] Department of Infrastructure, Transport, Regional Development, Communications and the Arts, “Online Safety Amendment (Social Media Minimum Age) Bill 2024 – Fact sheet,” Dec. 4, 2024. [Online]. Available: https://www.infrastructure.gov.au/department/media/publications/online-s....
[45] L. Verhaeghe and R. Arnoudt, “Het is zoals bij sigaretten: als de overheid er zich niet mee bemoeit, verandert er weinig: experts vragen strenger smartphonebeleid” [It is like cigarettes: if the government does not get involved, little changes: experts call for stricter smartphone policy], VRT nws, May 6, 2025. [Online]. Available: https://www.vrt.be/vrtnws/nl/2025/05/05/wetenschappers-vragen-strenge-re....
Figure Sources
Figure 1: https://www.pexels.com/nl-nl/foto/apparaatje-appel-apple-fruit-48603/
Figure 2: Generated by ChatGPT
Figure 3: https://www.pexels.com/nl-nl/foto/licht-fel-luchtig-handen-11354240/