
How Bioethics Can Improve Responses to Social Media Health Disinformation

By: Gianna Strand

Gianna R. Strand, MS is a doctoral student in research ethics and clinical ethics consultation at Loyola University Chicago. She received a master’s degree in clinical bioethics from Columbia University in the City of New York. Her clinical work focuses on experimental therapeutics in early-phase oncology and transplantation research.


The rapid rise and continued evolution of social media has created a uniquely modern digital vector for the spread of health information. Without the editorial oversight or integrity standards of traditional print and digital media, social media has proven culpable in perpetuating dangerous public health threats, notably anti-vaccination (“anti-vaxx”) and COVID-19 disinformation campaigns and a recent reinvigoration of dangerous diet culture trends. Efforts to curb the viral spread of social media disinformation have been necessary and supported by ethical principlism, but ultimately insufficient in examining the ableist narratives and value-laden judgments these online movements propagate. Bioethics can extend existing efforts to address health disinformation beyond a myopic focus on information provision by calling attention to the inter- and intra-personal moral claims that underlie messaging and motivate health behaviors.

Social Media and Healthcare Decision-Making Behaviors

Social media can directly impact health when content that is intentionally or inadvertently incorrect affects the behaviors or decisions of users. Global vaccine hesitancy, spread and reinforced through online anti-vaxx content, leads to 1.5 million avoidable deaths annually and has been labeled the eighth-largest threat to global health, ranking above HIV and dengue and just behind Ebola. (1)

Misinformation occurs when incorrect information is spread regardless of intent. Disinformation occurs when false information is deliberately spread to promote a specific viewpoint or to manipulate a narrative. (2) Each of these processes undermines confidence and reduces trust in the social-medical enterprise. Disinformation can proliferate across social media when platforms and creators, seeking to increase user engagement, repetitively reinforce specific viewpoints. Social media is capable of amplifying health content to levels not previously possible through standard media outlets, particularly in the large and growing spaces of unsubstantiated nutritional claims and the reinvigoration of diet culture trends. For example, hashtags related to the ketogenic diet, a high-fat, low-carbohydrate meal plan originally designed for epilepsy patients and now touted as a quick weight loss method, have amassed more than 10.3 billion TikTok views. (3,4)

The high rates of social media use in healthcare decision-making can be attributed in part to how platforms appeal to a variety of decision-making models. Social media appeals to the more individualistic decision-making models often practiced in Western bioethical principlism by providing the space for independent information provision with seemingly minimal outside interference. Other social media users may favor shared or family-based models of decision-making. These practices call for individuals to uphold ethical responsibilities towards familial or communal well-being and for subsequent health choices to respect these external relationships rather than prioritize individual interests. (5) A post or tweet asking for support regarding health topics can garner instant responses, augmenting one’s sense of community belonging and affirming that a decision is approved by others in a shared value network.

To make an autonomous decision, however, one must have access to an appropriate scope of information and must weigh that information insightfully against one’s own values without coercion. This is often incommensurate with the design and interests of social media.

Platforms are typically driven to increase user engagement rather than to safeguard user well-being or content reputability. This incentivizes algorithms to funnel users towards high-volume, interactive creators whose content generates more views, more engagement, and ultimately more revenue in the form of advertising or subscriptions to a platform. Disinformation can flourish when engagement goals reward persuasive speakers with tailored messaging and visually attractive formats over accuracy. Creators also stand to profit from users engaging with their accounts. Producing short-format content in line with a personal brand or image makes one unique in the crowded social media world, but the resulting online space comes to resemble less a diverse open forum than a positive-feedback echo chamber that repeatedly reflects back insular, often polarizing views.

Previous Attempts to Curb Disinformation

This is not to say that large social media platforms have ignored their role in perpetuating health disinformation, but rather that existing responses have been insufficient. Many platforms took an initial hard-line approach to disinformation by limiting or outright banning specific content. In 2019, YouTube banned advertising on anti-vaxx channels, citing conflict with its policy prohibiting the monetization of videos containing “dangerous and harmful” material. (6) This prevented videos from generating ad revenue but did not actually remove harmful content. That same year, Amazon pulled books from its online marketplace which proffered unfounded autism “cures” through dangerous treatments like electroconvulsive therapy and bathing in bleach-like chemical substances. (7) This response was more in line with the principle of non-maleficence, as it identified and actively prevented access to content which posed a direct threat to the body, but it raised justice concerns over which content is restricted and by whom. Any policy governing the dissemination of health information should be formulated in coordination with trusted experts in that field, not by tech platforms alone. To uphold justice, restrictions must be transparent, politically neutral, evidence-based, and uniformly applied.

As online health disinformation continued to proliferate despite platform-led restrictions, policymakers turned to targeting creators themselves. California approved a bill that makes spreading false or misleading medical information to patients subject to medical license sanctions, consistent with the American Medical Association’s ethics policy against physicians who spread disinformation. (8, 9) Such policies remain insufficient to address the wide scope of health disinformation, as they apply neither to anyone located outside the United States nor to the large number of health influencers who are not licensed clinicians. In a meta-review of 100 popular nutrition advice books, nearly one third of authors identified as personal trainers, actors/TV personalities, bloggers, journalists, entrepreneurs, or – most confusingly to consumers – nutritionists. (10) Although the title of nutritionist evokes a sense of professional credentialing, it can apply to anyone who offers general nutritional advice, without requiring any standardized education or professional training. (11) None of these popular health authors can be held accountable to the guidelines and ethical codes of professional societies to which they do not belong.

Internalization of Health Messaging

This focus on the creation and dissemination of content has mostly ignored the underlying ethics of consuming health information. Healthcare decisions are made not solely on objective data or evidence, but through beliefs and motivations that are important to the individual. (12) Historically, the motivations behind social health behaviors have been rooted not solely in concern about a disease or disorder itself, but rather in fear that poor health would label one as part of a socially marginalized class. (13)

Ethical health communications therefore require not only correction of objectively false information, but attention to how the viewpoints evoked by disinformation reinforce familiar but stigmatizing socio-medical narratives, including connotations of ableism. The narrative power of anti-vaxx disinformation content is deeply rooted in fears that neurodivergence is a worse outcome than any sequelae of a viral or bacterial illness. Factual arguments about whether vaccines cause autism (data resoundingly affirm they do not) ignore the contextual nuance motivating vaccination behaviors. It is not only about what is right, but about what is important: the social stigma surrounding disability, no matter how improbable a risk, is powerful enough to eclipse truth and reason. (14) Efforts to dismantle the social marginalization of disabled lives can seek to reclaim the narrative from anti-vaxx content that exploits fear over fact to influence individual health behaviors.

These often unaddressed connotations of ableism are also present in nutrition disinformation content. When good health comes to signify virtue, the public is encouraged, by contrast, to view those in poor health as deviant or unworthy. (15) This leads to marginalization, as it imbues diets with implicit character judgments that certain bodies, and the foods that fuel them, are more moral than others. Many popular contemporary social media nutrition accounts promote viewpoints that are subtly yet powerfully steeped in these marginalizing forces of food politics. The pervasive 1200-calorie-per-day ethos of diet culture – a hashtag with over 165,000 posts on Instagram and 56 million views on TikTok – originated in a guidebook that labeled fatness as unpatriotic. (16) Content endorsing this diet ethos is both factually dangerous, as it promotes inadequately low food intake for most adults, and contextually problematic, as it uses discriminatory fat phobia to advance a shared identity.

The connotations conveyed through the growing trend of “clean eating” – with over 48 million posts on Instagram promoting foods as close to their natural state as possible – also exalt classism and ableism. Clean eating commends individuals who prepare labor-intensive meals from whole foods without assistance for seeking a more healthful, and thus more virtuous, lifestyle. This dichotomization belies the factual reality that packaged foods and foods prepared with assistance can be nutritious, and it ignores how individual food choices can be constrained by personal tastes, affordability, access, physical ability, and cultural traditions. The fear that one’s body will be moralized through food choices as good or bad, clean or dirty, can drive health behaviors irrespective of evidence. It is essential to address and correct the errant assumptions communicated by health disinformation trends so as not to reinforce existing disparate and marginalizing health narratives. (15)

Previous attempts by large social media platforms to curb health disinformation have represented a necessary step, supported by bioethical rationale, in promoting public health. Proper moderation of false claims can satisfy principled critiques of justice, reinforce respect for persons, and augment non-maleficence by protecting vulnerable users from the promotion of dangerous health behaviors. What has been missing thus far, however, is the inclusion of a robust ethics perspective to address the stigmatizing narratives that disinformation perpetuates. The field of bioethics can help health professionals and policymakers recognize the presence of these value-laden claims and offer ways to navigate the dilemmas between evidence and individually held ideals. (17) The current approach of moderating information provision can be augmented by shifting the dialogue to simultaneously contend with content that conveys marginalizing viewpoints. Individuals are unlikely to change their health behaviors or beliefs without first feeling understood. Difficult conversations about health behaviors require a response not only to what is true, but to what is important to individuals and to society.


(1) Ten Threats to Global Health in 2019: World Health Organization; 2019. Available from:

(2) Misinformation vs. Disinformation: Get Informed On The Difference: August 15, 2022. Available from:

(3) These are the Most Popular Diet Trends on TikTok in 2022: YorkTest; 2022 [updated February 18, 2022.] Available from:

(4) O'Neill B, Raggi P. The ketogenic diet: Pros and cons. Atherosclerosis. 2020;292:119-26.

(5) Nortje N, Jones-Bonofiglio K, Sotomayor CR. Exploring values among three cultures from a global bioethics perspective, Global Bioethics. 2021; 32(1): 1-14.

(6) Porter T. YouTube Bans Adverts on Anti-Vaccination Video Channels. NewsWeek. February 23, 2019.

(7) Hsu T. Amazon Pulls 2 Books That Promote Unscientific Autism ‘Cures’. The New York Times. March 13, 2019.

(8) Myers SL. California Approves Bill to Punish Doctors Who Spread False Information. The New York Times. August 29, 2022.

(9) AMA adopts new policy aimed at addressing public health disinformation [press release]. American Medical Association, June 13, 2022.

(10) Marton RM, Wang X, Barabási A-L, Ioannidis JPA. Science, advocacy, and quackery in nutritional books: an analysis of conflicting advice and purported claims of nutritional best-sellers. Palgrave Communications. 2020; 6(43).

(11) Santiago AC. What is a Nutritionist? : VerywellHealth; 2021 [updated February 4, 2021.] Available from:

(12) Stone D, Patton B, Heen S. Difficult Conversations: How To Discuss What Matters Most. 2 ed. New York, New York: Penguin Books; 2010.

(13) Black K. A Healing Homiletic. Nashville, Tennessee: Abingdon Press; 1996.

(14) Waghorn L. Vaccines Don't Cause Autism, But That's Not the Point. The Scientific Parent; March 7, 2016.

(15) Guttman N, Salmon CT. Guilt, fear, stigma and knowledge gaps: ethical issues in public health communication interventions. Bioethics. 2004; 18(6): 531-552.

(16) Peters LH. Diet and Health with Key to the Calories: Reilly & Lee Co.; 1918.

(17) Kass NE. An ethics framework for public health. Am J Public Health. 2001; 91(11): 1776-82.


