When fake news concerns abound, this seems like a timely blog to share. While I don’t get into fake news specifically, here are some other issues to consider when social media drives health information seeking. This is part of my Social Media & Public Health series and was completed as part of my written general exams for my dissertation research. Citations may be reviewed here.
Brace yourself, because I have something shocking to share. Are you ready? Braced? Okay. Here goes.
Not everything you read online is fact.
Whoa, right? Social media, in particular, is a haven for misinformation. The current political climate certainly drives this point home. But in the context of public health, it turns out that this may, or may not, actually be a problem.
While much of the content available online and over social media lacks accuracy, this may not pose an immediate threat to the individual user (Cole et al., 2016). However, misinformation over social media may still play a negative role in the context of public health. A study evaluating the impact of misinformation on Twitter found that users exposed to tweets expressing negative opinions about HPV vaccines were more likely to subsequently post negative opinions themselves. While that study didn’t go on to examine the immunization records of these users, the content of such posts suggests that negative attitudes may translate into action, or in this case a failure to adhere to recommended vaccination schedules, posing potential harm (Dunn et al., 2015).
This problem is compounded further by what Quattrociocchi et al. (2016) referred to as “Echo Chambers” on social media. The concept is based on the idea that social media users tend to promote their favorite narratives, which enables them to process information from the Internet selectively, in an unbalanced, biased fashion, forming a self-serving image of their own health (Sassenberg et al., 2016). While this may aid users in coping with a challenging health status, it also impedes informed decision making to a certain degree.
Within these “Echo Chambers,” users from different communities rarely interact, gravitating instead toward like-minded friends who promote preferred narratives (Quattrociocchi et al., 2016). A look at my own social media page illustrates this point nicely. Over the last few weeks, my Facebook feed has evolved into a fairly one-sided interpretation of the recent election. This slant reflects both the preferences I have exhibited on my personal social media page and the preferences of those I interact with. The equation is solidified further with the help of online behavioral advertising (OBA).
OBA allows companies to launch highly specialized, targeted advertising campaigns using information collected about Internet users and stored by third parties. This information includes what the user posts about, including keywords and topics, what they have searched for, and other demographic information. Companies that purchase the rights to this information through advertising campaigns can select who they want to see information about their service or product. While this ensures they spend their advertising budgets wisely, it also contributes to the “echo chambers”: not only do we incidentally curate the content we are exposed to over social media to better match our own interests and preferences, but outside parties are paying to make sure we see their information, because the data says we’ll like that too. The end result is that we see the health information most likely to align with our preferences and beliefs, not necessarily our actual health needs (Sassenberg et al., 2016). This makes informed decision making a difficult aspiration.
Currently, there is a more pressing concern when considering the harms that may result when social media drives health information seeking. While our own biases certainly shape which health information we notice and interact with over social media, censorship could have much wider implications for what types of information we have access to at all. A recent New York Times article revealed that Facebook’s chief executive, Mark Zuckerberg, has struck a deal with China’s leaders that would permit Facebook, a previously banned social media platform, into the country (Isaac, 2016). The catch is that Facebook must create a new feature that prevents certain types of content from appearing in feeds in China. If this censorship feature were adopted in other regions, a tool once touted for its ability to improve access to information would be stifled, negatively impacting the public’s ability to obtain health information from a preferred source.
On a similar note, President-elect Trump has raised concerns about the future of the free exchange of information over social media by appointing two anti-net-neutrality advocates to oversee the Federal Communications Commission (FCC) transition team (Pressman, 2016). Net neutrality is the principle that Internet service providers should keep all content accessible, regardless of its source. Under the Obama administration, the FCC focused on protecting net neutrality, but it does not appear the Trump administration will take the same safeguards. The fall of net neutrality would mean that Internet service providers could block or slow the transmission of content as they saw fit, requiring people to pay more for certain content in certain places (Pressman, 2016). While that’s annoying if you’re talking about Netflix, if the topic is health information, the issue evolves from an annoyance into a hurdle that may contribute to health disparities. This is particularly the case given the success social media has had in reaching demographics that are typically difficult to reach through traditional health communication efforts, but who do seem to respond to health communication efforts over social media.