Distinguishing factual information from fake news has become challenging, since users are constantly overloaded with information from the web. This would not be a significant issue if it did not affect individual psychology and the understanding of politics (Pennycook and Rand 388). Studies show that people are more likely to believe statements that align with their political beliefs, which places them at risk of becoming victims of fake news (Pennycook and Rand 391). False data can easily trap those who are manipulated online or are overconfident about their knowledge (Pennycook and Rand 392). Since human psychology plays an essential role in discerning truth from lies, this paper reviews Gaozhao's study of how people judge the accuracy of information, published in Government Information Quarterly in 2021. The article sought to measure media consumers' ability to identify fake information on the basis of distinctive tags, because an inaccurate perception of a situation may result in harmful decisions and even violence (Gaozhao 2). The author concludes that implementing fact-checking flags may prevent individuals from believing false news and keep them from changing their viewpoints about a particular problem on that basis.
The Quality of the Abstract and the Author’s Credentials
Since the abstract is a brief overview of the entire research article, it should incorporate the essential information about the study. The title, "Flagging Fake News on Social Media: An Experimental Study of Media Consumers' Identification of Fake News," likewise gives a general idea of the research objectives. The abstract includes background on the psychology of decision-making in rating the quality of online information, along with the study's methodology, results, and the researcher's conclusion. The author, Dongfang Gaozhao, is a Ph.D. student at Florida State University whose research focuses on citizen engagement, digital governance, social equity, and performance management, which qualifies him to conduct this study. Moreover, the paper was recently published in an international peer-reviewed journal with a high impact factor; thus, it can be considered credible.
Hypotheses of the Study
The research had two main focus areas: fact-checking flags versus fake news, and crowdsourced versus professional fact-checkers. The first hypothesis was that participants who saw accurate flags would have a higher chance of identifying the authenticity of information (Gaozhao 5), whereas those presented with inaccurate tags would be less likely to determine whether the news was false (Gaozhao 6). The second hypothesis stated that people in the treatment groups, who were presented with fact-checking flags, would be more certain about the veracity of the data than the control group (Gaozhao 6). The third hypothesis was that participants would identify more items correctly when presented with flags from crowdsourcing rather than from professional fact-checking websites (Gaozhao 7). However, the author suggested that the reverse might also hold: individuals would correctly recognize fewer fake and accurate items if the crowdsourced tags were incorrect.
Study Methodology
The study was conducted as a preregistered survey on Amazon Mechanical Turk to identify the influence of fact-checking flags on users (Gaozhao 8). Participants were randomly assigned to one of three groups: two treatments and one control. The control group, with 249 participants, received the news without any flags, while the two experimental groups, comprising 251 and 217 people, saw information tagged by a crowdsourcing or a professional fact-checking source, respectively (Gaozhao 9). To assess the identification of both accurate and inaccurate data, the researcher created two conditions, first giving correct tags and then false flags, generating thirty items for the experiment. The final sample comprised 717 individuals: 322 were male, 387 were female, one was transgender, and seven did not disclose their sex (Gaozhao 8). Each participant had 60 seconds to judge the authenticity of each item, which prevented them from searching for answers online.
Presentation of Results
The author evaluated the results in two stages. First, a chi-squared test established that there were no statistically significant differences among the groups in characteristics such as age, gender, race, education, and prior political affiliation (Gaozhao 10). The second stage identified the participants' "correctness and unsureness in means by groups and flag accuracy" (Gaozhao 10). These two variables were measured, evaluated using ANOVA, and presented as charts. According to the findings, the first, second, and third hypotheses were confirmed; however, the second part of the third hypothesis was not supported. Moreover, the author performed a multilevel regression analysis, which demonstrated that age, gender, race, and education were not correlated with the participants' ability to identify news correctly. The same analysis showed that people's political knowledge and inclinations had no significant impact on their decisions about the authenticity of information.
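The two-stage logic described above can be sketched with simulated data. This is not the author's code: the gender counts and correctness-score distributions below are illustrative assumptions, with only the group sizes (249, 251, 217) mirroring the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stage 1: randomization balance check -- e.g., hypothetical gender counts
# by group. A non-significant chi-squared result suggests the covariate is
# balanced across the three groups.
gender_by_group = np.array([
    [120, 129],  # control: male, female
    [118, 133],  # crowdsourced-flag treatment
    [101, 116],  # professional-flag treatment
])
chi2, p_balance, dof, _ = stats.chi2_contingency(gender_by_group)

# Stage 2: one-way ANOVA comparing mean correctness (items correct out of 30)
# across groups; the simulated means assume flags improve identification.
control = rng.normal(18, 4, 249)        # no flags
crowd = rng.normal(21, 4, 251)          # crowdsourced flags
professional = rng.normal(21, 4, 217)   # professional flags
f_stat, p_anova = stats.f_oneway(control, crowd, professional)

print(f"balance check: chi2 = {chi2:.2f}, p = {p_balance:.3f}")
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")
```

With these assumed inputs, the balance check is non-significant (randomization "worked") while the ANOVA detects the between-group difference in correctness, mirroring the structure, though not the numbers, of the reported analysis.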
Explanation of Conclusions Made by the Researcher
The article's author concluded that fact-checking flags help people discern authentic information from false news. Indeed, the results confirmed the original hypotheses: marking data as checked by a reliable source helped the participants correctly differentiate between genuine and fake news more often (Gaozhao 16). At the same time, the researcher admits that fake accounts that self-identify as professional services may mislead online users with their tags, producing an incorrect perception of the situation. Therefore, since strict censorship would be inconceivable in the democratic society of the United States, the author recommended implementing fact-checking flags to increase citizens' awareness of incorrect information on the web. This measure may benefit the many people who are reluctant to assess data critically. Indeed, the author noted that previous neuroscience studies have shown that the human brain tends to accept information that aligns with one's existing knowledge and values.
A Personal Perspective on the Results
Having reviewed this study's results, I consider the findings reliable, not only because they confirm psychological research on human behavior but also because the methodology and evaluation were performed correctly. Randomization and regression analysis allowed the author to control for confounding factors. Although the statistical calculations showed that the results were significant, I think the sample should have been larger to increase the power of the study. Nevertheless, introducing visual cues for people to rate the authenticity of information is a reasonable approach, because it induces critical thinking and thereby helps prevent the spread of fake news.
Works Cited
Gaozhao, Dongfang. "Flagging Fake News on Social Media: An Experimental Study of Media Consumers' Identification of Fake News." Government Information Quarterly, vol. 38, no. 3, 2021, pp. 1–24.
Pennycook, Gordon, and David G. Rand. "The Psychology of Fake News." Trends in Cognitive Sciences, vol. 25, no. 5, 2021, pp. 388–402.