Unmasking Deception: AI and Its Ability to Recognize Reality
(Image Credit: Content Marketing)
(Image Credit: The Markup)
(Image Credit: Poynter)
December 28, 2024
Maggie Liu
11th Grade
Fountain Valley High School
The online world we live in today is dominated by fake news, misinformation, and AI-written work; we see it everywhere, from media headlines to the music industry. To fight these problems, companies turn to AI, even though AI is often their source. While AI plays a major role in today's society, its ability to tackle fake news and AI-written work may be overestimated. This raises an essential question: can AI be a reliable detector?
AI has become increasingly popular in education. Many schools have begun using text-matching software such as Turnitin and Quetext to detect plagiarism in students' work. These tools were originally designed to discover text similarities and duplicate content, but that narrow ability is often overshadowed by the popular belief that they detect plagiarism itself. Although finding text similarities may seem like detecting plagiarism, the two are not the same, and conflating them creates problems. Students' work is consistently flagged no matter how niche or broad the content and wording may be, and because a flag for similar content is widely assumed to mean blatant plagiarism, the result can be unjust consequences.
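The gap between "similar text" and "plagiarism" can be seen in a minimal sketch using Python's standard-library difflib; the two sentences below are hypothetical examples of students independently using a stock phrase, not real flagged work.

```python
from difflib import SequenceMatcher

# Two hypothetical sentences that independently reuse a standard phrase.
text_a = "The Voting Rights Act of 1965 is often cited as a landmark civil rights law."
text_b = "The Voting Rights Act of 1965 is often cited as one of the most significant civil rights laws."

# SequenceMatcher measures surface similarity between character sequences.
ratio = SequenceMatcher(None, text_a, text_b).ratio()
print(f"similarity: {ratio:.2f}")

# A text-matching tool would flag this overlap as "similar text",
# but shared boilerplate phrasing is not, by itself, evidence of plagiarism.
```

The high score comes entirely from a common phrase; judging whether the overlap is plagiarism still requires human context.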
Accusations of plagiarism are more common, and can be more serious, than we may think. In January 2024, then-Harvard University President and political scientist Claudine Gay was accused of plagiarism in her writings and forced to step down from her position. She had written, "The Voting Rights Act of 1965 is often cited as one of the most significant pieces of civil rights legislation passed in our nation's history." This single line was flagged as similar text, and the consequences that followed were hefty.
However, these tools still have a long way to go before they are reliable detectors. A September 2023 study on the efficacy of AI content-detection tools, published in the International Journal for Educational Integrity, found that detectors such as Writer, GPTZero, and Crossplag made frequent errors when identifying GPT-3.5-generated content. The faults were especially pronounced in GPTZero, which labeled over 70% of GPT-3.5-generated content as "very unlikely AI-generated."
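That kind of miss is a false negative: AI-generated text that the detector declares human. A minimal sketch of how such a rate is computed; the individual verdicts below are invented, and only the headline figure (over 70% missed) comes from the cited study.

```python
# Hypothetical detector verdicts for 100 texts that are all GPT-3.5-generated.
verdicts = (
    ["very unlikely AI-generated"] * 71  # misses: AI text called human
    + ["likely AI-generated"] * 29       # correct detections
)

# Every text here is AI-generated, so each "very unlikely" verdict is a miss.
false_negative_rate = verdicts.count("very unlikely AI-generated") / len(verdicts)
print(f"false-negative rate: {false_negative_rate:.0%}")  # prints "false-negative rate: 71%"
```

A detector that misses at this rate offers little protection, no matter how confident its individual verdicts sound.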
(Image Credit: International Journal for Educational Integrity)
AI's role is not limited to detecting plagiarism, however. It has also been used to detect fake news spreading across the Web. According to political scientists from renowned universities such as Princeton, roughly 1 in 4 Americans visited a fake news site around the 2016 election. With the constant spread of misinformation comes the use of AI to combat it, yet many people doubt AI's ability to detect fake news and misinformation. The reason is that truth is nuanced, complex, and often wrapped in irony or sarcasm, complications that AI may not be able to work through. AI centers on growth and learning: in essence, the process of determining whether news is real or fake begins with consuming data to identify patterns. That process may seem simple, but it is far from it.
To understand why, it is crucial to know what "ground truth" is and how it relates to this process. Ground truth refers to actual, objective reality; in most cases, we do not know it, and human judgment becomes our basis for distinguishing true from false. To put this into perspective, imagine an algorithm trained to act as an umpire for a tennis match using videos of real matches. Its ability to determine whether a hit landed in or out of bounds rests on the umpires' judgment in those videos. This creates room for error, because the algorithm's accuracy is measured not against ground truth but against the umpires.
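The tennis-umpire analogy can be sketched in a few lines of Python; the shots, calls, and toy "model" below are made up purely for illustration.

```python
# Hypothetical training data: each shot's true landing (ground truth)
# and the human umpire's call, which is all the model ever sees.
shots = [
    {"truth": "in",  "umpire": "in"},
    {"truth": "out", "umpire": "out"},
    {"truth": "in",  "umpire": "out"},  # umpire error: the model inherits it
    {"truth": "out", "umpire": "out"},
]

# A toy model that perfectly reproduces its training labels.
def model_call(shot):
    return shot["umpire"]

agree_umpire = sum(model_call(s) == s["umpire"] for s in shots) / len(shots)
agree_truth = sum(model_call(s) == s["truth"] for s in shots) / len(shots)

print(f"accuracy vs. umpire labels: {agree_umpire:.0%}")  # prints "100%"
print(f"accuracy vs. ground truth:  {agree_truth:.0%}")   # prints "75%"
```

The model looks flawless against the labels it was trained on, yet it silently repeats every mistake the human labelers made, which is exactly the gap between human judgment and ground truth.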
Furthermore, because new scenarios arise so often, AI cannot always produce accurate results. In late 2014, a team led by Chengkai Li, a computer scientist at the University of Texas at Arlington, created ClaimBuster, a fact-checking tool that used the same learning process described above. Even so, its results were not fully reliable: out of sixteen faulty claims, it detected twelve, a 75% accuracy.
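The 75% figure is a simple detection rate; a minimal sketch of how such an evaluation is scored (the claim IDs here are invented, and only the counts, twelve of sixteen, come from the reporting above).

```python
def detection_rate(flagged, actual_faulty):
    """Fraction of known-faulty claims that the detector flagged."""
    hits = len(set(flagged) & set(actual_faulty))
    return hits / len(actual_faulty)

# Hypothetical claim IDs standing in for the sixteen faulty claims,
# twelve of which the tool flagged.
actual_faulty = list(range(16))
flagged = list(range(12))

rate = detection_rate(flagged, actual_faulty)
print(f"detection rate: {rate:.0%}")  # prints "detection rate: 75%"
```

Note that this score says nothing about false alarms on true claims, which is why a single accuracy number can overstate a fact-checker's reliability.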
As AI continues to spread across the Internet, the possibilities are vast. Ongoing research lets AI take its first steps toward the challenge ahead: facing the multitude of intricate complexities that lie between truth and falsehood. It may be only a matter of time until AI can outpace the abundance of plagiarism, fake news, and misinformation and become a detector we can rely on.
Reference Sources
Borel, Brooke. "Can AI Solve the Internet's Fake News Problem? A Fact-Checker Investigates." Popular Science, 21 Mar. 2018, www.popsci.com/can-artificial-intelligence-solve-internets-fake-news-problem/.
Dharani, Naila, et al. "Can A.I. Stop Fake News?" The University of Chicago Booth School of Business, 18 Jan. 2023, www.chicagobooth.edu/review/can-ai-stop-fake-news.
Elkhatat, Ahmed M., et al. "Evaluating the Efficacy of AI Content Detection Tools in Differentiating between Human and AI-Generated Text." International Journal for Educational Integrity, vol. 19, no. 1, 1 Sept. 2023, https://doi.org/10.1007/s40979-023-00140-5.
Mathewson, Tara García. "Plagiarism Detection Tools Offer a False Sense of Accuracy." The Markup, 10 Jan. 2024.
Myers, Andrew. "AI-Detectors Biased against Non-Native English Writers." Stanford HAI, Stanford University, 15 May 2023, https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers.