Seeing Through AI's Lens: Enhancing Human Skepticism Towards LLM-Generated Fake News
Navid Ayoobi, Sadat Shahriar, Arjun Mukherjee·June 20, 2024
Summary
The paper investigates the growing concern of large language models (LLMs) being misused to create fake news. It introduces the Entropy-Shift Authorship Signature (ESAS) metric, an information-theoretic method that ranks terms by how well they distinguish human-written from LLM-generated content. ESAS, combined with TF-IDF features and logistic regression, achieves high accuracy in detecting AI-generated articles, particularly those produced by models such as ChatGPT, Llama2, and Mistral. The study also contributes a novel dataset of 39,000 news articles, altered with LLMs, for fake news detection research. By surfacing its top-ranked terms and entities, ESAS helps readers assess the authenticity of an article. The research emphasizes the need to strengthen human skepticism and to develop better detection systems to combat the spread of LLM-generated fake news. Future work includes refining the ESAS metric and addressing adversarial manipulation of LLM-generated content.
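The paper's exact ESAS formula is not reproduced in this summary, but the general idea of ranking terms by how differently they are used in human versus LLM corpora can be sketched with a simple log-odds score over relative term frequencies. The corpora, the smoothing constant `eps`, and the scoring function below are all illustrative assumptions, not the authors' method:

```python
import math
from collections import Counter

def term_scores(human_docs, llm_docs, eps=1e-9):
    """Rank terms by how strongly their usage differs between a
    human-written and an LLM-generated corpus. This log-odds score
    is a hypothetical stand-in for the paper's ESAS metric."""
    h = Counter(w for d in human_docs for w in d.lower().split())
    l = Counter(w for d in llm_docs for w in d.lower().split())
    h_total, l_total = sum(h.values()), sum(l.values())
    scores = {}
    for term in set(h) | set(l):
        p_h = h[term] / h_total + eps  # smoothed relative frequency (human)
        p_l = l[term] / l_total + eps  # smoothed relative frequency (LLM)
        # Positive score: term favors human text; negative: favors LLM text.
        scores[term] = math.log(p_h / p_l)
    # Most discriminative terms (largest absolute score) first.
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Toy corpora for illustration only.
human = ["the senator said on tuesday that the vote failed"]
llm = ["it is important to note that the vote delves into key issues"]
top_terms = term_scores(human, llm)[:5]
```

Terms appearing almost exclusively in one corpus receive large-magnitude scores, so inspecting the top of this ranking is one way a reader could be pointed at stylistic tells of machine-generated text, in the spirit of the paper's use of top-ranked terms and entities.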