LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models
Aida Kostikova, Zhipin Wang, Deidamea Bajri, Ole Pütz, Benjamin Paaßen, Steffen Eger · May 25, 2025
Summary
In a survey of 14,648 large language model (LLM) papers published between 2022 and 2024, reasoning, generalization, hallucination, bias, and security emerged as the most critical concerns. Research on LLM limitations concentrated in particular on safety, controllability, and multimodality. The study contributes a dataset of annotated abstracts and a validated annotation methodology, which together reveal shifts in research emphasis toward summarization, model evaluation, and issues such as factuality, reasoning, and gender bias. Key areas for improvement include factuality, alignment with human values, visual instruction tuning, mathematical reasoning, and security risks, with trustworthiness, reasoning, generalization, and security standing out as strong cross-cutting trends across LLM research categories.
Introduction
Background
Overview of large language models
Importance of LLM research in AI
Objective
To analyze the critical concerns and research trends in LLM studies from 2022 to 2024
Critical Concerns in LLM Research
Reasoning
Challenges in logical and creative reasoning
Generalization
Limitations in applying learned knowledge to new situations
Hallucination
Issues with generating false or misleading information
Bias
Detection and mitigation of algorithmic biases
Security
Risks and vulnerabilities in LLM applications
Research Focus Areas
LLM Limitations
Safety and controllability
Multimodality and cross-modal interactions
Methodology
Dataset of annotated abstracts
Validated research approach
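The data-driven approach above can be illustrated with a minimal sketch. The keyword lists and function names below are hypothetical, chosen for illustration only; they are not the survey's actual annotation pipeline, which the paper validates separately. The sketch tags abstracts with limitation categories by keyword matching and counts how often each category appears per year:

```python
from collections import Counter

# Hypothetical keyword map (illustrative, not the survey's taxonomy).
LIMITATION_KEYWORDS = {
    "hallucination": ["hallucination", "factual error"],
    "bias": ["bias", "fairness", "stereotype"],
    "reasoning": ["reasoning", "logic"],
    "security": ["jailbreak", "adversarial", "security"],
}

def tag_abstract(abstract: str) -> set[str]:
    """Return the set of limitation labels whose keywords appear in the abstract."""
    text = abstract.lower()
    return {
        label
        for label, keywords in LIMITATION_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    }

def count_trends(papers: list[dict]) -> Counter:
    """Count (year, label) pairs over records of the form {'year': int, 'abstract': str}."""
    counts = Counter()
    for paper in papers:
        for label in tag_abstract(paper["abstract"]):
            counts[(paper["year"], label)] += 1
    return counts
```

Aggregating these counts per year is what makes trend statements like "security concerns rose sharply" quantifiable rather than anecdotal; a real pipeline would replace the keyword matcher with the validated (e.g. model-assisted) annotation step.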
AI Emphasis and Trends
Research Areas
Summarization techniques
Model evaluation frameworks
Addressing factuality, reasoning, and gender bias
Key Improvements
Enhancing factuality in LLM outputs
Aligning models with ethical and societal norms
Visual instruction tuning for better multimodal understanding
Mathematical reasoning capabilities
Security measures against potential threats
Trends in LLM Research
Trustworthiness
Importance of reliable and trustworthy models
Reasoning and Generalization
Advances in improving model understanding and adaptability
Security
Development of robust security protocols for LLMs
LLM Categories and Specific Trends
Factuality
Enhancements in factual accuracy
Alignment
Ensuring models align with human values
Visual Instruction Tuning
Improving model performance with visual inputs
Mathematical Reasoning
Progress in handling complex mathematical tasks
Security Risks
Mitigating risks associated with LLM deployment
Conclusion
Summary of Findings
Future Directions
Emerging research challenges
Potential areas for innovation
Subject areas: Computation and Language; Machine Learning; Artificial Intelligence