LLM Augmentations to support Analytical Reasoning over Multiple Documents
Raquib Bin Yousuf, Nicholas Defelice, Mandar Sharma, Shengzhe Xu, Naren Ramakrishnan · November 25, 2024
Summary
The study investigates the use of large language models (LLMs) for intelligence analysis, where analysts must connect seemingly unrelated entities and events scattered across many documents. Preliminary experiments on several datasets, varying factors such as temperature and context size within a three-step analysis process, show that LLMs struggle to connect the dots: they summarize reports superficially and cannot process large document collections due to context-window constraints. To address these limitations, the authors propose an augmented LLM framework for intelligence tasks, organized around evidence marshaling, orchestration, and narrative generation. Its core component is a dynamic evidence tree (DET), a memory module that tracks multiple evolving investigation threads and orchestrates which evidence enters the LLM's context as reasoning progresses, improving LLM performance on analytical reasoning over multiple documents.