LLMs Do Not Think Step-by-step In Implicit Reasoning
Yijiong Yu · November 24, 2024
Summary
The study examines LLMs' implicit reasoning and compares it with explicit chain-of-thought (CoT), revealing that implicit reasoning often draws on memorized experience rather than genuine step-by-step computation, which makes it unstable and susceptible to small changes in problem format. Focusing on multi-step arithmetic problems, the authors find that LLMs rarely calculate intermediate steps during implicit reasoning, instead arriving at answers through something closer to intuition. Using simple problems with known intermediate results, they analyze the models' hidden states and show that the models encode the inputs and the final answer but struggle to represent the intermediate calculations. The models do exhibit implicit 2-hop reasoning in simple scenarios, though not in more complex ones, an ability the authors attribute to the strong abstraction and memorization developed during training. Overall, the research underscores the limitations of current LLMs in complex problem solving and argues that explicit CoT remains necessary for reliable performance on complex tasks.
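The hidden-state analysis described above can be illustrated with a small probing experiment. The sketch below is a minimal, hypothetical version, not the authors' code: it assumes GPT-2 from Hugging Face Transformers as the model, chained-addition problems as the task, and a per-layer logistic-regression probe on the last prompt token as the decoder. All of these choices (model name, problem format, probe type, sample count) are illustrative assumptions.

```python
# Minimal, hypothetical sketch of probing hidden states for an
# intermediate arithmetic result. Assumptions (not from the paper):
# GPT-2, chained-addition prompts, logistic-regression probes.
import random

import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()


def make_problem():
    """A 3-step addition problem; the probe target is the first
    intermediate result (a + b), which explicit CoT would state."""
    a, b, c, d = (random.randint(1, 9) for _ in range(4))
    return f"{a}+{b}+{c}+{d}=", a + b


prompts, labels = zip(*[make_problem() for _ in range(500)])

# Collect the hidden state of the last prompt token at every layer.
feats_per_layer = None
with torch.no_grad():
    for p in prompts:
        out = model(**tok(p, return_tensors="pt"))
        # out.hidden_states: tuple of (1, seq_len, dim), one per layer.
        states = [h[0, -1].numpy() for h in out.hidden_states]
        if feats_per_layer is None:
            feats_per_layer = [[] for _ in states]
        for layer, vec in enumerate(states):
            feats_per_layer[layer].append(vec)

# Fit a linear probe per layer: is the intermediate sum decodable?
for layer, feats in enumerate(feats_per_layer):
    X_tr, X_te, y_tr, y_te = train_test_split(
        feats, labels, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"layer {layer:2d}: probe accuracy = {probe.score(X_te, y_te):.2f}")
```

In this setup, near-chance probe accuracy for the intermediate sum (while the final answer remains decodable or is produced correctly) would be the kind of evidence the summary describes: the model reaches the answer without representing the intermediate steps.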