A Survey on Event-driven 3D Reconstruction: Development under Different Categories

Chuanzhi Xu, Haoxian Zhou, Haodong Chen, Vera Chung, Qiang Qu · March 25, 2025

Summary

Event-driven 3D reconstruction with asynchronous event cameras has gained attention for its high temporal resolution and high dynamic range. This survey reviews methods across stereo, monocular, and multimodal systems, categorizing them into geometric, learning-based, and hybrid approaches, and highlights emerging trends such as neural radiance fields and 3D Gaussian splatting alongside key research gaps and future directions. Event cameras detect per-pixel brightness changes, producing streams of events that carry pixel coordinates, timestamps, and polarities. The survey discusses advances in event-based neural radiance fields, deblurring, and efficient rendering, covering works such as "EvDNeRF," "Event3DGS," and "Mitigating motion blur," as well as event-assisted 3D deblur reconstruction, event-based Gaussian splatting, and pose-free Gaussian splatting from a single event camera. It also reviews supporting tools, including an open event camera simulator, methods for converting video to event data, and real-time photometric stereo with event cameras.
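
To make that data model concrete, here is a minimal Python sketch of how an event stream is commonly represented and accumulated into a frame for downstream processing; the names Event and accumulate_frame are illustrative, not from the survey.

from dataclasses import dataclass

import numpy as np


@dataclass
class Event:
    x: int    # pixel column
    y: int    # pixel row
    t: float  # timestamp, typically microseconds
    p: int    # polarity: +1 brightness increase, -1 decrease


def accumulate_frame(events, height, width):
    """Integrate a slice of the event stream into a signed 2D histogram,
    a common preprocessing step before frame-based 3D pipelines."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += e.p
    return frame


# Usage: three synthetic events on a 4x4 sensor.
evts = [Event(1, 2, 10.0, +1), Event(1, 2, 12.5, +1), Event(3, 0, 11.0, -1)]
print(accumulate_frame(evts, 4, 4))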

Introduction
Background
Overview of event cameras and their characteristics (the standard event model is sketched below)
Importance of high temporal resolution and dynamic range in 3D reconstruction
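
As background for the items above, the standard event generation model (stated from general knowledge of event cameras, not quoted from the survey) is that each pixel independently emits an event $e_k = (x_k, y_k, t_k, p_k)$ when its log intensity changes by a contrast threshold $C$ since its last event:

\[
  \Delta L(\mathbf{u}_k, t_k)
    = \log I(\mathbf{u}_k, t_k) - \log I(\mathbf{u}_k, t_k - \Delta t_k)
    = p_k \, C,
  \qquad p_k \in \{+1, -1\},
\]

where $\mathbf{u}_k = (x_k, y_k)$ and $\Delta t_k$ is the time elapsed since the previous event at that pixel. This asynchronous, per-pixel trigger is what yields the microsecond-scale temporal resolution and high dynamic range noted above.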
Objective
To review and categorize methods in event-driven 3D reconstruction
Highlight emerging trends and research gaps
Method
Geometric Approaches
Event-based stereo reconstruction (a toy matching sketch follows this subsection)
Monocular event-based 3D reconstruction
Multimodal event-based 3D reconstruction
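
As a toy illustration of the geometric stereo family above (a simplified sketch under idealized assumptions, not a method from the survey): with rectified cameras, events can be paired across the two views by row, polarity, and timestamp proximity, and depth then follows from disparity. All names and thresholds here are hypothetical.

def match_and_triangulate(left, right, f, baseline, dt_max=200.0):
    """left/right: iterables of (x, y, t, p) events from rectified cameras.
    Greedily pair events on the same row with the same polarity and the
    nearest timestamp, then convert pixel disparity to metric depth."""
    depths = []
    for xl, yl, tl, pl in left:
        # candidates: same row, same polarity, close in time
        cand = [(xr, tr) for xr, yr, tr, pr in right
                if yr == yl and pr == pl and abs(tr - tl) < dt_max]
        if not cand:
            continue
        xr, _ = min(cand, key=lambda c: abs(c[1] - tl))
        disparity = xl - xr
        if disparity > 0:  # point in front of the rig
            depths.append(f * baseline / disparity)
    return depths

# Usage: one matching event pair on row 5 (focal length in pixels,
# baseline in meters) gives depth = 500 * 0.1 / 10 = 5.0 m.
L = [(40, 5, 100.0, +1)]
R = [(30, 5, 120.0, +1)]
print(match_and_triangulate(L, R, f=500.0, baseline=0.1))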
Learning-based Approaches
Neural radiance fields for event-based 3D reconstruction (the common event loss is sketched after this subsection)
Deblurring techniques using event data
Efficient rendering methods for event cameras
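
A common supervision signal behind the learning-based methods above: render log intensity at two timestamps and require their difference to match the contrast threshold times the per-pixel signed event count. Below is a minimal sketch of that loss; render_log_intensity is a hypothetical stand-in for a NeRF or Gaussian-splatting forward pass, replaced here by a trivial linear model so the example runs.

import numpy as np

C = 0.25  # contrast threshold (assumed known; some methods estimate it)

def render_log_intensity(params, pixels, t):
    # Hypothetical stand-in for a NeRF/3DGS renderer: a linear model
    # over (x, y, t) features, only so the loss below is runnable.
    feats = np.array([[x, y, t] for x, y in pixels], dtype=float)
    return feats @ params

def event_loss(params, pixels, t0, t1, polarity_sums):
    # Predicted log-intensity change between t0 and t1 should equal
    # C times the signed event count accumulated at each pixel.
    pred = (render_log_intensity(params, pixels, t1)
            - render_log_intensity(params, pixels, t0))
    return np.mean((pred - C * polarity_sums) ** 2)

# Usage with two pixels and synthetic per-pixel event counts:
params = np.array([0.0, 0.0, 0.1])
print(event_loss(params, [(1, 2), (3, 4)], 0.0, 1.0, np.array([2, -1])))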
Hybrid Approaches
Combining geometric and learning-based methods
Event-assisted 3D deblur reconstruction
Event-based Gaussian splatting and pose-free reconstruction
Emerging Trends
Neural Radiance Fields
Overview of neural radiance fields in event-based 3D reconstruction
Key works: "EvDNeRF," "Event3DGS," "Mitigating motion blur"
Deblurring and Rendering
Event-based deblurring techniques
Efficient rendering methods for event cameras
Applications and Tools
Research Papers
Open event camera simulator
Methods for converting video to event data (a simplified sketch follows this list)
Real-time photometric stereo using event cameras
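
The video-to-event conversion listed above rests on a single idea, integrating per-pixel log intensity against a contrast threshold, which open simulators extend with frame interpolation and sensor-noise models. A simplified sketch of that core loop (function name and parameters are illustrative):

import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-3):
    """frames: list of HxW float arrays in [0, 1]; returns (x, y, t, p)
    tuples. Emits an event each time a pixel's log intensity drifts by
    the contrast threshold C relative to its last-event reference."""
    ref = np.log(frames[0] + eps)  # per-pixel log intensity at last event
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        diff = np.log(frame + eps) - ref
        while True:
            fired = np.abs(diff) >= C
            if not fired.any():
                break
            ys, xs = np.nonzero(fired)
            for x, y in zip(xs, ys):
                p = 1 if diff[y, x] > 0 else -1
                events.append((int(x), int(y), t, p))
                ref[y, x] += p * C  # step the reference toward the frame
                diff[y, x] -= p * C
    return events

# Usage: one pixel brightens sharply between two frames, producing a
# burst of positive events at that pixel.
f0 = np.zeros((2, 2)) + 0.1
f1 = f0.copy(); f1[0, 0] = 0.8
print(frames_to_events([f0, f1], [0.0, 1.0]))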
Future Directions
Research Gaps
Challenges in real-time processing and scalability
Integration with other sensor data for enhanced 3D reconstruction
Future Trends
Advancements in hardware for event cameras
Integration of AI and machine learning for improved accuracy and efficiency
Conclusion
Summary of Key Findings
Implications for Future Research
Call for Collaboration
Encouragement for interdisciplinary research
Potential for industry-academia partnerships
Basic info
Subject areas: computer vision and pattern recognition, graphics, artificial intelligence
Insights
What are the main components of the event-driven 3D reconstruction systems discussed in the survey?
What innovative methods are highlighted in the survey for improving event-based 3D reconstruction?
How do neural radiance fields and 3D Gaussian splatting contribute to advancements in event-driven 3D reconstruction?
What are the challenges in integrating event cameras with existing 3D reconstruction methods?