AMO: Adaptive Motion Optimization for Hyper-Dexterous Humanoid Whole-Body Control
Jialong Li, Xuxin Cheng, Tianshu Huang, Shiqi Yang, Ri-Zhao Qiu, Xiaolong Wang · May 06, 2025
Summary
AMO is a real-time humanoid whole-body control framework that integrates sim-to-real reinforcement learning (RL) with trajectory optimization. It trains on a hybrid dataset for motion imitation, which keeps the controller adaptable. Validated on a 29-DoF Unitree G1 humanoid, AMO outperforms baselines in stability and workspace expansion, and it enables autonomous task execution through imitation learning, demonstrating the system's versatility and robustness.
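To make the summary concrete, here is a minimal sketch of how the pieces it names could fit together: a high-level command passes through a learned adapter (stand-in for the module trained on trajectory-optimization data) and then into an RL whole-body policy that outputs joint targets. All class and function names (AmoAdapter, WholeBodyPolicy, control_step) and the dimensions are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of an AMO-style control loop (names and shapes are assumed).
import numpy as np

class AmoAdapter:
    """Maps high-level commands to whole-body reference targets.

    In an AMO-style system this module would be trained on a hybrid dataset of
    trajectory-optimization solutions; here it is only a placeholder.
    """
    def __call__(self, torso_command: np.ndarray, arm_command: np.ndarray) -> np.ndarray:
        # Placeholder: concatenate the commands as a stand-in for learned references.
        return np.concatenate([torso_command, arm_command])

class WholeBodyPolicy:
    """Sim-to-real RL policy that tracks the adapter's references."""
    def __init__(self, num_joints: int = 29):
        self.num_joints = num_joints

    def act(self, observation: np.ndarray, reference: np.ndarray) -> np.ndarray:
        # Placeholder: return zero joint-position targets instead of a trained network.
        return np.zeros(self.num_joints)

def control_step(obs, torso_cmd, arm_cmd, adapter, policy):
    """One control tick: command -> whole-body reference -> joint targets."""
    reference = adapter(torso_cmd, arm_cmd)
    return policy.act(obs, reference)

if __name__ == "__main__":
    adapter, policy = AmoAdapter(), WholeBodyPolicy(num_joints=29)
    obs = np.zeros(64)                            # proprioceptive observation (assumed size)
    torso_cmd = np.array([0.0, 0.2, 0.0, 0.75])   # e.g. roll, pitch, yaw, height (assumed)
    arm_cmd = np.zeros(14)                        # e.g. 7 DoF per arm (assumed)
    q_target = control_step(obs, torso_cmd, arm_cmd, adapter, policy)
    print(q_target.shape)                         # (29,)
```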
Introduction
Background
Overview of humanoid control challenges
Importance of sim-to-real RL and trajectory optimization
Objective
Aim of the AMO framework
Key objectives in motion imitation and task execution
Method
Hybrid Dataset Utilization
Composition of the hybrid dataset
Role in motion imitation (a data-layout sketch follows this subsection)
Data Collection
Techniques for gathering data
Data Preprocessing
Methods for preparing the data for training
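The sketch below shows what a record in such a hybrid dataset might look like and a typical normalization pass applied before supervised use. The field names, shapes, and the zero-mean/unit-variance preprocessing are assumptions for illustration; they are not taken from the paper.

```python
# Hypothetical hybrid-dataset record layout and a simple normalization pass.
from dataclasses import dataclass
import numpy as np

@dataclass
class MotionRecord:
    command: np.ndarray          # high-level command, e.g. torso orientation/height (assumed)
    joint_reference: np.ndarray  # whole-body joint targets from trajectory optimization (assumed)

def normalize(records):
    """Standardize commands and references to zero mean / unit variance.

    A common preprocessing step before supervised training; not the paper's recipe.
    """
    cmds = np.stack([r.command for r in records])
    refs = np.stack([r.joint_reference for r in records])
    cmd_mu, cmd_std = cmds.mean(0), cmds.std(0) + 1e-8
    ref_mu, ref_std = refs.mean(0), refs.std(0) + 1e-8
    normed = [MotionRecord((r.command - cmd_mu) / cmd_std,
                           (r.joint_reference - ref_mu) / ref_std) for r in records]
    stats = {"cmd_mu": cmd_mu, "cmd_std": cmd_std, "ref_mu": ref_mu, "ref_std": ref_std}
    return normed, stats

# Usage: build records from trajectory-optimization rollouts, then normalize.
records = [MotionRecord(np.random.randn(4), np.random.randn(29)) for _ in range(100)]
normed_records, stats = normalize(records)
```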
Integration of Sim-to-Real RL and Trajectory Optimization
Explanation of the integration process
Benefits of combining these techniques
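One common way to couple the two components is to let trajectory-optimization outputs define reference targets that the RL policy is rewarded for tracking. The sketch below shows such a reference-tracking reward term; the weights and the exponential shaping are assumptions for illustration, not the paper's reward design.

```python
# Hypothetical reference-tracking reward tying trajectory-optimization references
# into an RL objective (weights and shaping are assumed).
import numpy as np

def tracking_reward(q, q_ref, torso_rpy, torso_rpy_ref,
                    w_joint: float = 0.6, w_torso: float = 0.4) -> float:
    """Reward the policy for staying close to optimizer-generated references."""
    joint_err = np.linalg.norm(q - q_ref)                   # joint-space tracking error
    torso_err = np.linalg.norm(torso_rpy - torso_rpy_ref)   # torso orientation error
    # Exponential shaping keeps each term bounded in (0, 1].
    return w_joint * np.exp(-joint_err**2) + w_torso * np.exp(-torso_err**2)

# Example: perfect tracking yields the maximum reward of w_joint + w_torso = 1.0.
q = q_ref = np.zeros(29)
rpy = rpy_ref = np.zeros(3)
print(tracking_reward(q, q_ref, rpy, rpy_ref))  # 1.0
```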
Validation
Platform Selection
Description of the Unitree G1
Suitability for AMO testing
Performance Metrics
Criteria for evaluating stability and workspace expansion
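As an illustration of how such criteria could be quantified, the sketch below computes two plausible proxies: a stability score from torso tilt over a rollout, and a reachable-workspace volume from logged end-effector positions. Both metrics are assumptions for illustration, not the paper's exact evaluation protocol.

```python
# Hypothetical evaluation metrics: torso-tilt stability proxy and workspace volume.
import numpy as np
from scipy.spatial import ConvexHull

def stability_score(torso_rpy_log: np.ndarray) -> float:
    """Mean torso tilt magnitude (rad) over a rollout; lower is more stable."""
    roll, pitch = torso_rpy_log[:, 0], torso_rpy_log[:, 1]
    return float(np.mean(np.hypot(roll, pitch)))

def workspace_volume(ee_positions: np.ndarray) -> float:
    """Volume (m^3) of the convex hull of reached end-effector positions."""
    return float(ConvexHull(ee_positions).volume)

# Example with synthetic logs (shapes: T x 3 for both).
rpy_log = 0.05 * np.random.randn(1000, 3)
ee_log = np.random.uniform(-0.4, 0.4, size=(1000, 3))
print(stability_score(rpy_log), workspace_volume(ee_log))
```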
Comparison with Baselines
Baseline methods for comparison
Results highlighting AMO's superiority
Results
Stability Analysis
Quantitative and qualitative results
Workspace Expansion
Demonstration of expanded operational capabilities
Task Execution
Examples of autonomous task execution
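The outline notes that autonomous tasks are executed through imitation learning. A minimal behavior-cloning loop in the spirit of that pipeline is sketched below, assuming teleoperated demonstrations have been logged as (observation, action) pairs; the network size, loss, and optimizer settings are illustrative choices, not the paper's.

```python
# Minimal behavior-cloning sketch for task-level control (all settings assumed).
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Small MLP mapping observations to whole-body action targets."""
    def __init__(self, obs_dim: int = 64, act_dim: int = 29, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def train_bc(obs: torch.Tensor, actions: torch.Tensor, epochs: int = 10) -> BCPolicy:
    """Fit the policy to teleoperated (obs, action) pairs with an MSE loss."""
    policy = BCPolicy(obs.shape[1], actions.shape[1])
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(policy(obs), actions)
        loss.backward()
        opt.step()
    return policy

# Example with synthetic demonstrations (replace with logged teleoperation data).
demo_obs, demo_act = torch.randn(512, 64), torch.randn(512, 29)
policy = train_bc(demo_obs, demo_act)
```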
System Versatility and Robustness
Discussion of the framework's adaptability and reliability
Conclusion
Summary of Findings
Future Work
Potential areas for further research
Impact and Applications
Real-world implications and potential uses
Keywords: robotics, machine learning, artificial intelligence
Insights
What are the advantages of using a hybrid dataset in AMO for motion imitation?
In what ways does AMO ensure adaptability across different humanoid platforms?
How does AMO utilize sim-to-real reinforcement learning for motion imitation?
What are the main components of the AMO framework for humanoid control?