OmniBench: Towards The Future of Universal Omni-Language Models

Yizhi Li, Ge Zhang, Yinghao Ma, Ruibin Yuan, Kang Zhu, Hangyu Guo, Yiming Liang, Jiaheng Liu, Jian Yang, Siwei Wu, Xingwei Qu, Jinjie Shi, Xinyue Zhang, Zhenzhu Yang, Xiangzhou Wang, Zhaoxiang Zhang, Zachary Liu, Emmanouil Benetos, Wenhao Huang, Chenghua Lin
September 23, 2024

Summary

OmniBench is a benchmark for universal omni-language models that evaluates their ability to recognize, interpret, and reason over visual, acoustic, and textual inputs simultaneously. It shows that open-source models lag in tri-modal instruction following and reasoning, with proprietary models outperforming their open-source counterparts. By identifying these areas for improvement, the benchmark aims to guide research and development in multimodal systems.
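To make the evaluation setup concrete, below is a minimal sketch of what a tri-modal benchmark item and an accuracy loop could look like. The `TriModalItem` schema and the `model.predict` interface are illustrative assumptions for this sketch, not OmniBench's actual data format or API.

```python
from dataclasses import dataclass

# Hypothetical schema for illustration only; OmniBench's actual data
# format and evaluation interface may differ.
@dataclass
class TriModalItem:
    image_path: str    # path to the visual input
    audio_path: str    # path to the acoustic input
    question: str      # text instruction/question spanning both modalities
    options: list[str]  # multiple-choice candidate answers
    answer_index: int  # index of the correct option


def evaluate(model, items: list[TriModalItem]) -> float:
    """Return the model's accuracy over a set of tri-modal items.

    `model.predict` is a stand-in for however a given omni-language
    model jointly consumes an image, an audio clip, and a text
    question, returning the index of its chosen option.
    """
    if not items:
        return 0.0
    correct = sum(
        model.predict(it.image_path, it.audio_path, it.question, it.options)
        == it.answer_index
        for it in items
    )
    return correct / len(items)
```

The key point the sketch captures is that every item requires all three modalities at once, so a model that ignores any one input stream cannot reliably recover the correct answer.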

Key findings

• Open-source omni-language models show clear limitations in tri-modal instruction following and reasoning.
• Proprietary models outperform open-source ones across the benchmark.
