VidEgoThink: Assessing Egocentric Video Understanding Capabilities for Embodied AI Paper • 2410.11623 • Published Oct 15, 2024
AV-Odyssey Bench: Can Your Multimodal LLMs Really Understand Audio-Visual Information? Paper • 2412.02611 • Published Dec 3, 2024
HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs Paper • 2311.09774 • Published Nov 16, 2023
ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model Paper • 2402.11684 • Published Feb 18, 2024
Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications Paper • 2408.11878 • Published Aug 20, 2024
LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture Paper • 2409.02889 • Published Sep 4, 2024
HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale Paper • 2406.19280 • Published Jun 27, 2024
SEED-Bench-2: Benchmarking Multimodal Large Language Models Paper • 2311.17092 • Published Nov 28, 2023
SEED-Bench-2-Plus: Benchmarking Multimodal Large Language Models with Text-Rich Visual Comprehension Paper • 2404.16790 • Published Apr 25, 2024
SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension Paper • 2307.16125 • Published Jul 30, 2023
Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners Paper • 2303.02151 • Published Mar 3, 2023
Silkie: Preference Distillation for Large Visual Language Models Paper • 2312.10665 • Published Dec 17, 2023