---
license: apache-2.0
task_categories:
  - visual-question-answering
  - question-answering
language:
  - en
  - zh
size_categories:
  - 1K<n<10K
configs:
  - config_name: DynVQA_en
    data_files:
      - split: test
        path: test/DynVQA_en/DynVQA_en.202412.jsonl
    default: true
  - config_name: DynVQA_zh
    data_files:
      - split: test
        path: test/DynVQA_zh/DynVQA_zh.202412.jsonl
---

# πŸ“š Dyn-VQA Dataset

πŸ“‘ Dataset for the paper *Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent*.

🌟 This dataset is linked to the accompanying GitHub repository at this URL.

Each JSON item in the Dyn-VQA dataset is organized in the following format:

```json
{
    "image_url": "https://www.pcarmarket.com/static/media/uploads/galleries/photos/uploads/galleries/22387-pasewark-1986-porsche-944/.thumbnails/IMG_7102.JPG.jpg/IMG_7102.JPG-tiny-2048x0-0.5x0.jpg",
    "question": "What is the model of car from this brand?",
    "question_id": "qid",
    "answer": ["保既捷 944", "Porsche 944."]
}
```
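
A minimal loading sketch, assuming the Hugging Face `datasets` library; `<repo_id>` is a placeholder for this dataset's repository id:

```python
from datasets import load_dataset

# Each config (DynVQA_en, DynVQA_zh) exposes a single "test" split,
# backed by the JSONL file listed in the YAML metadata above.
ds = load_dataset("<repo_id>", "DynVQA_en", split="test")

item = ds[0]
print(item["question"])   # the natural-language question
print(item["answer"])     # list of acceptable answer strings
print(item["image_url"])  # URL of the associated image, fetched at query time
```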

πŸ”₯ Dyn-VQA will be updated regularly. Latest version: 202502.
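
Because the dataset is versioned by dated files, you can pin an experiment to a specific snapshot by loading its JSONL directly. A sketch, again with `<repo_id>` as a placeholder:

```python
from datasets import load_dataset

# Load one dated snapshot (here 202412) instead of the default config.
url = "https://huggingface.co/datasets/<repo_id>/resolve/main/test/DynVQA_en/DynVQA_en.202412.jsonl"
ds = load_dataset("json", data_files=url, split="train")  # the "json" builder names its single split "train"
```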

πŸ“ Citation

```bibtex
@article{li2024benchmarkingmultimodalretrievalaugmented,
      title={Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent},
      author={Yangning Li and Yinghui Li and Xinyu Wang and Yong Jiang and Zhen Zhang and Xinran Zheng and Hui Wang and Hai-Tao Zheng and Pengjun Xie and Philip S. Yu and Fei Huang and Jingren Zhou},
      year={2024},
      eprint={2411.02937},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.02937},
}
```

When citing our work, please also consider citing the original papers; the relevant citation information is listed here.