## MMToM-QA: Multimodal Theory of Mind Question Answering <br> <sub>🏆 Outstanding Paper Award at ACL 2024</sub>
[\[🏠Homepage\]](https://chuanyangjin.com/mmtom-qa) [\[💻Code\]](https://github.com/chuanyangjin/MMToM-QA) [\[📝Paper\]](https://arxiv.org/abs/2401.08743)
MMToM-QA is the first multimodal benchmark for evaluating machine Theory of Mind (ToM), the ability to understand people's minds. It systematically evaluates Theory of Mind on multimodal data as well as on each individual modality. The benchmark consists of 600 questions spanning seven types, which probe belief inference and goal inference in rich and diverse situations: each of the three belief inference types has 100 questions (300 belief questions in total), and each of the four goal inference types has 75 questions (300 goal questions in total).
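The breakdown above can be sanity-checked with a few lines of Python; the split into three belief types and four goal types is derived from the stated per-type counts (300/100 and 300/75), not an official taxonomy listing:

```python
# Question counts as stated in the description; the number of types per
# category is derived arithmetic, not taken from the paper's taxonomy.
belief_questions_per_type = 100
goal_questions_per_type = 75

num_belief_types = 300 // belief_questions_per_type  # 3 belief types
num_goal_types = 300 // goal_questions_per_type      # 4 goal types

total_types = num_belief_types + num_goal_types
total_questions = (num_belief_types * belief_questions_per_type
                   + num_goal_types * goal_questions_per_type)

print(total_types, total_questions)  # 7 600
```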
Currently, only the text-only version of MMToM-QA is available on Hugging Face. For the multimodal or video-only versions, please visit the GitHub repository: https://github.com/chuanyangjin/MMToM-QA

Here is the [**leaderboard**](https://chuanyangjin.com/mmtom-qa-leaderboard) for MMToM-QA.

## Citation
If you find MMToM-QA interesting or useful, please cite the paper:
```bibtex
@article{jin2024mmtom,
  title={{MMToM-QA}: Multimodal Theory of Mind Question Answering},
  author={Jin, Chuanyang and Wu, Yutong and Cao, Jing and Xiang, Jiannan and Kuo, Yen-Ling and Hu, Zhiting and Ullman, Tomer and Torralba, Antonio and Tenenbaum, Joshua B and Shu, Tianmin},
  journal={arXiv preprint arXiv:2401.08743},
  year={2024}
}
```