Column schema of the `datasets` split previewed below:

| Column | Type | Range / notes |
|---|---|---|
| `_id` | string | length 24 |
| `id` | string | length 4–121 |
| `author` | string | length 2–42 |
| `cardData` | string | length 2–1.09M |
| `disabled` | bool | 1 class |
| `gated` | string | 3 classes |
| `lastModified` | timestamp[ns] | 2021-02-05 16:03:35 – 2026-01-26 13:13:33 |
| `likes` | int64 | 0–9.57k |
| `trendingScore` | float64 | 0–117 |
| `private` | bool | 1 class |
| `sha` | string | length 40 |
| `description` | string | length 0–6.67k, nullable |
| `downloads` | int64 | 0–1.78M |
| `downloadsAllTime` | int64 | 0–143M |
| `tags` | list | length 1–7.92k |
| `createdAt` | timestamp[ns] | 2022-03-02 23:29:22 – 2026-01-26 13:13:22 |
| `paperswithcode_id` | string | 687 classes |
| `citation` | string | length 0–10.7k, nullable |
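A minimal loading sketch for this schema. The repo path `your-org/hub-stats` is a placeholder for this dataset's actual id; the split names (`models`, `datasets`, `papers`) come from the changelog below.

    import json
    from datasets import load_dataset

    # Stream the "datasets" split and decode the cardData JSON of the first row.
    rows = load_dataset("your-org/hub-stats", split="datasets", streaming=True)
    row = next(iter(rows))
    card = json.loads(row["cardData"]) if row["cardData"] else {}
    print(row["id"], row["likes"], card.get("license"))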
Preview rows from the `datasets` split, one record per dataset:

**Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b**
- _id: 69524c8ad001e56220ced9bc
- author: Alibaba-Apsara
- cardData: {"license": "cc-by-4.0", "task_categories": ["text-generation"], "language": ["en"], "tags": ["code", "math", "scientific-qa", "instruction-following", "reasoning", "thinking", "gpt-oss-120b", "distill"], "size_categories": ["435K"], "configs": [{"config_name": "stage1", "data_files": "Superior-Reasoning-SFT-gpt-oss-120b-stage1-train-data.jsonl", "features": [{"name": "uuid", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "meta", "dtype": "string"}]}, {"config_name": "stage2", "data_files": "Superior-Reasoning-SFT-gpt-oss-120b-stage2-train-data.jsonl", "features": [{"name": "uuid", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "meta", "dtype": "string"}]}]}
- disabled: false
- gated: False
- lastModified: 2026-01-15T06:39:55
- likes: 271
- trendingScore: 117
- private: false
- sha: e9d54e2a3f376fd5c62cafd3c4c99b304cdda698
- description: Superior-Reasoning-SFT-gpt-oss-120b. 🚀 Overview: The Superior-Reasoning-SFT-gpt-oss-120b dataset is a high-quality, open-source collection containing 435K samples designed to democratize the training of high-performance Long Chain-of-Thought (Long-CoT) models. Unlike standard distilled datasets that rely on random sampling or heuristic filtering, Superior-Reasoning-SFT-gpt-oss-120b is constructed using a principled Distribution-Aligned Sequence… See the full description on the dataset page: https://huggingface.co/datasets/Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b.
- downloads: 20,051
- downloadsAllTime: 20,051
- tags: task_categories:text-generation, language:en, license:cc-by-4.0, size_categories:100K<n<1M, format:json, modality:text, library:datasets, library:pandas, library:polars, library:mlcroissant, arxiv:2601.09088, arxiv:2512.20908, region:us, code, math, scientific-qa, instruction-following, reasoning, thinking, gpt-oss-120b, distill
- createdAt: 2025-12-29T09:40:26
- paperswithcode_id: null
- citation: null
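The cardData above defines two configs (`stage1`, `stage2`), each backed by a single JSONL file with `uuid`/`input`/`output`/`domain`/`meta` columns. A minimal loading sketch, assuming the default `train` split name that `datasets` assigns to a lone data file:

    from datasets import load_dataset

    # Load the stage1 config; a single JSONL data file maps to a "train" split.
    stage1 = load_dataset(
        "Alibaba-Apsara/Superior-Reasoning-SFT-gpt-oss-120b",
        "stage1",
        split="train",
    )
    print(stage1[0]["domain"], stage1[0]["input"][:80])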
**sojuL/RubricHub_v1**
- _id: 696b2406e6c69ff4f49745f4
- author: sojuL
- cardData: {"license": "apache-2.0", "language": ["zh", "en"], "tags": ["medical", "science", "wirting", "isntruction", "chat", "general"], "pretty_name": "RubricHub", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "reinforcement-learning", "question-answering"]}
- disabled: false
- gated: False
- lastModified: 2026-01-20T07:16:51
- likes: 111
- trendingScore: 107
- private: false
- sha: bec50742963ed3672391fecbcc4b60067b9fa8bc
- description: RubricHub_v1. RubricHub is a large-scale (approximately 110K), multi-domain dataset that provides high-quality rubric-based supervision for open-ended generation tasks. It is constructed via an automated coarse-to-fine rubric generation framework, which integrates principle-guided synthesis, multi-model aggregation, and difficulty evolution to produce comprehensive and highly discriminative evaluation criteria, overcoming the supervision ceiling of coarse or static rubrics.… See the full description on the dataset page: https://huggingface.co/datasets/sojuL/RubricHub_v1.
- downloads: 507
- downloadsAllTime: 507
- tags: task_categories:text-generation, task_categories:reinforcement-learning, task_categories:question-answering, language:zh, language:en, license:apache-2.0, size_categories:100K<n<1M, format:parquet, modality:text, library:datasets, library:dask, library:polars, library:mlcroissant, arxiv:2601.08430, region:us, medical, science, wirting, isntruction, chat, general
- createdAt: 2026-01-17T05:54:14
- paperswithcode_id: null
- citation: null
**lightonai/LightOnOCR-mix-0126**
- _id: 6969078587ce326016ddda46
- author: lightonai
- cardData: {"dataset_info": {"features": [{"name": "key", "dtype": "string"}, {"name": "page_idx", "dtype": "int64"}, {"name": "content", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "element_counts", "struct": [{"name": "formulas", "dtype": "int64"}, {"name": "images", "dtype": "int64"}, {"name": "tables", "dtype": "int64"}]}, {"name": "token_length", "dtype": "int64"}]}], "splits": [{"name": "pdfa_train", "num_bytes": 38584453222, "num_examples": 16428833}, {"name": "pdfa_validation", "num_bytes": 4689687, "num_examples": 2000}], "download_size": 21111271721, "dataset_size": 38589142909}, "configs": [{"config_name": "default", "data_files": [{"split": "pdfa_train", "path": "data/pdfa_train-*"}, {"split": "pdfa_validation", "path": "data/pdfa_validation-*"}]}], "license": "other", "task_categories": ["text-to-image", "object-detection"], "language": ["en", "fr", "de", "es", "it", "ja", "ru", "pl", "nl", "zh", "pt", "bg", "tr", "ur", "hi", "th", "ar", "sw", "el", "vi"], "tags": ["ocr"], "size_categories": ["10M<n<100M"], "pretty_name": "LightOnOCR-mix"}
- disabled: false
- gated: False
- lastModified: 2026-01-23T08:39:35
- likes: 82
- trendingScore: 82
- private: false
- sha: 09e11af7f0aacde1553b4d164049831e5bb7adb7
- description: LightOnOCR-mix-0126. LightOnOCR-mix-0126 is a large-scale OCR training dataset built via distillation: a strong vision–language model is prompted to produce naturally ordered full-page transcriptions (Markdown with LaTeX math spans and HTML tables) from rendered document pages. The dataset is designed as supervision for end-to-end OCR / document-understanding models that aim to output clean, human-readable text in a consistent format. This repository releases the PDFA-derived… See the full description on the dataset page: https://huggingface.co/datasets/lightonai/LightOnOCR-mix-0126.
- downloads: 1,061
- downloadsAllTime: 1,061
- tags: task_categories:text-to-image, task_categories:object-detection, language:en, language:fr, language:de, language:es, language:it, language:ja, language:ru, language:pl, language:nl, language:zh, language:pt, language:bg, language:tr, language:ur, language:hi, language:th, language:ar, language:sw, language:el, language:vi, license:other, size_categories:10M<n<100M, format:parquet, modality:text, library:datasets, library:dask, library:polars, library:mlcroissant, arxiv:2601.14251, region:eu, ocr
- createdAt: 2026-01-15T15:28:05
- paperswithcode_id: null
- citation: null
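The dataset_info above declares `pdfa_train` and `pdfa_validation` splits with per-page `content` and element-count metadata. A minimal inspection sketch (field names are taken from the features list; behavior is otherwise unverified):

    from datasets import load_dataset

    # The 2,000-example validation split is cheap to pull for a quick look.
    val = load_dataset("lightonai/LightOnOCR-mix-0126", split="pdfa_validation")
    page = val[0]
    print(page["key"], page["page_idx"])
    print(page["metadata"]["element_counts"])  # counts of formulas/images/tables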
**facebook/action100m-preview**
- _id: 69676b65aeecdadc87f8da8e
- author: facebook
- cardData: {"license": "fair-noncommercial-research-license", "language": ["en"], "tags": ["video", "action"], "size_categories": ["10M<n<100M"]}
- disabled: false
- gated: False
- lastModified: 2026-01-14T14:24:13
- likes: 120
- trendingScore: 79
- private: false
- sha: c9404b5c9772d6883a2f062945273f171b585275
- description: Action100M: A Large-scale Video Action Dataset. Our data can be loaded from the 🤗 Hugging Face repo at facebook/action100m-preview, where we released 10% of the full Action100M for preview. For examples of loading from local parquet files (from the cloned repo) and visualization, see our GitHub repo.

        from datasets import load_dataset

        dataset = load_dataset(
            "parquet",
            data_files="hf://datasets/facebook/Action100M-preview/data/*.parquet",
            streaming=True,
        )
        it =…

  See the full description on the dataset page: https://huggingface.co/datasets/facebook/action100m-preview.
- downloads: 3,306
- downloadsAllTime: 3,306
- tags: language:en, license:fair-noncommercial-research-license, size_categories:100K<n<1M, format:parquet, modality:text, modality:video, library:datasets, library:dask, library:polars, library:mlcroissant, region:us, video, action
- createdAt: 2026-01-14T10:09:41
- paperswithcode_id: null
- citation: null
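The snippet quoted in the description is cut off at `it =`. A hedged continuation sketch, assuming the parquet loader's default `train` split (the card's actual code may differ):

    # load_dataset("parquet", ..., streaming=True) returns an IterableDatasetDict
    # whose single split is named "train" by default.
    it = iter(dataset["train"])
    sample = next(it)
    print(sample.keys())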
**Pageshift-Entertainment/LongPage**
- _id: 68ba0ffd343a84103b603c45
- author: Pageshift-Entertainment
- cardData: {"pretty_name": "LongPage", "dataset_name": "LongPage", "library_name": "datasets", "language": ["en"], "license": ["cc-by-4.0", "other"], "task_categories": ["text-generation"], "task_ids": ["language-modeling", "text2text-generation"], "size_categories": ["n<1K"], "source_datasets": ["original"], "annotations_creators": ["machine-generated"], "language_creators": ["found"], "multilinguality": ["monolingual"], "tags": ["long-context", "cot", "reasoning", "creative-writing", "Cold start reasoning data"], "pretty_visual": "assets/cover_image.png"}
- disabled: false
- gated: False
- lastModified: 2026-01-20T14:01:26
- likes: 121
- trendingScore: 70
- private: false
- sha: 27d907b6a9f92682110e68ef91f001b4812698d6
- description: Overview 🚀📚. The first comprehensive dataset for training AI models to write complete novels with sophisticated reasoning. 🧠 Hierarchical Reasoning Architecture — Multi-layered planning traces including character archetypes, story arcs, world rules, and scene breakdowns. A complete cognitive roadmap for long-form narrative construction. 📖 Complete Novel Coverage — From 40,000 to 600,000+ tokens per book, spanning novellas to epic series with consistent quality throughout. ⚡… See the full description on the dataset page: https://huggingface.co/datasets/Pageshift-Entertainment/LongPage.
- downloads: 4,141
- downloadsAllTime: 15,438
- tags: task_categories:text-generation, task_ids:language-modeling, task_ids:text2text-generation, annotations_creators:machine-generated, language_creators:found, multilinguality:monolingual, source_datasets:original, language:en, license:cc-by-4.0, license:other, size_categories:1K<n<10K, format:parquet, format:optimized-parquet, modality:text, library:datasets, library:dask, library:polars, library:mlcroissant, region:us, long-context, cot, reasoning, creative-writing, Cold start reasoning data
- createdAt: 2025-09-04T22:17:33
- paperswithcode_id: null
- citation: null
**FOMO-MRI/FOMO300K**
- _id: 69660562d230db5333514344
- author: FOMO-MRI
- cardData (the gating prompt and Data Use Agreements are reproduced verbatim below):
{"license": "other", "license_name": "license", "tags": ["brain", "mri", "ssl", "foundation_model", "3d", "image"], "pretty_name": "FOMO-300K", "size_categories": ["100K<n<1M"], "task_categories": ["image-feature-extraction", "zero-shot-classification"], "viewer": false, "extra_gated_prompt": "\nThis collection of datasets is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. Each individual dataset within the collection retains its original license, which is reported in the corresponding dataset folder. Some datasets are additionally subject to Data Use Agreements (DUAs), which are reported below and in the relevant dataset folders. Users must comply with the applicable license terms and any associated DUAs.\n\nYou are free to:\nShare \u2014 copy and redistribute the material in any medium or format\nAdapt \u2014 remix, transform, and build upon the material\nThe licensor cannot revoke these freedoms as long as you follow the license terms.\n\nUnder the following terms:\nAttribution \u2014 You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.\nNonCommercial \u2014 You may not use the material for commercial purposes.\nShareAlike \u2014 If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.\nNo additional restrictions \u2014 You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.\n\nNotices:\nYou do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.\n\nNo warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.\n\nFull license: https://creativecommons.org/licenses/by-nc-sa/4.0/\n\nDUAs:\n\nOASIS Data Use Agreement\n\nThe OASIS data are distributed to the greater scientific community under the following terms:\n1. User will not use the OASIS datasets, either alone or in concert with any other information, to make any effort to identify or contact individuals who are or may be the sources of the information in the dataset. If User inadvertently receives identifiable information or otherwise identifies a subject, User will immediately notify OASIS and follow OASISs reasonable written instructions, which may include the return or destruction of identifiable information.\n2. User is strictly prohibited from generating or using images or comparable representations of the face, head, or body for facial recognition, re-identification, or other purposes that could allow the identities of research participants to be readily ascertained.\n3. User will not use or further disclose the OASIS-3 or OASIS-4 except as required by law. User shall not share, distribute, or otherwise make available the OASIS data, in whole or in part, to any third party, including collaborators, without prior written permission from OASIS. All collaborators must independently apply for access and agree to these terms. 
Additionally, User will not use or further disclose any derivative works or derivative data of the OASIS datasets, in any case in whole or in part, that could be used to reconstruct a facial image. User shall report to OASIS immediately upon Users discovery of any unauthorized use or disclosure not permitted by this Data Use Agreement. User shall provide the following information: (1) the nature of the use or disclosure; (2) the information used or disclosed; (3) the identity of the persons and/or entities that made the use or disclosure; and (4) what corrective action will be taken by User as a result of the use or disclosure. User shall take any other reasonable actions available to it to mitigate any detrimental effects of the use or disclosure.\n4. User agrees to implement appropriate administrative, physical, and technical safeguards to protect the OASIS data from unauthorized access, use or disclosure. OASIS data must be stored on secure, access-controlled systems, and only the User authorized under this Data Use Agreement may access the data.\n5. OASIS data are provided for non-commercial, academic research purposes only. Any commercial use, including but not limited to the sale of data or commercial consulting, is strictly prohibited without explicit, prior written authorization from OASIS.\n6. User agrees to retain OASIS data only for as long as necessary to fulfill the research purposes described in Users application. Upon completion of the research or upon request by OASIS, User will securely destroy or return all copies of the data.\n7. User will acknowledge the use of OASIS data and data derived from OASIS data when publicly presenting any results or algorithms that benefitted from their use. Papers, book chapters, books, posters, oral presentations, and all other printed\nand digital presentations of results derived from OASIS data should contain the following: \n - Acknowledgments: Data were provided [in part] by OASIS [insert appropriate OASIS source info]\n (a) OASIS-1: Cross-Sectional: Principal Investigators: D. Marcus, R, Buckner, J, Csernansky J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382\n (b) OASIS-2: Longitudinal: Principal Investigators: D. Marcus, R, Buckner, J. Csernansky, J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382\n (c) OASIS-3: Longitudinal Multimodal Neuroimaging: Principal Investigators: T. Benzinger, D. Marcus, J. Morris; NIH P30 AG066444, P50 AG00561, P30 NS09857781, P01 AG026276, P01 AG003991, R01 AG043434, UL1 TR000448, R01 EB009352. AV-45 doses were provided by Avid Radiopharmaceuticals, a wholly owned subsidiary of Eli Lilly.\n (d) OASIS-3_AV1451: Principal Investigators: T. Benzinger, J. Morris; NIH P30 AG066444, AW00006993. AV-1451 doses were provided by Avid Radiopharmaceuticals, a wholly owned subsidiary of Eli Lilly.\n (e) OASIS-4: Clinical Cohort: Principal Investigators: T. Benzinger, L. Koenig, P. LaMontagne\n - Citation: The specific publications that are appropriate to cite in any given study will depend on what OASIS data were used and for what purposes. 
An annotated and current list of OASIS publications is available at http://www.oasis- brains.org.\n (a) OASIS-1: Cross-Sectional: https://doi.org/10.1162/jocn.2007.19.9.1498\n (b) OASIS-2: Longitudinal: https://doi.org/10.1162/jocn.2009.21407\n (c) OASIS-3: Longitudinal Multimodal Neuroimaging: https://doi.org/10.1101/2019.12.13.19014902\n (d) OASIS-4: Clinical Cohort: https://doi.org/10.1016/j.nicl.2020.102248\n - All proposed publications or presentations using Florbetapir F18 (AV45) or Flortaucipir F18 (AV1451) PET data must be submitted to Avid Radiopharmaceuticals for review and comment thirty days prior to such presentation or publication for review of intellectual property interests. See Imaging data dictionary for contact information and details.\n8. User agree to provide the Knight ADRC with information on Users use of OASIS data, upon request.\n9. Failure to abide by these data use terms may result in termination of your right to access and use OASIS data. In the event of breach of this Data Use Agreement, OASIS reserves the right to pursue all remedies available at law or in equity, including but not limited to termination of access, notification of the Users institution, and legal action.\n\nBraTS-GEN Data Use Agreement\n\nYou are free to use and/or refer to the BraTS datasets in your own research, provided that you always cite the flagship manuscript (published or pre-published) resulting from the challenge, as well as the following challenge-specific manuscripts:\n\nDataset:\n- Any dataset and/or Med-Perf client\n - Citations Needed\n \u2022 A. Karargyris, R. Umeton, M.J. Sheller, A. Aristizabal, J. George, A. Wuest, S. Pati, et al. \"Federated benchmarking of medical artificial intelligence with MedPerf\". Nature Machine Intelligence. 5:799810 (2023).\n \u2022 DOI: https://doi.org/10.1038/s42256-023-00652-2\n- BraTS-GLI\n - Citations Needed\n 1 U.Baid, et al., The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification, arXiv:2107.02314, 2021.\n 2 B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al. \"The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)\", IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015) DOI: https://doi.org/10.1109 TMI.2014.2377694\n 3 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al., \"Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features\", Nature Scientific Data, 4:170117 (2017) DOI: https://doi.org/10.1038/sdata.2017.117\n In addition, if there are no restrictions imposed from the journal/conference you submit your paper about citing \"Data Citations\", please be specific and also cite the following:\n 4 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., \"Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection\", The Cancer Imaging Archive, 2017. DOI: https://doi.org/10.7937/K9/TCIA.2017.KLXWJJ1Q\n 5 S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., \"Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection\", The Cancer Imaging Archive, 2017. 
DOI: https://doi.org/10.7937/K9/TCIA.2017.GJQ7R0EF\n- BraTS-MEN\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2305.07642\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.07642\n- BraTS-MET\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2306.00838\n \u2022 DOI: https://doi.org/10.48550/arXiv.2306.00838\n- BraTS-PED\n - Citations Needed\n \u2022 arXiv: https://arxiv.org/abs/2305.17033\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.17033\n- BraTS-SSA\n - Citations Needed\n 1 Adewole M, Rudie JD, Gbadamosi A, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Glioma Segmentation in Sub-Saharan Africa Patient Population (BraTS-Africa). arXiv:2305.19369 [eess.IV] (2023).\n \u2022 arXiv: https://arxiv.org/abs/2305.19369\n \u2022 DOI: https://doi.org/10.48550/arXiv.2305.19369\n \nNote: Challenge participants agree to cite the initial challenge pre publication manuscript (or the final publication manuscript). You will be contacted through your Synapse affiliated email when the manuscript has been released for citation. Note: Use of the BraTS datasets for creating and submitting benchmark results for publication on MLPerf.org is considered non-commercial use. It is further acceptable to republish results published on MLPerf.org, as well as to create unverified benchmark results consistent with the MLPerf.org rules in other locations. Please note that you should always adhere to the BraTS data usage guidelines and cite appropriately the aforementioned publications, as well as to the terms of use required by MLPerf.org.\n\nGSP Open Access Data Use Terms\n\nI request access to data collected as part of the Brain Genomics Superstruct Project (GSP) of Harvard University and the Massachusetts General Hospital, and I agree to the following:\n1. I will not attempt to establish the identity of or attempt to contact any of the included human subjects.\n2. I will not attempt to link any of the distributed data to any other data that might contain information about the included human subjects.\n3. I understand that under no circumstances will the code that would link these data to Protected Health Information be given to me, nor will any additional information about individual human subjects be released to me under these Open Access Data Use Terms.\n4. I will comply with all relevant rules and regulations imposed by my institution. This may mean that I need my research to be approved or declared exempt by a committee that oversees research on human subjects e.g., my Internal Review Board or Ethics Committee. Different committees operate under different national, state, and local laws and may interpret regulations differently, so it is important to ask about this.\n5. I may redistribute original GSP Open Access data and any derived data as long as the data are redistributed under these same Data Use Terms.\n6. I will acknowledge the use of GSP data and data derived from GSP data when publicly presenting any results or algorithms that benefitted from their use.\n (a) Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from GSP data should contain the following wording in the acknowledgments section: Data were provided [in part] by the Brain Genomics Superstruct Project of Harvard University and the Massachusetts General Hospital, (Principal Investigators: Randy Buckner, Joshua Roffman, and Jordan Smoller), with support from the Center for Brain Science Neuroinformatics Research Group, the Athinoula A. 
Martinos Center for Biomedical Imaging, and the Center for Human Genetic Research. 20 individual investigators at Harvard and MGH generously contributed data to the overall project.\n (b) Authors of publications or presentations using GSP data should cite relevant publications describing the methods used by the GSP to acquire and process the data. The specific publications that are appropriate to cite in any given study will depend on what GSP data were used and for what purposes. An annotated and appropriately up-to-date list of publications that may warrant consideration is available at http://neuroinformatics.harvard.edu/gsp/\n (c) The GSP as a consortium should not be included as an author of publications or presentations if this authorship would be based solely on the use of GSP data.\n7. Failure to abide by these guidelines will result in termination of my privileges to access GSP data.\n\nHCP WU-Minn and Test-Retest Data Use Terms\n\nI request access to data collected by the Washington University - University of Minnesota Consortium of the Human Connectome Project (WU-Minn HCP), and I agree to the following:\n1. I will not attempt to establish the identity of or attempt to contact any of the included human subjects.\n2. I understand that under no circumstances will the code that would link these data to Protected Health Information be given to me, nor will any additional information about individual human subjects be released to me under these Open Access Data Use Terms.\n3. I will comply with all relevant rules and regulations imposed by my institution. This may mean that I need my research to be approved or declared exempt by a committee that oversees research on human subjects, e.g. my IRB or Ethics Committee. The released HCP data are not considered de-identified, insofar as certain combinations of HCP Restricted Data (available through a separate process) might allow identification of individuals. Different committees operate under different national, state and local laws and may interpret regulations differently, so it is important to ask about this. If needed and upon request, the HCP will provide a certificate stating that you have accepted the HCP Open Access Data Use Terms.\n4. I may redistribute original WU-Minn HCP Open Access data and any derived data as long as the data are redistributed under these same Data Use Terms.\n5. I will acknowledge the use of WU-Minn HCP data and data derived from WU-Minn HCP data when publicly presenting any results or algorithms that benefitted from their use.\n (a) Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from HCP data should contain the following wording in the acknowledgments section: \"Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.\"\n (b) Authors of publications or presentations using WU-Minn HCP data should cite relevant publications describing the methods used by the HCP to acquire and process the data. The specific publications that are appropriate to cite in any given study will depend on what HCP data were used and for what purposes. 
An annotated and appropriately up-to-date list of publications that may warrant consideration is available at http://www.humanconnectome.org/about/acknowledgehcp.html\n (c) The WU-Minn HCP Consortium as a whole should not be included as an author of publications or presentations if this authorship would be based solely on the use of WU-Minn HCP data.\n6. Failure to abide by these guidelines will result in termination of my privileges to access WU-Minn HCP data.\n\nBy requesting access, you agree to the above terms.\n", "extra_gated_fields": {"I agree to these terms": "checkbox", "Name": "text", "Email": "text"}}
- disabled: false
- gated: auto
- lastModified: 2026-01-25T09:25:23
- likes: 58
- trendingScore: 56
- private: false
- sha: 580083cd4f33b145d5ffdc57265915128e541ffe
- description: FOMO300K: Brain MRI Dataset for Large-Scale Self-Supervised Learning with Clinical Data. Dataset paper preprint: "A large-scale heterogeneous 3D magnetic resonance brain imaging dataset for self-supervised learning", https://arxiv.org/pdf/2506.14432v2. Description: FOMO-300K is a large-scale dataset of brain MRI scans, including both clinical and research-grade scans. The dataset includes a wide range of sequences, including T1, MPRAGE, T2, T2*, FLAIR, SWI, T1c, PD, DWI… See the full description on the dataset page: https://huggingface.co/datasets/FOMO-MRI/FOMO300K.
- downloads: 10,967
- downloadsAllTime: 10,967
- tags: task_categories:image-feature-extraction, task_categories:zero-shot-classification, license:other, size_categories:100K<n<1M, modality:3d, modality:image, arxiv:2506.14432, region:us, brain, mri, ssl, foundation_model, 3d, image
- createdAt: 2026-01-13T08:42:10
- paperswithcode_id: null
- citation: null
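Because the dataset is gated (`auto`) and its cardData sets `viewer: false`, a plain `load_dataset` call may not work; fetching the repository files after accepting the terms is one option. A sketch, assuming you are logged in as the account that accepted the gate (`local_dir` is an arbitrary choice):

    from huggingface_hub import snapshot_download

    # Access is auto-granted once the terms are accepted on the dataset page;
    # the accepting account's token must be configured (huggingface-cli login).
    snapshot_download(
        repo_id="FOMO-MRI/FOMO300K",
        repo_type="dataset",
        local_dir="./FOMO300K",
    )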
**UniParser/OmniScience**
- _id: 695df55a4e351abe5277cca5
- author: UniParser
- cardData: {"license": "cc-by-nc-sa-4.0", "task_categories": ["image-to-text"], "extra_gated_heading": "Request Access to This Dataset", "extra_gated_description": "Please complete the required fields below to request access. Access will be automatically granted upon submission.", "extra_gated_fields": {"Full Name": {"type": "text"}, "Email": {"type": "text"}, "Affiliation (Company / University)": {"type": "text"}, "I agree this dataset is for non-commercial use ONLY": {"type": "checkbox"}}, "extra_gated_button_content": "Submit Access Request"}
- disabled: false
- gated: auto
- lastModified: 2026-01-22T02:55:43
- likes: 91
- trendingScore: 53
- private: false
- sha: 9c9fdac9ea87b36e3889330463cd4aee2e81ce95
- description: OmniScience: A Large-scale Dataset for Scientific Image Understanding. 🚀 2026-01-21: The OmniScience dataset ranked Top 8 on Hugging Face Datasets Trending (Top 1 in the Image Caption field). 🚀 2026-01-17: The OmniScience dataset surpassed 5,000 downloads within 5 days of its release. 🚀 2026-01-12: Official release of the OmniScience dataset. 🚀 2025-06-01: Completion of the original dataset collection. 📘 Dataset Summary: OmniScience is an ultra-large-scale… See the full description on the dataset page: https://huggingface.co/datasets/UniParser/OmniScience.
- downloads: 8,271
- downloadsAllTime: 8,278
- tags: task_categories:image-to-text, license:cc-by-nc-sa-4.0, size_categories:1M<n<10M, format:parquet, format:optimized-parquet, modality:image, modality:text, library:datasets, library:dask, library:polars, library:mlcroissant, arxiv:2512.15098, region:us
- createdAt: 2026-01-07T05:55:38
- paperswithcode_id: null
- citation: null
**moonworks/lunara-aesthetic**
- _id: 69645867fd167898fdec27e6
- author: moonworks
- cardData: {"license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2953317713, "num_examples": 2000}], "download_size": 2970387971, "dataset_size": 2953317713}, "task_categories": ["text-to-image"], "tags": ["art"], "size_categories": ["1K<n<10K"]}
- disabled: false
- gated: False
- lastModified: 2026-01-22T08:40:29
- likes: 62
- trendingScore: 53
- private: false
- sha: fcf45a62e226560ae63e60eb01c4d40372457965
- description: Dataset Card for Moonworks Lunara Aesthetic Dataset. Dataset Summary: paper: https://arxiv.org/abs/2601.07941. The Lunara Aesthetic Dataset is a curated collection of 2,000 high-quality image–prompt pairs designed for controlled research on prompt grounding, style conditioning, and aesthetic alignment in text-to-image generation. All images are generated using the Moonworks Lunara, a sub-10B parameter… See the full description on the dataset page: https://huggingface.co/datasets/moonworks/lunara-aesthetic.
- downloads: 3,149
- downloadsAllTime: 3,149
- tags: task_categories:text-to-image, license:apache-2.0, size_categories:1K<n<10K, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:polars, library:mlcroissant, arxiv:2601.07941, region:us, art
- createdAt: 2026-01-12T02:11:51
- paperswithcode_id: null
- citation: null
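The dataset_info above gives `image`/`prompt`/`region`/`category`/`topic` features in a single `train` split. A minimal sketch for sampling one pair without downloading all ~3 GB of parquet shards:

    from datasets import load_dataset

    # Stream to avoid fetching the full 2,000-image split up front.
    ds = load_dataset("moonworks/lunara-aesthetic", split="train", streaming=True)
    example = next(iter(ds))
    print(example["prompt"], "|", example["region"], "|", example["category"])
    example["image"].save("sample.png")  # the image feature decodes to a PIL image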
**opendatalab/ChartVerse-SFT-1800K**
- _id: 696ddc1ba806b4bfbcfc0224
- author: opendatalab
- cardData: {"license": "apache-2.0", "language": ["en"], "task_categories": ["visual-question-answering", "image-text-to-text"], "tags": ["chart", "reasoning", "vision-language", "multimodal", "chart-understanding", "CoT", "SFT", "large-scale"], "size_categories": ["1M<n<10M"]}
- disabled: false
- gated: False
- lastModified: 2026-01-23T03:19:13
- likes: 54
- trendingScore: 51
- private: false
- sha: 2bd884772dcfa191a705202ccaa008d55d646309
- description: ChartVerse-SFT-1800K is an extended large-scale chart reasoning dataset with Chain-of-Thought (CoT) annotations, developed as part of the opendatalab/ChartVerse project. For more details about our method, datasets, and full model series, please visit our Project Page. This dataset contains all verified correct samples without failure-rate filtering. Unlike SFT-600K, which excludes easy samples (r=0), SFT-1800K includes the complete set of truth-anchored QA pairs for maximum coverage and scale.… See the full description on the dataset page: https://huggingface.co/datasets/opendatalab/ChartVerse-SFT-1800K.
- downloads: 1,346
- downloadsAllTime: 1,346
- tags: task_categories:visual-question-answering, task_categories:image-text-to-text, language:en, license:apache-2.0, size_categories:1M<n<10M, format:parquet, modality:image, modality:text, library:datasets, library:dask, library:polars, library:mlcroissant, arxiv:2601.13606, region:us, chart, reasoning, vision-language, multimodal, chart-understanding, CoT, SFT, large-scale
- createdAt: 2026-01-19T07:24:11
- paperswithcode_id: null
- citation: null
**rootsautomation/pubmed-ocr**
- _id: 696a53dfe8359277ca69b28a
- author: rootsautomation
- cardData: {"language": ["en"], "license": "other", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-text", "image-text-to-text"], "pretty_name": "PubMed-OCR", "arxiv": 2601.11425, "dataset_info": {"features": [{"name": "basename", "dtype": "string"}, {"name": "page", "dtype": "int32"}, {"name": "license", "dtype": "string"}, {"name": "pmid", "dtype": "string"}, {"name": "accession_id", "dtype": "string"}, {"name": "article_citation", "dtype": "string"}, {"name": "pdf_bytes", "dtype": "binary"}, {"name": "ocr_json", "dtype": "string"}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train-*.parquet"}]}], "license_name": "pubmed-ocr-multiple-cc-licenses", "tags": ["biology", "medical", "ocr", "multimodal"]}
- disabled: false
- gated: False
- lastModified: 2026-01-22T19:58:29
- likes: 45
- trendingScore: 44
- private: false
- sha: d03682f1b9e4d1c2a4d48657063cc467a464363d
- description: PubMed-OCR: PMC Open Access OCR Annotations. PubMed-OCR is an OCR-centric corpus of scientific articles derived from PubMed Central Open Access PDFs. Each page is rendered to an image and annotated with Google Cloud Vision OCR, released in a compact JSON schema with word-, line-, and paragraph-level bounding boxes. Scale (release): 209.5K articles, ~1.5M pages, ~1.3B words (OCR tokens). This dataset is intended to support layout-aware modeling, coordinate-grounded QA, and evaluation… See the full description on the dataset page: https://huggingface.co/datasets/rootsautomation/pubmed-ocr.
- downloads: 566
- downloadsAllTime: 566
- tags: task_categories:image-to-text, task_categories:image-text-to-text, language:en, license:other, size_categories:1M<n<10M, format:parquet, modality:text, library:datasets, library:dask, library:polars, library:mlcroissant, arxiv:2601.11425, region:us, biology, medical, ocr, multimodal
- createdAt: 2026-01-16T15:06:07
- paperswithcode_id: null
- citation: null
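Each row above carries a page's PDF as `pdf_bytes` plus a Cloud Vision OCR payload in `ocr_json`. A minimal sketch for pulling one page and decoding its boxes (field names come from the features list; the exact OCR JSON schema is described on the card):

    import json
    from datasets import load_dataset

    ds = load_dataset("rootsautomation/pubmed-ocr", split="train", streaming=True)
    row = next(iter(ds))

    # Presumably one single-page PDF per record.
    with open(f"{row['basename']}_p{row['page']}.pdf", "wb") as f:
        f.write(row["pdf_bytes"])
    ocr = json.loads(row["ocr_json"])  # word-, line-, and paragraph-level boxes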
**Anthropic/EconomicIndex**
- _id: 67a404bc8c6d42c5ec097433
- author: Anthropic
- cardData: {"language": "en", "pretty_name": "EconomicIndex", "tags": ["AI", "LLM", "Economic Impacts", "Anthropic"], "viewer": true, "license": "mit", "configs": [{"config_name": "release_2026_01_15", "data_files": [{"split": "raw_claude_ai", "path": "release_2026_01_15/data/intermediate/aei_raw_claude_ai_2025-11-13_to_2025-11-20.csv"}, {"split": "raw_1p_api", "path": "release_2025_09_15/data/intermediate/aei_raw_1p_api_2025-11-13_to_2025-11-20.csv"}]}]}
- disabled: false
- gated: False
- lastModified: 2026-01-15T23:52:53
- likes: 430
- trendingScore: 43
- private: false
- sha: f7f2edfbbcf28329dd621fc8e3cc83d0d99b72eb
- description: The Anthropic Economic Index. Overview: The Anthropic Economic Index provides insights into how AI is being incorporated into real-world tasks across the modern economy. Data Releases: This repository contains multiple data releases, each with its own documentation. 2026-01-15 Release: updated analysis with economic primitives and Sonnet 4.5. 2025-09-15 Release: updated analysis with geographic and first-party API data using Sonnet 4. 2025-03-27 Release: updated… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/EconomicIndex.
- downloads: 6,109
- downloadsAllTime: 40,089
- tags: language:en, license:mit, arxiv:2503.04761, region:us, AI, LLM, Economic Impacts, Anthropic
- createdAt: 2025-02-06T00:39:24
- paperswithcode_id: null
- citation: null
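The configs block above maps the `release_2026_01_15` config to two CSV-backed splits. A minimal loading sketch (config and split names copied from the cardData; note that the `raw_1p_api` path points into the 2025-09-15 release directory):

    from datasets import load_dataset

    aei = load_dataset(
        "Anthropic/EconomicIndex",
        "release_2026_01_15",
        split="raw_claude_ai",
    )
    print(aei.column_names)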
Changelog

**Changes (July 25th)**
- Added a `baseModels` field to the `models` split, which lists the models the user tagged as base models for that model. Example:

      {
        "models": [
          {
            "_id": "687de260234339fed21e768a",
            "id": "Qwen/Qwen3-235B-A22B-Instruct-2507"
          }
        ],
        "relation": "quantized"
      }

**Changes (July 9th)**
- Fixed an integer overflow in the `gguf` column that had broken the import pipeline for a few weeks ✅

**Changes (Feb 27th)**
- Added new fields on the `models` split: `downloadsAllTime`, `safetensors`, `gguf`
- Added a new field on the `datasets` split: `downloadsAllTime`
- Added a new split, `papers`, containing all of the Daily Papers

Updated daily.
- Downloads last month: 4,062
- Size of downloaded dataset files: 1.76 GB
- Size of the auto-converted Parquet files: 1.76 GB
- Number of rows: 4,239,461