---
license: cc-by-4.0
task_categories:
- text-generation
- summarization
language:
- code
tags:
- code
- documentation
- docstring
- code-to-text
- python
- java
- javascript
- typescript
- cpp
size_categories:
- 10K<n<100K
---
# Code2Doc: Function-Documentation Pairs Dataset
A curated dataset of **13,358** high-quality function-documentation pairs extracted from popular open-source repositories on GitHub. Designed for training models to generate documentation from code.
## Dataset Description
This dataset contains functions paired with their docstrings/documentation comments from 5 programming languages, extracted from well-maintained, highly-starred GitHub repositories.
### Languages Distribution
| Language | Train | Val | Test | Total |
|----------|-------|-----|------|-------|
| Java | 6,560 (61.4%) | 820 | 820 | 8,200 |
| Python | 2,885 (27.0%) | 360 | 362 | 3,607 |
| TypeScript | 681 (6.4%) | 85 | 86 | 852 |
| JavaScript | 428 (4.0%) | 53 | 55 | 536 |
| C++ | 130 (1.2%) | 16 | 17 | 163 |
| **Total** | **10,684** | **1,334** | **1,340** | **13,358** |
### Source Repositories
The data was extracted from high-quality open-source projects including:
**Python:** Django, PyTorch, Pandas, NumPy, scikit-learn, FastAPI, Flask, Celery, Airflow, Requests
**Java:** Guava, Elasticsearch, Spring Framework, Spring Boot, Apache Kafka, Commons-Lang
**TypeScript:** TypeScript, VS Code, Angular, Prisma, Grafana, Storybook, NestJS
**JavaScript:** React, Node.js, Lodash, Axios, Express
**C++:** OpenCV, Protobuf, Folly, gRPC, LLVM, TensorFlow
## Dataset Structure
### Data Fields
| Field | Type | Description |
|-------|------|-------------|
| `function_name` | string | Name of the function/method |
| `function_code` | string | Complete source code of the function |
| `documentation` | string | Extracted docstring/documentation |
| `language` | string | Programming language |
| `file_path` | string | Original file path in repository |
| `line_number` | int | Line number where function starts |
| `parameters` | list[string] | List of parameter names |
| `return_type` | string | Return type annotation (if available) |
| `has_type_hints` | bool | Whether function has type annotations |
| `complexity` | int | Cyclomatic complexity score |
| `quality_score` | float | Documentation quality score (0-10) |
| `repo_name` | string | Source repository (owner/repo) |
| `repo_stars` | int | Repository star count at extraction time |
| `docstring_style` | string | Documentation style (google, numpy, sphinx, jsdoc, javadoc, doxygen) |
| `is_async` | bool | Whether function is async |
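For reference, a single record has the shape below. The field names match the table above; the values are made-up placeholders, not taken from the dataset:

```python
# Illustrative record shape; values are hypothetical placeholders.
example_record = {
    "function_name": "parse_config",
    "function_code": "def parse_config(path):\n    ...",
    "documentation": "Parse a configuration file and return a dict.",
    "language": "python",
    "file_path": "src/config.py",
    "line_number": 42,
    "parameters": ["path"],
    "return_type": "dict",
    "has_type_hints": False,
    "complexity": 3,
    "quality_score": 7.5,
    "repo_name": "example/repo",
    "repo_stars": 12345,
    "docstring_style": "google",
    "is_async": False,
}
```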
### Data Splits
- **Train:** 10,684 samples (80%)
- **Validation:** 1,334 samples (10%)
- **Test:** 1,340 samples (10%)
Splits are stratified by language to maintain consistent distribution across sets.
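Stratification here means splitting each language's samples 80/10/10 independently, so every split keeps the overall language distribution. A minimal sketch of the idea (not the pipeline's actual code):

```python
import random
from collections import defaultdict

def stratified_split(samples, seed=0, ratios=(0.8, 0.1, 0.1)):
    """Split samples per language so each split preserves the
    overall language distribution (sketch of the stratification idea)."""
    by_lang = defaultdict(list)
    for s in samples:
        by_lang[s["language"]].append(s)
    rng = random.Random(seed)
    train, val, test = [], [], []
    for items in by_lang.values():
        rng.shuffle(items)
        n_train = int(len(items) * ratios[0])
        n_val = int(len(items) * ratios[1])
        train += items[:n_train]
        val += items[n_train:n_train + n_val]
        test += items[n_train + n_val:]
    return train, val, test

# Toy usage with hypothetical records
samples = [{"language": "python"}] * 10 + [{"language": "java"}] * 20
tr, va, te = stratified_split(samples)
```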
## Data Processing Pipeline
The dataset was created through a multi-stage pipeline:
1. **Extraction:** Used tree-sitter parsers to accurately extract functions with documentation
2. **Basic Filtering:** Removed test functions, trivial functions, and applied length constraints
3. **Quality Scoring:** Scored documentation completeness (parameters, returns, examples)
4. **Deduplication:** Removed exact and near-duplicates using MinHash LSH
5. **AI Detection:** Filtered potentially AI-generated documentation
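The near-duplicate step rests on Jaccard similarity over code shingles; MinHash LSH is what makes that comparison scale past a pairwise scan. A toy sketch of the underlying idea (exact thresholds and shingle size are assumptions, not the pipeline's settings):

```python
def shingles(code, k=5):
    """Character k-grams of whitespace-normalized code."""
    text = " ".join(code.split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def dedup(functions, threshold=0.85):
    """Keep each function unless it is near-identical to one already kept.
    In the real pipeline, MinHash LSH replaces this O(n^2) pairwise scan."""
    kept, kept_shingles = [], []
    for fn in functions:
        sh = shingles(fn)
        if all(jaccard(sh, prev) < threshold for prev in kept_shingles):
            kept.append(fn)
            kept_shingles.append(sh)
    return kept

# The second function differs only in whitespace, so it is dropped.
kept = dedup([
    "def f(x): return x + 1",
    "def f(x):  return x + 1",
    "def g(y): return y * 2",
])
```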
### Quality Criteria
- Minimum documentation length: 20 characters
- Maximum documentation length: 10,000 characters
- Minimum code length: 50 characters
- Excluded test functions and trivial getters/setters
- Required meaningful documentation structure
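A minimal predicate implementing the stated length constraints might look like this; the name-based heuristics for tests and getters/setters are simplified assumptions, not the pipeline's exact rules:

```python
def passes_basic_filters(function_name, function_code, documentation):
    """Apply the documented length constraints, plus simplified
    name-based heuristics for tests and trivial getters/setters."""
    if not (20 <= len(documentation) <= 10_000):
        return False
    if len(function_code) < 50:
        return False
    # Assumed heuristic: prefix-based detection of excluded functions.
    if function_name.lower().startswith(("test", "get_", "set_")):
        return False
    return True
```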
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("kaanrkaraman/code2doc")
# Access splits
train_data = dataset["train"]
val_data = dataset["val"]
test_data = dataset["test"]
# Example: Get a Python function
python_samples = train_data.filter(lambda x: x["language"] == "python")
sample = python_samples[0]
print(f"Function: {sample['function_name']}")
print(f"Code:\n{sample['function_code']}")
print(f"Documentation:\n{sample['documentation']}")
```
### For Fine-tuning
```python
def format_for_training(example):
    return {
        "input": f"Generate documentation for the following {example['language']} function:\n\n{example['function_code']}",
        "output": example["documentation"],
    }

formatted_dataset = dataset.map(format_for_training)
```
## Intended Use
- **Training code documentation generation models**
- **Fine-tuning LLMs for code-to-text tasks**
- **Evaluating documentation quality metrics**
- **Research on code understanding and generation**
## Limitations
- Heavily weighted toward Java, whose Javadoc conventions encourage verbose documentation
- C++ is underrepresented because its documentation conventions differ (e.g., comments in headers rather than at definitions)
- Documentation quality varies by repository coding standards
- Extracted from a specific snapshot in time (December 2025)
## Citation
```bibtex
@misc{recep_kaan_karaman_2025,
author = {Recep Kaan Karaman and Meftun Akarsu},
title = {code2doc (Revision cadd4e4)},
year = 2025,
url = {https://huggingface.co/datasets/kaanrkaraman/code2doc},
doi = {10.57967/hf/7310},
publisher = {Hugging Face}
}
```
## License
This dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) License. The source code comes from repositories with permissive licenses (MIT, Apache 2.0, BSD).