---
license: cc-by-4.0
task_categories:
  - text-generation
  - summarization
language:
  - code
tags:
  - code
  - documentation
  - docstring
  - code-to-text
  - python
  - java
  - javascript
  - typescript
  - cpp
size_categories:
  - 10K<n<100K
---

# Code2Doc: Function-Documentation Pairs Dataset

A curated dataset of 13,358 high-quality function-documentation pairs extracted from popular open-source repositories on GitHub. Designed for training models to generate documentation from code.

## Dataset Description

This dataset pairs functions with their docstrings or documentation comments across five programming languages, extracted from well-maintained, highly-starred GitHub repositories.

### Language Distribution

| Language | Train | Val | Test | Total |
|------------|---------------:|------:|------:|-------:|
| Java | 6,560 (61.4%) | 820 | 820 | 8,200 |
| Python | 2,885 (27.0%) | 360 | 362 | 3,607 |
| TypeScript | 681 (6.4%) | 85 | 86 | 852 |
| JavaScript | 428 (4.0%) | 53 | 55 | 536 |
| C++ | 130 (1.2%) | 16 | 17 | 163 |
| **Total** | 10,684 | 1,334 | 1,340 | 13,358 |

### Source Repositories

The data was extracted from high-quality open-source projects including:

- **Python**: Django, PyTorch, Pandas, NumPy, scikit-learn, FastAPI, Flask, Celery, Airflow, Requests
- **Java**: Guava, Elasticsearch, Spring Framework, Spring Boot, Apache Kafka, Commons-Lang
- **TypeScript**: TypeScript, VS Code, Angular, Prisma, Grafana, Storybook, NestJS
- **JavaScript**: React, Node.js, Lodash, Axios, Express
- **C++**: OpenCV, Protobuf, Folly, gRPC, LLVM, TensorFlow

## Dataset Structure

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `function_name` | string | Name of the function/method |
| `function_code` | string | Complete source code of the function |
| `documentation` | string | Extracted docstring/documentation |
| `language` | string | Programming language |
| `file_path` | string | Original file path in the repository |
| `line_number` | int | Line number where the function starts |
| `parameters` | list[string] | List of parameter names |
| `return_type` | string | Return type annotation (if available) |
| `has_type_hints` | bool | Whether the function has type annotations |
| `complexity` | int | Cyclomatic complexity score |
| `quality_score` | float | Documentation quality score (0-10) |
| `repo_name` | string | Source repository (owner/repo) |
| `repo_stars` | int | Repository star count at extraction time |
| `docstring_style` | string | Documentation style (google, numpy, sphinx, jsdoc, javadoc, doxygen) |
| `is_async` | bool | Whether the function is async |
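
The metadata fields above make it easy to carve out stricter subsets. A minimal sketch (the 7.0 cutoff is an arbitrary assumption, not a recommended threshold):

```python
# Select well-documented, type-annotated functions via the metadata fields.
# The quality_score cutoff of 7.0 is illustrative only.
def is_high_quality(example: dict) -> bool:
    return example["quality_score"] >= 7.0 and example["has_type_hints"]

# With a loaded split this would be: dataset["train"].filter(is_high_quality)
samples = [
    {"quality_score": 8.2, "has_type_hints": True},
    {"quality_score": 5.1, "has_type_hints": True},
    {"quality_score": 9.0, "has_type_hints": False},
]
strict = [s for s in samples if is_high_quality(s)]
```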

### Data Splits

- **Train**: 10,684 samples (80%)
- **Validation**: 1,334 samples (10%)
- **Test**: 1,340 samples (10%)

Splits are stratified by language to maintain consistent distribution across sets.
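
The stratification idea can be sketched with scikit-learn; the labels and ratios below are illustrative toy data, not the actual split code:

```python
# Stratified 80/10/10 split: carve out 80% train, then split the remainder
# evenly into val/test, stratifying on language at each step.
from sklearn.model_selection import train_test_split

samples = [{"id": i, "language": lang}
           for i, lang in enumerate(["java"] * 80 + ["python"] * 20)]
labels = [s["language"] for s in samples]

train, rest = train_test_split(
    samples, test_size=0.2, stratify=labels, random_state=0)
rest_labels = [s["language"] for s in rest]
val, test = train_test_split(
    rest, test_size=0.5, stratify=rest_labels, random_state=0)
```

Because both splits stratify on the language label, each of the three sets keeps the same 80/20 Java/Python mix as the full toy corpus.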

## Data Processing Pipeline

The dataset was created through a multi-stage pipeline:

1. **Extraction**: Used tree-sitter parsers to accurately extract functions with documentation
2. **Basic Filtering**: Removed test functions, trivial functions, and applied length constraints
3. **Quality Scoring**: Scored documentation completeness (parameters, returns, examples)
4. **Deduplication**: Removed exact and near-duplicates using MinHash LSH
5. **AI Detection**: Filtered potentially AI-generated documentation
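
The deduplication step can be sketched without external dependencies. The real pipeline used MinHash LSH (typically via a library such as `datasketch`); this pairwise version only illustrates the signature idea, and its tokenization and threshold are assumptions:

```python
# Dependency-free MinHash sketch: near-duplicate functions share most of
# their per-seed minimum hashes, so signature agreement estimates Jaccard
# similarity of their token sets.
import hashlib

def minhash_signature(code: str, num_perm: int = 64) -> tuple:
    tokens = set(code.split())  # crude whitespace shingles
    return tuple(
        min(int.from_bytes(
                hashlib.blake2b(f"{seed}:{t}".encode(), digest_size=8).digest(),
                "big")
            for t in tokens)
        for seed in range(num_perm)
    )

def estimated_jaccard(sig_a: tuple, sig_b: tuple) -> float:
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def deduplicate(functions: list, threshold: float = 0.9) -> list:
    kept, sigs = [], []
    for code in functions:
        sig = minhash_signature(code)
        if all(estimated_jaccard(sig, s) < threshold for s in sigs):
            kept.append(code)
            sigs.append(sig)
    return kept
```

An LSH index (as in the actual pipeline) avoids this version's O(n²) pairwise comparison by bucketing signatures into bands so only likely matches are compared.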

### Quality Criteria

- Minimum documentation length: 20 characters
- Maximum documentation length: 10,000 characters
- Minimum code length: 50 characters
- Excluded test functions and trivial getters/setters
- Required meaningful documentation structure

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("kaanrkaraman/code2doc")

# Access splits
train_data = dataset["train"]
val_data = dataset["val"]
test_data = dataset["test"]

# Example: get a Python function
python_samples = train_data.filter(lambda x: x["language"] == "python")
sample = python_samples[0]

print(f"Function: {sample['function_name']}")
print(f"Code:\n{sample['function_code']}")
print(f"Documentation:\n{sample['documentation']}")
```

### For Fine-tuning

```python
def format_for_training(example):
    return {
        "input": f"Generate documentation for the following {example['language']} function:\n\n{example['function_code']}",
        "output": example["documentation"]
    }

formatted_dataset = dataset.map(format_for_training)
```
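
For chat-style instruction tuning, the same pair can be reshaped into a messages record; the role/content schema below is a common chat-template convention, not something the dataset prescribes:

```python
# Reshape a sample into a user/assistant message pair for chat-format SFT.
def format_as_messages(example: dict) -> dict:
    prompt = (
        "Generate documentation for the following "
        f"{example['language']} function:\n\n{example['function_code']}"
    )
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": example["documentation"]},
        ]
    }
```

As with `format_for_training`, this can be applied with `dataset.map(format_as_messages)`.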

## Intended Use

- Training code documentation generation models
- Fine-tuning LLMs for code-to-text tasks
- Evaluating documentation quality metrics
- Research on code understanding and generation

## Limitations

- Heavily weighted towards Java due to verbose documentation practices
- C++ representation is small due to different documentation conventions
- Documentation quality varies by repository coding standards
- Extracted from a specific snapshot in time (December 2025)

## Citation

```bibtex
@misc{recep_kaan_karaman_2025,
  author    = {Recep Kaan Karaman and Meftun Akarsu},
  title     = {code2doc (Revision cadd4e4)},
  year      = 2025,
  url       = {https://huggingface.co/datasets/kaanrkaraman/code2doc},
  doi       = {10.57967/hf/7310},
  publisher = {Hugging Face}
}
```

## License

This dataset is released under the CC BY 4.0 license. The source code comes from repositories with permissive licenses (MIT, Apache 2.0, BSD).