LOC-BENCH is a dataset specifically designed for evaluating code localization methods in software repositories.
LOC-BENCH provides a diverse set of issues, including bug reports, feature requests, security vulnerabilities, and performance optimizations.

Please refer to [**`Loc-Bench_V1`**](https://huggingface.co/datasets/czlll/Loc-Bench_V1) for evaluating code localization methods and for easy comparison with our approach.

Code: https://github.com/gersteinlab/LocAgent

## 📊 Details
This is the dataset that was used in [the early version of our paper](https://arxiv.org/abs/2503.09089).
We later released a refined version, [`czlll/Loc-Bench_V1`](https://huggingface.co/datasets/czlll/Loc-Bench_V1), with improved data quality by filtering out examples that do not modify any functions.
We **recommend** using the refined dataset to evaluate code localization performance.

The table below shows the distribution of categories in the dataset `Loc-Bench_V0.1`.
| category | count |
|:---------|:---------|
| Bug Report | 282 |
| Feature Request | 203 |
| Performance Issue | 144 |
| Security Vulnerability | 31 |
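
A tally like the one above can be reproduced with a quick count over the raw examples. Note that this is only a sketch: the `category` field name and the toy records below are assumptions for illustration, not the dataset's confirmed schema.

```python
from collections import Counter

# Toy records standing in for benchmark examples; the "category" key
# is an assumed field name mirroring the table above.
examples = [
    {"category": "Bug Report"},
    {"category": "Bug Report"},
    {"category": "Feature Request"},
    {"category": "Security Vulnerability"},
]

# Tally how many examples fall into each issue category.
counts = Counter(ex["category"] for ex in examples)
for name, n in counts.most_common():
    print(f"{name}: {n}")
```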

## 🔧 How to Use
You can easily load LOC-BENCH using Hugging Face's datasets library:
```python
from datasets import load_dataset

dataset = load_dataset("czlll/Loc-Bench_V0.1", split='test')
```

## 📄 Citation
If you use LOC-BENCH in your research, please cite our paper: