---
license: cc-by-sa-4.0
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: string
- name: task
dtype: string
- name: type
dtype: string
- name: instruction
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: evidence
dtype: string
splits:
- name: test
num_bytes: 1966003072
num_examples: 1934
download_size: 660289122
dataset_size: 1966003072
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# LooGLE v2

**The official repository of "LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?"**

**NeurIPS 2025 Datasets & Benchmarks Track**
---
## Overview
LooGLE v2 is a comprehensive benchmark designed to evaluate large language models on their ability to understand and process long-context documents with complex dependencies. The benchmark covers diverse domains including **Finance**, **Law**, **Code**, and **Game**.
---
## Quick Start
### Installation
```bash
# Create and activate an environment with Python 3.10
conda create -n loogle-v2 python=3.10
conda activate loogle-v2

# Install dependencies
pip install -r requirements.txt

# Install FlashAttention
pip install flash-attn==2.6.3 --no-build-isolation

# Or install from a prebuilt wheel, e.g. flash_attn-2.6.3-cp310-cp310-linux_x86_64.whl
pip install flash_attn-2.6.3-cp310-cp310-linux_x86_64.whl
```
---
## Dataset
Download the LooGLE v2 dataset from Hugging Face:
```bash
# Clone with git (requires git-lfs for the data files)
git clone https://huggingface.co/datasets/MuLabPKU/LooGLE-v2 ./datasets/LooGLE-v2
# Or use the Hugging Face CLI to download:
hf download MuLabPKU/LooGLE-v2 --repo-type dataset --local-dir ./datasets/LooGLE-v2
```
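Once downloaded (or directly from the Hub), the test split can be loaded with the `datasets` library. A minimal sketch, assuming `pip install datasets`; the field names follow the schema in the dataset card header:

```python
from datasets import load_dataset

# Load the test split straight from the Hub (or pass the local clone path)
ds = load_dataset("MuLabPKU/LooGLE-v2", split="test")

print(len(ds))  # 1934 examples
sample = ds[0]
print(sample["source"], sample["task"])  # domain and task name
print(sample["question"][:200])          # pairs with `context`, `options`, `answer`
```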
---
## Usage
### Configuration
**vLLM server (for `predict.py`):**
```bash
python -m vllm.entrypoints.openai.api_server \
--model path/to/your/model \
--port 8000 \
--max-model-len 131072
```
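Before launching predictions, you can sanity-check the server with any OpenAI-compatible client. A minimal sketch, assuming `pip install openai`; the port, model path, and API key are the placeholder values from the command and config shown here:

```python
from openai import OpenAI

# Point the client at the vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="your-api-key")

resp = client.chat.completions.create(
    model="path/to/your/model",  # must match the --model the server was given
    messages=[{"role": "user", "content": "Reply with OK."}],
    max_tokens=8,
)
print(resp.choices[0].message.content)
```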
**Model entry (`config/models.jsonl`, shared by both scripts):**
```json
{
"name": "your-model-name",
"model": "path/to/model",
"max_len": 131072,
"base_url": "http://localhost:8000/v1",
"api_key": "your-api-key"
}
```
Transformers mode (`predict_transformers.py`) does not need a server; it still reads `name`, `model`, and `max_len` from this config. When using the server route, make sure `base_url` matches your vLLM port.
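Since the file holds one JSON object per line, looking up an entry by `name` is straightforward. A hedged sketch (the actual loader in `src/` may differ):

```python
import json

def load_model_config(name: str, path: str = "config/models.jsonl") -> dict:
    """Return the first entry in the JSONL config whose `name` matches."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                entry = json.loads(line)
                if entry["name"] == name:
                    return entry
    raise KeyError(f"model {name!r} not found in {path}")

cfg = load_model_config("your-model-name")
print(cfg["base_url"], cfg["max_len"])
```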
### Pre-compute RAG Contexts (optional)
If you plan to run with `--use_rag`, first generate the `context_rag` field with the preprocessor:
```bash
python rag_preprocess.py \
--input_path ./datasets/LooGLE-v2 \
--split test \
--output_path ./datasets/LooGLE-v2/test_rag.jsonl \
--embedding_model THUDM/LongCite-glm4-9b \
--devices 0,1
```
For multi-turn refinement (using a generator model to iteratively improve retrieval queries):
```bash
python rag_preprocess.py \
--input_path ./datasets/LooGLE-v2 \
--split test \
--output_path ./datasets/LooGLE-v2/test_rag_multi.jsonl \
--embedding_model THUDM/LongCite-glm4-9b \
--generator_model meta-llama/Llama-3.1-8B \
--multi_turn --devices 0,1
```
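The preprocessor writes one JSON object per line, pairing each sample `id` with its retrieved `context_rag` (see the `--rag_context` flag below). A quick, hedged way to inspect the output, assuming the default path:

```python
import json

# Peek at the first precomputed RAG record
with open("./datasets/LooGLE-v2/test_rag.jsonl") as f:
    record = json.loads(next(f))

print(record["id"])
print(record["context_rag"][:300])  # retrieved chunks injected at prediction time
```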
### Running Predictions
#### Option A: vLLM server (`predict.py`)
```bash
python predict.py \
--model your-model-name \
--data_dir ./datasets/LooGLE-v2 \
--save_dir ./results \
--max_new_tokens 512
```
#### Option B: Transformers local (`predict_transformers.py`)
```bash
python predict_transformers.py \
--model your-model-name \
--data_dir ./datasets/LooGLE-v2 \
--save_dir ./results \
--max_new_tokens 512
```
Optional prompting flags (both scripts):
- `--use_cot` for Chain-of-Thought
- `--use_rag --rag_topk <k> --rag_context <path>` to inject the precomputed `context_rag` (default file: `./datasets/LooGLE-v2/test_rag.jsonl`)
**Core parameters (both options)**

| Flag | Purpose |
|------|---------|
| `--model` | Must match `config/models.jsonl` name |
| `--data_dir` | Dataset path (JSONL file or Hugging Face dataset directory) |
| `--save_dir` | Output directory |
| `--with_context` | Include the original document context (1) or omit it (0) |
| `--n_proc` | Parallel processes |
| `--max_new_tokens` | Generation length |
| `--use_cot` | Enable Chain-of-Thought |
| `--use_rag` | Use retrieved context |
| `--rag_topk` | How many retrieved chunks to keep |
| `--rag_context` | Path to `id + context_rag` jsonl |
**Transformers-only flags**

| Flag | Purpose |
|------|---------|
| `--device` | Target device (cuda/cpu, auto by default) |
| `--load_in_8bit` | 8-bit quantization (needs bitsandbytes) |
| `--load_in_4bit` | 4-bit quantization (needs bitsandbytes) |
| `--torch_dtype` | Weight dtype: float16/bfloat16/float32 |
> **Tip:** install `bitsandbytes` to enable quantization: `pip install bitsandbytes`
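For reference, the quantization flags correspond to the standard `transformers` + `bitsandbytes` path. An illustrative sketch of what `--load_in_4bit` amounts to (the model path is a placeholder; the scripts' exact loading code may differ):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantized weights with bfloat16 compute
quant = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "path/to/model",
    quantization_config=quant,
    device_map="auto",
)
```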
### Evaluation
After prediction, evaluate the results:
```bash
python evaluate.py --input_path ./results/your-model-name.jsonl
```
This outputs per-task accuracy for each domain and overall accuracy.
For batch evaluation (e.g., multiple runs with CoT/RAG or no-context variants):
```bash
python evaluate.py --input_path ./results --batch --output_json ./results/summary.json
```
This scans a folder for `.jsonl` files, reports each file's accuracy, and optionally saves a summary.
---
## Project Structure
```
LooGLE-v2/
├── src/
│   ├── answer_extractor.py      # Answer extraction logic
│   ├── evaluator.py             # Evaluation metrics
│   ├── llm_client.py            # LLM client implementations
│   ├── data_loader.py           # Data loading utilities
│   └── utils.py                 # Common utilities
├── config/
│   └── models.jsonl             # Model configurations
├── predict.py                   # Prediction script (vLLM server)
├── predict_transformers.py      # Prediction script (direct transformers)
├── rag_preprocess.py            # RAG context preprocessing
├── evaluate.py                  # Evaluation script
└── requirements.txt             # Dependencies
```
---
## Results Format
Prediction outputs are saved in JSONL format:
```json
{
"id": "sample_id",
"source": "Finance",
"task": "Metric Calculation",
"type": "question_type",
"correct_answer": "123.45",
"pred_answer": "123.40",
"response": "The correct answer is 123.40",
"judge": true
}
```
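Since `judge` records whether a prediction was scored correct, per-task accuracy can be recomputed directly from a results file. A hedged sketch (the official `evaluate.py` may group results and extract answers differently):

```python
import json
from collections import defaultdict

correct, total = defaultdict(int), defaultdict(int)
with open("./results/your-model-name.jsonl") as f:
    for line in f:
        r = json.loads(line)
        key = (r["source"], r["task"])
        total[key] += 1
        correct[key] += int(bool(r["judge"]))

for key in sorted(total):
    print(*key, f"{correct[key] / total[key]:.3f}")
print("overall:", f"{sum(correct.values()) / sum(total.values()):.3f}")
```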
---
## Citation
If you use LooGLE v2 in your research, please cite:
```bibtex
@article{he2025loogle,
title={LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?},
author={He, Ziyuan and Wang, Yuxuan and Li, Jiaqi and Liang, Kexin and Zhang, Muhan},
journal={arXiv preprint arXiv:2510.22548},
year={2025}
}
```
---
## License
The code in this repository is licensed under the MIT License (see the [LICENSE](LICENSE) file for details); the dataset itself is distributed under CC BY-SA 4.0, as declared in the dataset card metadata above.
---