davanstrien (HF Staff) committed
Commit 56015af · verified · Parent: 15ef566

Upload README.md with huggingface_hub

Files changed (1): README.md (+93 −31)
README.md CHANGED
@@ -1,33 +1,95 @@
  ---
- dataset_info:
-   features:
-   - name: document_id
-     dtype: string
-   - name: page_number
-     dtype: string
-   - name: image
-     dtype: image
-   - name: text
-     dtype: string
-   - name: alto_xml
-     dtype: string
-   - name: has_image
-     dtype: bool
-   - name: has_alto
-     dtype: bool
-   - name: markdown
-     dtype: string
-   - name: inference_info
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 10984085
-     num_examples: 50
-   download_size: 8077067
-   dataset_size: 10984085
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
+ viewer: false
+ tags:
+ - ocr
+ - document-processing
+ - nanonets
+ - nanonets-ocr2
+ - markdown
+ - uv-script
+ - generated
  ---

# Document OCR using Nanonets-OCR2-3B

This dataset contains markdown-formatted OCR results from images in [NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset](https://huggingface.co/datasets/NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset), generated with Nanonets-OCR2-3B.

## Processing Details

- **Source Dataset**: [NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset](https://huggingface.co/datasets/NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset)
- **Model**: [nanonets/Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-3B)
- **Model Size**: 3B parameters
- **Number of Samples**: 50
- **Processing Time**: 8.7 minutes
- **Processing Date**: 2025-10-13 17:49 UTC

### Configuration

- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 16
- **Max Model Length**: 8,192 tokens
- **Max Output Tokens**: 4,096
- **GPU Memory Utilization**: 80%

## Model Information

Nanonets-OCR2-3B is a state-of-the-art document OCR model that excels at:

- 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
- 📊 **Tables** - Extracted and formatted as HTML
- 📝 **Document structure** - Headers, lists, and formatting maintained
- 🖼️ **Images** - Captions and descriptions included in `<img>` tags
- ☑️ **Forms** - Checkboxes rendered as ☐/☑
- 🔖 **Watermarks** - Wrapped in `<watermark>` tags
- 📄 **Page numbers** - Wrapped in `<page_number>` tags
- 🌍 **Multilingual** - Supports multiple languages
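
Because tables come back as inline HTML inside the markdown, they can be pulled out of the output directly. A minimal sketch, assuming the model emits plain `<table>…</table>` tags with no attributes (the exact tag format is an assumption, not documented here):

```python
import re

def extract_html_tables(markdown: str) -> list[str]:
    # Non-greedy match so each <table>...</table> block is captured separately
    return re.findall(r"<table>.*?</table>", markdown, flags=re.DOTALL)

# Hypothetical OCR output for one page
page = (
    "Some running text.\n"
    "<table><tr><td>Exports</td><td>1948</td></tr></table>\n"
    "More text."
)
tables = extract_html_tables(page)
print(len(tables))  # 1
```

The same pattern works for the other tagged elements (`<img>`, `<watermark>`, `<page_number>`) by swapping the tag name.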

## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: The extracted text in markdown format with preserved structure
- `inference_info`: JSON list tracking all OCR models applied to this dataset

## Usage

```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("{{output_dataset_id}}", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```
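
Since page numbers and watermarks are wrapped in their own tags, they are easy to drop when you only want running text. A rough sketch, assuming the tags appear exactly as `<page_number>…</page_number>` and `<watermark>…</watermark>` with no attributes:

```python
import re

def strip_layout_tags(markdown: str) -> str:
    # Remove <page_number>/<watermark> spans, including their contents
    cleaned = re.sub(r"<(page_number|watermark)>.*?</\1>", "", markdown, flags=re.DOTALL)
    return cleaned.strip()

sample = "<page_number>42</page_number>\nChapter I. Introduction."
print(strip_layout_tags(sample))  # Chapter I. Introduction.
```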

## Reproduction

This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) Nanonets OCR2 script:

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr2.py \
  NationalLibraryOfScotland/Britain-and-UK-Handbooks-Dataset \
  <output-dataset> \
  --model nanonets/Nanonets-OCR2-3B \
  --image-column image \
  --batch-size 16 \
  --max-model-len 8192 \
  --max-tokens 4096 \
  --gpu-memory-utilization 0.8
```

## Performance

- **Processing Speed**: ~0.1 images/second
- **GPU Configuration**: vLLM with 80% GPU memory utilization
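
The speed figure follows directly from the run stats reported above (50 samples in 8.7 minutes):

```python
# Sanity-check the reported throughput from the processing stats above
num_samples = 50
processing_minutes = 8.7

rate = num_samples / (processing_minutes * 60)
print(f"{rate:.2f} images/second")  # prints: 0.10 images/second
```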

Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)