xfey committed
Commit
77360ec
0 Parent(s):

Initial commit

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,85 @@
+ ---
+ license: mit
+ language:
+ - zh
+ - en
+ tags:
+ - document-parsing
+ - document-understanding
+ - document-intelligence
+ - ocr
+ - layout-analysis
+ - table-extraction
+ - multimodal
+ - vision-language-model
+ datasets:
+ - custom
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ ---
+
+
+ # Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting
+
+ <div align="center">
+ <img src="https://cdn.wandeer.world/null/dolphin_demo.gif" width="800">
+ </div>
+
+
+ ## Model Description
+
+ Dolphin (**Do**cument Image **P**arsing via **H**eterogeneous Anchor Prompt**in**g) is a novel multimodal document image parsing model that follows an analyze-then-parse paradigm. It addresses the challenges of complex document understanding through a two-stage approach designed to handle intertwined elements such as text paragraphs, figures, formulas, and tables.
+
+ ## 📑 Overview
+
+ Document image parsing is challenging due to complexly intertwined elements such as text paragraphs, figures, formulas, and tables. Dolphin addresses these challenges through a two-stage approach:
+
+ 1. **🔍 Stage 1**: Comprehensive page-level layout analysis that generates an element sequence in natural reading order
+ 2. **🧩 Stage 2**: Efficient parallel parsing of document elements using heterogeneous anchors and task-specific prompts
+
+ <div align="center">
+ <img src="https://cdn.wandeer.world/null/dolphin_framework.png" width="680">
+ </div>
+
+ Dolphin achieves promising performance across diverse page-level and element-level parsing tasks while ensuring superior efficiency through its lightweight architecture and parallel parsing mechanism.
+
+ ## Model Architecture
+
+ Dolphin is built on a vision-encoder-decoder architecture using transformers:
+
+ - **Vision Encoder**: Based on Swin Transformer for extracting visual features from document images
+ - **Text Decoder**: Based on MBart for decoding text from visual features
+ - **Prompt-based interface**: Uses natural language prompts to control parsing tasks
+
+ The model is implemented as a Hugging Face `VisionEncoderDecoderModel` for easy integration with the Transformers ecosystem.
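A minimal loading sketch in that spirit (the repository id `ByteDance/Dolphin` below is an assumption made for illustration; substitute the id of this repo if it differs):

```python
# Hedged sketch: load Dolphin as a standard VisionEncoderDecoderModel.
import torch
from transformers import AutoProcessor, VisionEncoderDecoderModel

model_id = "ByteDance/Dolphin"  # assumed repository id
processor = AutoProcessor.from_pretrained(model_id)  # resolves to DonutProcessor per preprocessor_config.json
model = VisionEncoderDecoderModel.from_pretrained(model_id, torch_dtype=torch.float16)
model = model.eval().to("cuda" if torch.cuda.is_available() else "cpu")
```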
+
+ ## Usage
+
+ Please refer to our [GitHub repository](https://github.com/bytedance/Dolphin) for detailed usage.
+
+ - [Page-wise parsing](https://github.com/bytedance/Dolphin/demo_page_hf.py): for an entire document image
+ - [Element-wise parsing](https://github.com/bytedance/Dolphin/demo_element_hf.py): for an element (paragraph, table, formula) image
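As a rough illustration of the element-wise path, the sketch below reuses `processor` and `model` from the loading sketch above and parses a single cropped element. The prompt wording and the `<s>… <Answer/>` template are assumptions for illustration; the exact prompt strings are defined in the demo scripts linked above.

```python
# Hedged sketch: element-wise parsing of one cropped element image.
from PIL import Image

image = Image.open("table_crop.png").convert("RGB")  # placeholder: a cropped table/paragraph/formula
pixel_values = processor(image, return_tensors="pt").pixel_values.to(model.device, model.dtype)

prompt = "<s>Parse the table in the image. <Answer/>"  # assumed prompt template
prompt_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids.to(model.device)

outputs = model.generate(
    pixel_values=pixel_values,
    decoder_input_ids=prompt_ids,
    max_length=model.config.decoder.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
)
print(processor.tokenizer.decode(outputs[0], skip_special_tokens=True))
```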
+
+
+ ## License
+
+ This model is released under the MIT License.
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{dolphin2025,
+ title={Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting},
+ author={Feng, Hao and Wei, Shu and Fei, Xiang and Shi, Wei and Han, Yingdong and Liao, Lei and Lu, Jinghui and Wu, Binghong and Liu, Qi and Lin, Chunhui and Tang, Jingqun and Liu, Hao and Huang, Can},
+ year={2025},
+ booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL)}
+ }
+ ```
+
+ ## Acknowledgements
+
+ This model builds on several open-source projects including:
+ - [Hugging Face Transformers](https://github.com/huggingface/transformers)
+ - [Donut](https://github.com/clovaai/donut/)
+ - [Nougat](https://github.com/facebookresearch/nougat)
+ - [Swin Transformer](https://github.com/microsoft/Swin-Transformer)
config.json ADDED
@@ -0,0 +1,188 @@
+ {
+   "architectures": [
+     "VisionEncoderDecoderModel"
+   ],
+   "decoder": {
+     "_name_or_path": "",
+     "activation_dropout": 0.0,
+     "activation_function": "gelu",
+     "add_cross_attention": true,
+     "add_final_layer_norm": true,
+     "architectures": null,
+     "attention_dropout": 0.0,
+     "bad_words_ids": null,
+     "begin_suppress_tokens": null,
+     "bos_token_id": 0,
+     "chunk_size_feed_forward": 0,
+     "classifier_dropout": 0.0,
+     "cross_attention_hidden_size": null,
+     "d_model": 1024,
+     "decoder_attention_heads": 16,
+     "decoder_ffn_dim": 4096,
+     "decoder_layerdrop": 0.0,
+     "decoder_layers": 10,
+     "decoder_start_token_id": null,
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "dropout": 0.1,
+     "early_stopping": false,
+     "encoder_attention_heads": 16,
+     "encoder_ffn_dim": 4096,
+     "encoder_layerdrop": 0.0,
+     "encoder_layers": 12,
+     "encoder_no_repeat_ngram_size": 0,
+     "eos_token_id": 2,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": 2,
+     "id2label": {
+       "0": "LABEL_0",
+       "1": "LABEL_1"
+     },
+     "init_std": 0.02,
+     "is_decoder": true,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "LABEL_0": 0,
+       "LABEL_1": 1
+     },
+     "length_penalty": 1.0,
+     "max_length": 20,
+     "max_position_embeddings": 4096,
+     "min_length": 0,
+     "model_type": "mbart",
+     "no_repeat_ngram_size": 0,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_hidden_layers": 12,
+     "num_return_sequences": 1,
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "pad_token_id": 1,
+     "prefix": null,
+     "problem_type": null,
+     "pruned_heads": {},
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "scale_embedding": true,
+     "sep_token_id": null,
+     "suppress_tokens": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tf_legacy_loss": false,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": false,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": null,
+     "torchscript": false,
+     "typical_p": 1.0,
+     "use_bfloat16": false,
+     "use_cache": true,
+     "vocab_size": 73921
+   },
+   "encoder": {
+     "_name_or_path": "",
+     "add_cross_attention": false,
+     "architectures": null,
+     "attention_probs_dropout_prob": 0.0,
+     "bad_words_ids": null,
+     "begin_suppress_tokens": null,
+     "bos_token_id": null,
+     "chunk_size_feed_forward": 0,
+     "cross_attention_hidden_size": null,
+     "decoder_start_token_id": null,
+     "depths": [
+       2,
+       2,
+       14,
+       2
+     ],
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "drop_path_rate": 0.1,
+     "early_stopping": false,
+     "embed_dim": 128,
+     "encoder_no_repeat_ngram_size": 0,
+     "eos_token_id": null,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": null,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.0,
+     "hidden_size": 1024,
+     "id2label": {
+       "0": "LABEL_0",
+       "1": "LABEL_1"
+     },
+     "image_size": [
+       1024,
+       1024
+     ],
+     "initializer_range": 0.02,
+     "is_decoder": false,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "LABEL_0": 0,
+       "LABEL_1": 1
+     },
+     "layer_norm_eps": 1e-05,
+     "length_penalty": 1.0,
+     "max_length": 20,
+     "min_length": 0,
+     "mlp_ratio": 4.0,
+     "model_type": "donut-swin",
+     "no_repeat_ngram_size": 0,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_channels": 3,
+     "num_heads": [
+       4,
+       8,
+       16,
+       32
+     ],
+     "num_layers": 4,
+     "num_return_sequences": 1,
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "pad_token_id": null,
+     "patch_size": 4,
+     "prefix": null,
+     "problem_type": null,
+     "pruned_heads": {},
+     "qkv_bias": true,
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "sep_token_id": null,
+     "suppress_tokens": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tf_legacy_loss": false,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": true,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": null,
+     "torchscript": false,
+     "typical_p": 1.0,
+     "use_absolute_embeddings": false,
+     "use_bfloat16": false,
+     "window_size": 8
+   },
+   "is_encoder_decoder": true,
+   "model_type": "vision-encoder-decoder",
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.40.0"
+ }
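For orientation, the sketch below shows how the key fields above surface once the checkpoint is loaded; the local path `"."` is an assumption (it presumes the repository files have already been downloaded to the working directory).

```python
# Hedged sketch: inspect the composite vision-encoder-decoder config.
from transformers import VisionEncoderDecoderConfig

config = VisionEncoderDecoderConfig.from_pretrained(".")  # assumed local checkout
print(config.encoder.model_type, config.encoder.image_size)               # donut-swin [1024, 1024]
print(config.decoder.model_type, config.decoder.decoder_layers)           # mbart 10
print(config.decoder.max_position_embeddings, config.decoder.vocab_size)  # 4096 73921
```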
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "forced_eos_token_id": 2,
+   "pad_token_id": 1,
+   "transformers_version": "4.40.0"
+ }
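These are the defaults that `model.generate` falls back to when no explicit arguments are passed; a quick hedged check (local checkout path assumed, as above):

```python
# Hedged sketch: read the generation defaults shown above.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained(".")  # assumed local checkout
print(gen_config.bos_token_id, gen_config.eos_token_id, gen_config.pad_token_id)  # 0 2 1
```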
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16cbc8bf8a1a225df452b521dc13037ac8356b1ffb154ba143eab27ce0414daa
+ size 796129056
preprocessor_config.json ADDED
@@ -0,0 +1,45 @@
+ {
+   "_valid_processor_keys": [
+     "images",
+     "do_resize",
+     "size",
+     "resample",
+     "do_thumbnail",
+     "do_align_long_axis",
+     "do_pad",
+     "random_padding",
+     "do_rescale",
+     "rescale_factor",
+     "do_normalize",
+     "image_mean",
+     "image_std",
+     "return_tensors",
+     "data_format",
+     "input_data_format"
+   ],
+   "do_align_long_axis": true,
+   "do_crop_margin": false,
+   "do_normalize": true,
+   "do_pad": true,
+   "do_rescale": true,
+   "do_resize": false,
+   "do_thumbnail": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "DonutImageProcessor",
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "processor_class": "DonutProcessor",
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 1024,
+     "width": 1024
+   }
+ }
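With `do_thumbnail` and `do_pad` enabled and a fixed `size`, this configuration shrinks each page to fit within the 1024×1024 target and pads it onto that canvas, then rescales and normalizes with mean/std 0.5. A hedged sketch (the local path and file name are placeholders):

```python
# Hedged sketch: apply the Donut-style preprocessing defined above.
from PIL import Image
from transformers import DonutImageProcessor

image_processor = DonutImageProcessor.from_pretrained(".")  # assumed local checkout
page = Image.open("page.png").convert("RGB")                # placeholder input image
pixel_values = image_processor(page, return_tensors="pt").pixel_values
print(pixel_values.shape)  # expected: torch.Size([1, 3, 1024, 1024])
```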
special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "additional_special_tokens": [
+     {
+       "content": " <Answer/>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ],
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "<pad>",
+   "unk_token": "<unk>"
+ }
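The ` <Answer/>` entry (note the leading space) is registered as an additional special token, presumably marking the boundary between the prompt and the generated answer in the decoder sequence. A hedged check (local checkout path assumed):

```python
# Hedged sketch: confirm the special tokens above are registered on the tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # assumed local checkout
print(tokenizer.additional_special_tokens)  # expected: [' <Answer/>']
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.pad_token, tokenizer.unk_token)
```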
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff