| model_id (string, length 9-102) | model_card (string, length 4-343k) | model_labels (list, length 2-50.8k) |
|---|---|---|
hmandsager/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
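The linear scheduler above decays the learning rate from its initial value to zero over the course of training. A minimal sketch of that decay, assuming no warmup steps (none are listed):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 1e-05) -> float:
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# With 10 epochs, the rate at the halfway point is half the initial value,
# and it reaches zero on the final step.
halfway = linear_lr(5, 10)
```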
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.2
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
ddn0116/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
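The Adam settings above (betas=(0.9, 0.999), epsilon=1e-08) control exponential moving averages of the gradient and its square. A minimal single-parameter sketch of one Adam update, for illustration only (the actual training used the framework's optimizer):

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter; returns (param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction at step t
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# On the first step the bias-corrected moments equal the raw gradient
# statistics, so the update magnitude is close to lr regardless of scale.
p, m, v = adam_step(0.0, 2.5, 0.0, 0.0, t=1)
```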
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
chinh102/chinh102 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9"
] |
Sa3ed99/detr_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_cppe5
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3407
- Map: 0.2599
- Map 50: 0.5107
- Map 75: 0.2411
- Map Small: 0.1265
- Map Medium: 0.2152
- Map Large: 0.4809
- Mar 1: 0.2669
- Mar 10: 0.4141
- Mar 100: 0.4315
- Mar Small: 0.2471
- Mar Medium: 0.4009
- Mar Large: 0.7004
- Map Coverall: 0.5407
- Mar 100 Coverall: 0.6477
- Map Face Shield: 0.1688
- Mar 100 Face Shield: 0.4532
- Map Gloves: 0.1974
- Mar 100 Gloves: 0.3344
- Map Goggles: 0.1266
- Mar 100 Goggles: 0.3415
- Map Mask: 0.266
- Mar 100 Mask: 0.3804
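The overall Map reported above is the unweighted mean of the five per-class AP values, which can be checked directly:

```python
# Per-class AP values taken from the evaluation summary above.
class_ap = {
    "coverall": 0.5407,
    "face_shield": 0.1688,
    "gloves": 0.1974,
    "goggles": 0.1266,
    "mask": 0.266,
}
mean_ap = sum(class_ap.values()) / len(class_ap)
assert abs(mean_ap - 0.2599) < 1e-09  # matches the reported Map
```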
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
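The cosine scheduler above anneals the learning rate from its initial value to zero along half a cosine period. A minimal sketch, assuming no warmup steps (none are listed):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 5e-05) -> float:
    """Cosine-anneal the learning rate from base_lr to 0 over total_steps."""
    return base_lr * 0.5 * (1 + math.cos(math.pi * step / total_steps))

# The rate starts at base_lr, is half of it at the midpoint, and ends at 0;
# unlike the linear schedule, decay is slow at the start and end of training.
midpoint = cosine_lr(15, 30)
```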
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 107 | 2.4182 | 0.0507 | 0.104 | 0.0439 | 0.0022 | 0.0243 | 0.0555 | 0.0607 | 0.1378 | 0.1804 | 0.028 | 0.1513 | 0.2649 | 0.2397 | 0.5194 | 0.0003 | 0.0494 | 0.0029 | 0.0969 | 0.0 | 0.0 | 0.0105 | 0.2364 |
| No log | 2.0 | 214 | 2.1888 | 0.0484 | 0.0991 | 0.0426 | 0.0128 | 0.0264 | 0.0477 | 0.0773 | 0.1617 | 0.2023 | 0.0433 | 0.1502 | 0.2651 | 0.1982 | 0.5892 | 0.0001 | 0.0101 | 0.0168 | 0.1625 | 0.0015 | 0.0108 | 0.0255 | 0.2387 |
| No log | 3.0 | 321 | 2.0106 | 0.0827 | 0.1666 | 0.0735 | 0.0148 | 0.0543 | 0.1059 | 0.109 | 0.2402 | 0.2787 | 0.0696 | 0.2816 | 0.3968 | 0.304 | 0.6144 | 0.0053 | 0.1671 | 0.0206 | 0.2455 | 0.0118 | 0.0385 | 0.072 | 0.328 |
| No log | 4.0 | 428 | 1.9302 | 0.107 | 0.2298 | 0.0892 | 0.0258 | 0.0669 | 0.1511 | 0.1338 | 0.2939 | 0.3207 | 0.1213 | 0.2985 | 0.4868 | 0.3695 | 0.5797 | 0.016 | 0.2911 | 0.0302 | 0.2464 | 0.0128 | 0.16 | 0.1066 | 0.3262 |
| 3.6586 | 5.0 | 535 | 1.8116 | 0.1183 | 0.2773 | 0.0879 | 0.0292 | 0.0782 | 0.1819 | 0.1467 | 0.3143 | 0.3378 | 0.142 | 0.3138 | 0.5634 | 0.3744 | 0.5658 | 0.052 | 0.3494 | 0.0532 | 0.2652 | 0.007 | 0.1769 | 0.1049 | 0.332 |
| 3.6586 | 6.0 | 642 | 1.7759 | 0.1213 | 0.2878 | 0.0867 | 0.019 | 0.0851 | 0.2366 | 0.1369 | 0.3103 | 0.3372 | 0.1215 | 0.3085 | 0.6211 | 0.4062 | 0.5694 | 0.0278 | 0.3278 | 0.0582 | 0.2594 | 0.0128 | 0.2338 | 0.1017 | 0.2956 |
| 3.6586 | 7.0 | 749 | 1.6378 | 0.1555 | 0.3462 | 0.1182 | 0.0467 | 0.1064 | 0.2873 | 0.168 | 0.3436 | 0.3788 | 0.1635 | 0.3456 | 0.6683 | 0.4273 | 0.5658 | 0.0479 | 0.3873 | 0.0927 | 0.3022 | 0.0277 | 0.2908 | 0.1819 | 0.348 |
| 3.6586 | 8.0 | 856 | 1.6132 | 0.1654 | 0.376 | 0.1226 | 0.0495 | 0.1358 | 0.366 | 0.1966 | 0.353 | 0.3824 | 0.1421 | 0.3667 | 0.6958 | 0.4324 | 0.5833 | 0.0803 | 0.4152 | 0.1005 | 0.2853 | 0.0373 | 0.2754 | 0.1766 | 0.3529 |
| 3.6586 | 9.0 | 963 | 1.5567 | 0.1815 | 0.3979 | 0.1439 | 0.0529 | 0.1518 | 0.3407 | 0.2063 | 0.3721 | 0.396 | 0.1506 | 0.3879 | 0.6784 | 0.4654 | 0.6149 | 0.0841 | 0.4063 | 0.1158 | 0.3058 | 0.041 | 0.3015 | 0.201 | 0.3516 |
| 1.5229 | 10.0 | 1070 | 1.5420 | 0.194 | 0.4056 | 0.1562 | 0.0523 | 0.1635 | 0.3805 | 0.2139 | 0.3706 | 0.3952 | 0.1409 | 0.3849 | 0.7155 | 0.4799 | 0.6338 | 0.1029 | 0.4114 | 0.127 | 0.3089 | 0.0401 | 0.2754 | 0.22 | 0.3467 |
| 1.5229 | 11.0 | 1177 | 1.4853 | 0.2006 | 0.4273 | 0.1683 | 0.0753 | 0.1676 | 0.3949 | 0.2214 | 0.3753 | 0.4054 | 0.18 | 0.3976 | 0.6702 | 0.4916 | 0.6167 | 0.1162 | 0.4241 | 0.1199 | 0.3138 | 0.0464 | 0.3185 | 0.229 | 0.3542 |
| 1.5229 | 12.0 | 1284 | 1.4646 | 0.2054 | 0.4336 | 0.1626 | 0.0809 | 0.1636 | 0.4116 | 0.2244 | 0.3933 | 0.4162 | 0.196 | 0.4011 | 0.688 | 0.4921 | 0.6333 | 0.098 | 0.4291 | 0.152 | 0.2991 | 0.0556 | 0.3692 | 0.2293 | 0.3502 |
| 1.5229 | 13.0 | 1391 | 1.4438 | 0.2113 | 0.4421 | 0.176 | 0.0722 | 0.1721 | 0.4278 | 0.2333 | 0.3903 | 0.4108 | 0.1905 | 0.4006 | 0.6807 | 0.5002 | 0.6315 | 0.1082 | 0.4342 | 0.1602 | 0.3085 | 0.0488 | 0.3169 | 0.2391 | 0.3627 |
| 1.5229 | 14.0 | 1498 | 1.4194 | 0.2241 | 0.4597 | 0.1846 | 0.0857 | 0.1878 | 0.4516 | 0.2418 | 0.3973 | 0.4209 | 0.1983 | 0.4126 | 0.7007 | 0.5049 | 0.6104 | 0.1265 | 0.4291 | 0.1644 | 0.3299 | 0.0686 | 0.3569 | 0.2564 | 0.3782 |
| 1.2614 | 15.0 | 1605 | 1.4168 | 0.2194 | 0.4409 | 0.191 | 0.0921 | 0.172 | 0.443 | 0.2416 | 0.3979 | 0.4213 | 0.2283 | 0.39 | 0.6824 | 0.5237 | 0.6441 | 0.1208 | 0.4557 | 0.1581 | 0.3129 | 0.0595 | 0.3246 | 0.235 | 0.3689 |
| 1.2614 | 16.0 | 1712 | 1.3935 | 0.226 | 0.4735 | 0.187 | 0.0995 | 0.1831 | 0.4229 | 0.237 | 0.4015 | 0.4238 | 0.2175 | 0.4082 | 0.6808 | 0.5125 | 0.6288 | 0.1292 | 0.4734 | 0.1735 | 0.3263 | 0.0566 | 0.32 | 0.2584 | 0.3702 |
| 1.2614 | 17.0 | 1819 | 1.3928 | 0.2295 | 0.4823 | 0.1949 | 0.0841 | 0.1911 | 0.441 | 0.2507 | 0.3996 | 0.4201 | 0.2206 | 0.3903 | 0.7086 | 0.5135 | 0.632 | 0.1465 | 0.4557 | 0.1652 | 0.3246 | 0.0767 | 0.3169 | 0.2456 | 0.3716 |
| 1.2614 | 18.0 | 1926 | 1.3886 | 0.2302 | 0.4745 | 0.1908 | 0.0836 | 0.1922 | 0.4742 | 0.2562 | 0.404 | 0.4203 | 0.199 | 0.3884 | 0.7143 | 0.5158 | 0.6347 | 0.1484 | 0.4582 | 0.1736 | 0.3192 | 0.064 | 0.3215 | 0.2491 | 0.368 |
| 1.104 | 19.0 | 2033 | 1.3812 | 0.2343 | 0.4775 | 0.201 | 0.0954 | 0.1982 | 0.4586 | 0.248 | 0.3985 | 0.4221 | 0.2093 | 0.4013 | 0.7229 | 0.5257 | 0.641 | 0.1555 | 0.462 | 0.1778 | 0.3308 | 0.0791 | 0.32 | 0.2336 | 0.3569 |
| 1.104 | 20.0 | 2140 | 1.3595 | 0.2488 | 0.4941 | 0.2209 | 0.0973 | 0.2065 | 0.4771 | 0.2677 | 0.4188 | 0.4369 | 0.2404 | 0.4026 | 0.7248 | 0.5337 | 0.6441 | 0.1672 | 0.4709 | 0.1832 | 0.3335 | 0.094 | 0.3523 | 0.2658 | 0.3836 |
| 1.104 | 21.0 | 2247 | 1.3556 | 0.2397 | 0.4789 | 0.2046 | 0.0941 | 0.1986 | 0.4552 | 0.2683 | 0.4094 | 0.4298 | 0.2244 | 0.4045 | 0.7063 | 0.5311 | 0.6396 | 0.1483 | 0.4506 | 0.1868 | 0.3304 | 0.0785 | 0.3508 | 0.2537 | 0.3778 |
| 1.104 | 22.0 | 2354 | 1.3572 | 0.2509 | 0.4949 | 0.2242 | 0.1067 | 0.2086 | 0.4512 | 0.2672 | 0.4119 | 0.4308 | 0.2403 | 0.397 | 0.7136 | 0.5405 | 0.6432 | 0.1641 | 0.4595 | 0.176 | 0.3237 | 0.1085 | 0.3431 | 0.2653 | 0.3844 |
| 1.104 | 23.0 | 2461 | 1.3551 | 0.2503 | 0.4951 | 0.2266 | 0.1053 | 0.2057 | 0.476 | 0.2674 | 0.4117 | 0.4297 | 0.2319 | 0.4058 | 0.7042 | 0.5403 | 0.6464 | 0.1522 | 0.4367 | 0.1828 | 0.3299 | 0.1129 | 0.3508 | 0.2633 | 0.3849 |
| 1.0066 | 24.0 | 2568 | 1.3404 | 0.2539 | 0.5049 | 0.2235 | 0.101 | 0.2081 | 0.4745 | 0.2674 | 0.412 | 0.4301 | 0.231 | 0.4038 | 0.692 | 0.5437 | 0.6559 | 0.1537 | 0.4367 | 0.1903 | 0.3348 | 0.1218 | 0.3415 | 0.2601 | 0.3813 |
| 1.0066 | 25.0 | 2675 | 1.3436 | 0.2574 | 0.5062 | 0.2286 | 0.1124 | 0.2119 | 0.4848 | 0.2667 | 0.4101 | 0.4273 | 0.2264 | 0.4014 | 0.6942 | 0.5416 | 0.6477 | 0.1512 | 0.4329 | 0.193 | 0.3366 | 0.131 | 0.3369 | 0.2702 | 0.3822 |
| 1.0066 | 26.0 | 2782 | 1.3377 | 0.258 | 0.5047 | 0.2211 | 0.1254 | 0.2126 | 0.4825 | 0.27 | 0.4168 | 0.4348 | 0.2491 | 0.4062 | 0.7013 | 0.5431 | 0.6518 | 0.1604 | 0.462 | 0.1935 | 0.3397 | 0.1259 | 0.34 | 0.2669 | 0.3804 |
| 1.0066 | 27.0 | 2889 | 1.3393 | 0.2615 | 0.5108 | 0.2388 | 0.1277 | 0.2188 | 0.4796 | 0.2711 | 0.4167 | 0.4347 | 0.2509 | 0.408 | 0.6993 | 0.5427 | 0.6491 | 0.1685 | 0.4608 | 0.1949 | 0.3348 | 0.1315 | 0.3462 | 0.2699 | 0.3827 |
| 1.0066 | 28.0 | 2996 | 1.3399 | 0.2599 | 0.5102 | 0.2352 | 0.1259 | 0.2166 | 0.4843 | 0.2674 | 0.415 | 0.4326 | 0.2482 | 0.4012 | 0.7042 | 0.5419 | 0.65 | 0.1678 | 0.4544 | 0.1945 | 0.3357 | 0.1253 | 0.3385 | 0.2698 | 0.3844 |
| 0.95 | 29.0 | 3103 | 1.3412 | 0.2594 | 0.5122 | 0.2387 | 0.1259 | 0.2159 | 0.4808 | 0.2702 | 0.4143 | 0.4303 | 0.2452 | 0.3983 | 0.7016 | 0.5393 | 0.6468 | 0.1689 | 0.4532 | 0.197 | 0.3335 | 0.1252 | 0.3369 | 0.2667 | 0.3809 |
| 0.95 | 30.0 | 3210 | 1.3407 | 0.2599 | 0.5107 | 0.2411 | 0.1265 | 0.2152 | 0.4809 | 0.2669 | 0.4141 | 0.4315 | 0.2471 | 0.4009 | 0.7004 | 0.5407 | 0.6477 | 0.1688 | 0.4532 | 0.1974 | 0.3344 | 0.1266 | 0.3415 | 0.266 | 0.3804 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
chinh102/chinh1002 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_109s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_109s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_166s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_166s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_166s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_166s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
rathi2023/detr-resnet-50_finetuned_swny |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_swny
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 510
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"50",
"106",
"107",
"2",
"15",
"12",
"97",
"98",
"99",
"106",
"107",
"2",
"84",
"8",
"32",
"33",
"47",
"48",
"61",
"62",
"83",
"57",
"58",
"18",
"83",
"36",
"37",
"38",
"87",
"74",
"63",
"63",
"53",
"59",
"60",
"72",
"35",
"34",
"7",
"98",
"99",
"81",
"80",
"78",
"44",
"45",
"46",
"39",
"40",
"26",
"27",
"87",
"89",
"11",
"13",
"28",
"29",
"51",
"54",
"55",
"56",
"3",
"8",
"74",
"21",
"95",
"96",
"22",
"20",
"19",
"41",
"42",
"10",
"26",
"101",
"100",
"102",
"92",
"93",
"70",
"71",
"70",
"51",
"61",
"62",
"67",
"52",
"49",
"7",
"9",
"74",
"75",
"17",
"14",
"16",
"32",
"14",
"82",
"31",
"69",
"4",
"28",
"30",
"85",
"86",
"20",
"19",
"10",
"53",
"101",
"100",
"102",
"23",
"24",
"90",
"91",
"103",
"104",
"105",
"66",
"1",
"94",
"92",
"57",
"58",
"35",
"34",
"42",
"43",
"15",
"12",
"63",
"6",
"28",
"29",
"72",
"73",
"81",
"80",
"78",
"79",
"67",
"68",
"87",
"88",
"20",
"19",
"97",
"98",
"99",
"53",
"63",
"64",
"25",
"23",
"24",
"92",
"76",
"77"
] |
Sneha-Mahata/Blood-Cell-Detection-DETR |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_suba_s1_106s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_suba_s1_106s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
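The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) can be illustrated with a minimal single-parameter update. This is a plain-Python sketch of the Adam update rule under those hyperparameters, not the trainer's actual implementation:

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (Kingma & Ba, 2015)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):  # three steps with a constant gradient
    p, m, v = adam_step(p, grad=2.0, m=m, v=v, t=t)
print(round(p, 5))
```

With a constant gradient, the bias-corrected step size settles at roughly the learning rate regardless of the gradient's magnitude, which is the property that makes Adam robust to per-parameter scale.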
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_224s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_224s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_224s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_224s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
dopamineaddict/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
PekingU/rtdetr_r18vd_coco_o365 |
# Model Card for RT-DETR
## Table of Contents
1. [Model Details](#model-details)
2. [Model Sources](#model-sources)
3. [How to Get Started with the Model](#how-to-get-started-with-the-model)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Architecture and Objective](#model-architecture-and-objective)
7. [Citation](#citation)
## Model Details

> The YOLO series has become the most popular framework for real-time object detection due to its reasonable trade-off between speed and accuracy.
However, we observe that the speed and accuracy of YOLOs are negatively affected by the NMS.
Recently, end-to-end Transformer-based detectors (DETRs) have provided an alternative to eliminating NMS.
Nevertheless, the high computational cost limits their practicality and hinders them from fully exploiting the advantage of excluding NMS.
In this paper, we propose the Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge that addresses the above dilemma.
We build RT-DETR in two steps, drawing on the advanced DETR:
first we focus on maintaining accuracy while improving speed, followed by maintaining speed while improving accuracy.
Specifically, we design an efficient hybrid encoder to expeditiously process multi-scale features by decoupling intra-scale interaction and cross-scale fusion to improve speed.
Then, we propose the uncertainty-minimal query selection to provide high-quality initial queries to the decoder, thereby improving accuracy.
In addition, RT-DETR supports flexible speed tuning by adjusting the number of decoder layers to adapt to various scenarios without retraining.
Our RT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4 GPU, outperforming previously advanced YOLOs in both speed and accuracy.
We also develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and M models).
Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy and about 21 times in FPS.
After pre-training with Objects365, RT-DETR-R50 / R101 achieves 55.3% / 56.2% AP. The project page: this [https URL](https://zhao-yian.github.io/RTDETR/).
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed to the Hub.
- **Developed by:** Yian Zhao and Sangbum Choi
- **Funded by:** National Key R&D Program of China (No. 2022ZD0118201), Natural Science Foundation of China (No. 61972217, 32071459, 62176249, 62006133, 62271465), and the Shenzhen Medical Research Funds in China (No. B2302037).
- **Shared by:** Sangbum Choi
- **Model type:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **License:** Apache-2.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **Repository:** https://github.com/lyuwenyu/RT-DETR
- **Paper:** https://arxiv.org/abs/2304.08069
- **Demo:** [RT-DETR Tracking](https://huggingface.co/spaces/merve/RT-DETR-tracking-coco)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r18vd_coco_o365")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r18vd_coco_o365")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
This should output
```
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The RT-DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We conduct experiments on COCO and Objects365 datasets, where RT-DETR is trained on COCO train2017 and validated on COCO val2017 dataset.
We report the standard COCO metrics, including AP (averaged over uniformly sampled IoU thresholds ranging from 0.50-0.95 with a step size of 0.05),
AP50, AP75, as well as AP at different scales: APS, APM, APL.
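The AP averaging described above sweeps IoU thresholds from 0.50 to 0.95 in steps of 0.05. A small plain-Python sketch (illustrative only, not the COCO evaluation code) of the threshold sweep together with a box-IoU helper:

```python
def box_iou(a, b):
    """IoU of two boxes in [x0, y0, x1, y1] format."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# The ten IoU thresholds COCO-style AP averages over: 0.50, 0.55, ..., 0.95
thresholds = [0.50 + 0.05 * i for i in range(10)]

iou = box_iou([0, 0, 10, 10], [2, 0, 12, 10])  # overlap 80, union 120
matches = [t for t in thresholds if iou >= t]
print(f"IoU = {iou:.3f}, counted as a match at {len(matches)} of 10 thresholds")
```

A detection with IoU 0.667 against its ground-truth box counts as a true positive at the 0.50–0.65 thresholds but not at the stricter ones, which is why the averaged AP rewards tight localization.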
### Preprocessing
Images are resized to 640x640 pixels, rescaled to [0, 1], and normalized with `image_mean=[0.485, 0.456, 0.406]` and `image_std=[0.229, 0.224, 0.225]`.
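The rescale-and-normalize step can be written out for a single pixel. This is a plain-Python sketch of the per-pixel arithmetic the image processor performs, not the processor's actual code:

```python
IMAGE_MEAN = [0.485, 0.456, 0.406]
IMAGE_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Map an 8-bit RGB pixel to the normalized values the model expects."""
    return [
        ((value / 255.0) - mean) / std  # rescale to [0, 1], then standardize per channel
        for value, mean, std in zip(rgb, IMAGE_MEAN, IMAGE_STD)
    ]

print([round(v, 4) for v in normalize_pixel([124, 116, 104])])
```

A mid-gray pixel near the dataset mean maps to values close to zero, which keeps the network's inputs centered.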
### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation
| Model | #Epochs | #Params (M) | GFLOPs | FPS_bs=1 | AP (val) | AP50 (val) | AP75 (val) | AP-s (val) | AP-m (val) | AP-l (val) |
|----------------------------|---------|-------------|--------|----------|--------|-----------|-----------|----------|----------|----------|
| RT-DETR-R18 | 72 | 20 | 60.7 | 217 | 46.5 | 63.8 | 50.4 | 28.4 | 49.8 | 63.0 |
| RT-DETR-R34 | 72 | 31 | 91.0 | 172 | 48.5 | 66.2 | 52.3 | 30.2 | 51.9 | 66.2 |
| RT-DETR R50 | 72 | 42 | 136 | 108 | 53.1 | 71.3 | 57.7 | 34.8 | 58.0 | 70.0 |
| RT-DETR R101| 72 | 76 | 259 | 74 | 54.3 | 72.7 | 58.6 | 36.0 | 58.8 | 72.1 |
| RT-DETR-R18 (Objects 365 pretrained) | 60 | 20 | 61 | 217 | 49.2 | 66.6 | 53.5 | 33.2 | 52.3 | 64.8 |
| RT-DETR-R50 (Objects 365 pretrained) | 24 | 42 | 136 | 108 | 55.3 | 73.4 | 60.1 | 37.9 | 59.9 | 71.8 |
| RT-DETR-R101 (Objects 365 pretrained) | 24 | 76 | 259 | 74 | 56.2 | 74.6 | 61.3 | 38.3 | 60.5 | 73.5 |
### Model Architecture and Objective

Overview of RT-DETR. We feed the features from the last three stages of the backbone into the encoder. The efficient hybrid
encoder transforms multi-scale features into a sequence of image features through the Attention-based Intra-scale Feature Interaction (AIFI)
and the CNN-based Cross-scale Feature Fusion (CCFF). Then, the uncertainty-minimal query selection selects a fixed number of encoder
features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object
queries to generate categories and boxes.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lv2023detrs,
title={DETRs Beat YOLOs on Real-time Object Detection},
author={Yian Zhao and Wenyu Lv and Shangliang Xu and Jinman Wei and Guanzhong Wang and Qingqing Dang and Yi Liu and Jie Chen},
year={2023},
eprint={2304.08069},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Sangbum Choi](https://huggingface.co/danelcsb)
[Pavel Iakubovskii](https://huggingface.co/qubvel-hf)
| [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
PekingU/rtdetr_r50vd_coco_o365 |
# Model Card for RT-DETR
## Table of Contents
1. [Model Details](#model-details)
2. [Model Sources](#model-sources)
3. [How to Get Started with the Model](#how-to-get-started-with-the-model)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Architecture and Objective](#model-architecture-and-objective)
7. [Citation](#citation)
## Model Details

> The YOLO series has become the most popular framework for real-time object detection due to its reasonable trade-off between speed and accuracy.
However, we observe that the speed and accuracy of YOLOs are negatively affected by the NMS.
Recently, end-to-end Transformer-based detectors (DETRs) have provided an alternative to eliminating NMS.
Nevertheless, the high computational cost limits their practicality and hinders them from fully exploiting the advantage of excluding NMS.
In this paper, we propose the Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge that addresses the above dilemma.
We build RT-DETR in two steps, drawing on the advanced DETR:
first we focus on maintaining accuracy while improving speed, followed by maintaining speed while improving accuracy.
Specifically, we design an efficient hybrid encoder to expeditiously process multi-scale features by decoupling intra-scale interaction and cross-scale fusion to improve speed.
Then, we propose the uncertainty-minimal query selection to provide high-quality initial queries to the decoder, thereby improving accuracy.
In addition, RT-DETR supports flexible speed tuning by adjusting the number of decoder layers to adapt to various scenarios without retraining.
Our RT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4 GPU, outperforming previously advanced YOLOs in both speed and accuracy.
We also develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and M models).
Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy and about 21 times in FPS.
After pre-training with Objects365, RT-DETR-R50 / R101 achieves 55.3% / 56.2% AP. The project page: this [https URL](https://zhao-yian.github.io/RTDETR/).
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed to the Hub.
- **Developed by:** Yian Zhao and Sangbum Choi
- **Funded by:** National Key R&D Program of China (No. 2022ZD0118201), Natural Science Foundation of China (No. 61972217, 32071459, 62176249, 62006133, 62271465), and the Shenzhen Medical Research Funds in China (No. B2302037).
- **Shared by:** Sangbum Choi
- **Model type:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **License:** Apache-2.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **Repository:** https://github.com/lyuwenyu/RT-DETR
- **Paper:** https://arxiv.org/abs/2304.08069
- **Demo:** [RT-DETR Tracking](https://huggingface.co/spaces/merve/RT-DETR-tracking-coco)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
This should output
```
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The RT-DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We conduct experiments on COCO and Objects365 datasets, where RT-DETR is trained on COCO train2017 and validated on COCO val2017 dataset.
We report the standard COCO metrics, including AP (averaged over uniformly sampled IoU thresholds ranging from 0.50-0.95 with a step size of 0.05),
AP50, AP75, as well as AP at different scales: APS, APM, APL.
### Preprocessing
Images are resized to 640x640 pixels, rescaled to [0, 1], and normalized with `image_mean=[0.485, 0.456, 0.406]` and `image_std=[0.229, 0.224, 0.225]`.
### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation
| Model | #Epochs | #Params (M) | GFLOPs | FPS_bs=1 | AP (val) | AP50 (val) | AP75 (val) | AP-s (val) | AP-m (val) | AP-l (val) |
|----------------------------|---------|-------------|--------|----------|--------|-----------|-----------|----------|----------|----------|
| RT-DETR-R18 | 72 | 20 | 60.7 | 217 | 46.5 | 63.8 | 50.4 | 28.4 | 49.8 | 63.0 |
| RT-DETR-R34 | 72 | 31 | 91.0 | 172 | 48.5 | 66.2 | 52.3 | 30.2 | 51.9 | 66.2 |
| RT-DETR R50 | 72 | 42 | 136 | 108 | 53.1 | 71.3 | 57.7 | 34.8 | 58.0 | 70.0 |
| RT-DETR R101| 72 | 76 | 259 | 74 | 54.3 | 72.7 | 58.6 | 36.0 | 58.8 | 72.1 |
| RT-DETR-R18 (Objects 365 pretrained) | 60 | 20 | 61 | 217 | 49.2 | 66.6 | 53.5 | 33.2 | 52.3 | 64.8 |
| RT-DETR-R50 (Objects 365 pretrained) | 24 | 42 | 136 | 108 | 55.3 | 73.4 | 60.1 | 37.9 | 59.9 | 71.8 |
| RT-DETR-R101 (Objects 365 pretrained) | 24 | 76 | 259 | 74 | 56.2 | 74.6 | 61.3 | 38.3 | 60.5 | 73.5 |
### Model Architecture and Objective

Overview of RT-DETR. We feed the features from the last three stages of the backbone into the encoder. The efficient hybrid
encoder transforms multi-scale features into a sequence of image features through the Attention-based Intra-scale Feature Interaction (AIFI)
and the CNN-based Cross-scale Feature Fusion (CCFF). Then, the uncertainty-minimal query selection selects a fixed number of encoder
features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object
queries to generate categories and boxes.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lv2023detrs,
title={DETRs Beat YOLOs on Real-time Object Detection},
author={Yian Zhao and Wenyu Lv and Shangliang Xu and Jinman Wei and Guanzhong Wang and Qingqing Dang and Yi Liu and Jie Chen},
year={2023},
eprint={2304.08069},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Sangbum Choi](https://huggingface.co/danelcsb)
[Pavel Iakubovskii](https://huggingface.co/qubvel-hf)
| [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
PekingU/rtdetr_r18vd |
# Model Card for RT-DETR
## Table of Contents
1. [Model Details](#model-details)
2. [Model Sources](#model-sources)
3. [How to Get Started with the Model](#how-to-get-started-with-the-model)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Architecture and Objective](#model-architecture-and-objective)
7. [Citation](#citation)
## Model Details

> The YOLO series has become the most popular framework for real-time object detection due to its reasonable trade-off between speed and accuracy.
However, we observe that the speed and accuracy of YOLOs are negatively affected by the NMS.
Recently, end-to-end Transformer-based detectors (DETRs) have provided an alternative to eliminating NMS.
Nevertheless, the high computational cost limits their practicality and hinders them from fully exploiting the advantage of excluding NMS.
In this paper, we propose the Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge that addresses the above dilemma.
We build RT-DETR in two steps, drawing on the advanced DETR:
first we focus on maintaining accuracy while improving speed, followed by maintaining speed while improving accuracy.
Specifically, we design an efficient hybrid encoder to expeditiously process multi-scale features by decoupling intra-scale interaction and cross-scale fusion to improve speed.
Then, we propose the uncertainty-minimal query selection to provide high-quality initial queries to the decoder, thereby improving accuracy.
In addition, RT-DETR supports flexible speed tuning by adjusting the number of decoder layers to adapt to various scenarios without retraining.
Our RT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4 GPU, outperforming previously advanced YOLOs in both speed and accuracy.
We also develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and M models).
Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy and about 21 times in FPS.
After pre-training with Objects365, RT-DETR-R50 / R101 achieves 55.3% / 56.2% AP. The project page: this [https URL](https://zhao-yian.github.io/RTDETR/).
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed to the Hub.
- **Developed by:** Yian Zhao and Sangbum Choi
- **Funded by:** National Key R&D Program of China (No. 2022ZD0118201), Natural Science Foundation of China (No. 61972217, 32071459, 62176249, 62006133, 62271465), and the Shenzhen Medical Research Funds in China (No. B2302037).
- **Shared by:** Sangbum Choi
- **Model type:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **License:** Apache-2.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **Repository:** https://github.com/lyuwenyu/RT-DETR
- **Paper:** https://arxiv.org/abs/2304.08069
- **Demo:** [RT-DETR Tracking](https://huggingface.co/spaces/merve/RT-DETR-tracking-coco)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r18vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r18vd")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
This should output
```
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The RT-DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We conduct experiments on COCO and Objects365 datasets, where RT-DETR is trained on COCO train2017 and validated on COCO val2017 dataset.
We report the standard COCO metrics, including AP (averaged over uniformly sampled IoU thresholds ranging from 0.50-0.95 with a step size of 0.05),
AP50, AP75, as well as AP at different scales: APS, APM, APL.
### Preprocessing
Images are resized to 640x640 pixels, rescaled to [0, 1], and normalized with `image_mean=[0.485, 0.456, 0.406]` and `image_std=[0.229, 0.224, 0.225]`.
### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation
| Model | #Epochs | #Params (M) | GFLOPs | FPS_bs=1 | AP (val) | AP50 (val) | AP75 (val) | AP-s (val) | AP-m (val) | AP-l (val) |
|----------------------------|---------|-------------|--------|----------|--------|-----------|-----------|----------|----------|----------|
| RT-DETR-R18 | 72 | 20 | 60.7 | 217 | 46.5 | 63.8 | 50.4 | 28.4 | 49.8 | 63.0 |
| RT-DETR-R34 | 72 | 31 | 91.0 | 172 | 48.5 | 66.2 | 52.3 | 30.2 | 51.9 | 66.2 |
| RT-DETR R50 | 72 | 42 | 136 | 108 | 53.1 | 71.3 | 57.7 | 34.8 | 58.0 | 70.0 |
| RT-DETR R101| 72 | 76 | 259 | 74 | 54.3 | 72.7 | 58.6 | 36.0 | 58.8 | 72.1 |
| RT-DETR-R18 (Objects 365 pretrained) | 60 | 20 | 61 | 217 | 49.2 | 66.6 | 53.5 | 33.2 | 52.3 | 64.8 |
| RT-DETR-R50 (Objects 365 pretrained) | 24 | 42 | 136 | 108 | 55.3 | 73.4 | 60.1 | 37.9 | 59.9 | 71.8 |
| RT-DETR-R101 (Objects 365 pretrained) | 24 | 76 | 259 | 74 | 56.2 | 74.6 | 61.3 | 38.3 | 60.5 | 73.5 |
### Model Architecture and Objective

Overview of RT-DETR. We feed the features from the last three stages of the backbone into the encoder. The efficient hybrid
encoder transforms multi-scale features into a sequence of image features through the Attention-based Intra-scale Feature Interaction (AIFI)
and the CNN-based Cross-scale Feature Fusion (CCFF). Then, the uncertainty-minimal query selection selects a fixed number of encoder
features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object
queries to generate categories and boxes.
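DETR-family decoders conventionally predict boxes in normalized center format (cx, cy, w, h), which post-processing converts to the absolute (x0, y0, x1, y1) corners shown in the quick-start output. A minimal sketch of that conversion (illustrative, not the library's implementation):

```python
def cxcywh_to_xyxy(box, img_w, img_h):
    """Convert a normalized center-format box to absolute corner coordinates."""
    cx, cy, w, h = box
    x0 = (cx - w / 2) * img_w
    y0 = (cy - h / 2) * img_h
    x1 = (cx + w / 2) * img_w
    y1 = (cy + h / 2) * img_h
    return [round(v, 2) for v in (x0, y0, x1, y1)]

# A box centered in a 640x480 image, covering half of each dimension
print(cxcywh_to_xyxy([0.5, 0.5, 0.5, 0.5], img_w=640, img_h=480))
```

Predicting in normalized center format keeps the regression targets bounded in [0, 1] regardless of the input resolution, which is why the quick-start code passes `target_sizes` to `post_process_object_detection` to recover pixel coordinates.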
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lv2023detrs,
title={DETRs Beat YOLOs on Real-time Object Detection},
author={Yian Zhao and Wenyu Lv and Shangliang Xu and Jinman Wei and Guanzhong Wang and Qingqing Dang and Yi Liu and Jie Chen},
year={2023},
eprint={2304.08069},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Sangbum Choi](https://huggingface.co/danelcsb)
[Pavel Iakubovskii](https://huggingface.co/qubvel-hf)
| [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_253s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_253s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_253s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_253s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
Aryan-401/yolo-tiny-fashion |
# Model Trained Using AutoTrain
- Problem type: Object Detection
## Validation Metrics
loss: 1.3179453611373901
map: 0.1361
map_50: 0.1892
map_75: 0.1548
map_small: 0.0
map_medium: 0.102
map_large: 0.1367
mar_1: 0.2076
mar_10: 0.4071
mar_100: 0.4151
mar_small: 0.0
mar_medium: 0.2304
mar_large: 0.4179
| [
"shirt, blouse",
"top, t-shirt, sweatshirt",
"sweater",
"cardigan",
"jacket",
"vest",
"pants",
"shorts",
"skirt",
"coat",
"dress",
"jumpsuit",
"cape",
"glasses",
"hat",
"headband, head covering, hair accessory",
"tie",
"glove",
"watch",
"belt",
"leg warmer",
"tights, stockings",
"sock",
"shoe",
"bag, wallet",
"scarf",
"umbrella",
"hood",
"collar",
"lapel",
"epaulette",
"sleeve",
"pocket",
"neckline",
"buckle",
"zipper",
"applique",
"bead",
"bow",
"flower",
"fringe",
"ribbon",
"rivet",
"ruffle",
"sequin",
"tassel"
] |
Aryan-401/detr-resnet-50-cppe5 |
# Model Trained Using AutoTrain
- Problem type: Object Detection
## Validation Metrics
loss: 1.3475022315979004
map: 0.2746
map_50: 0.5638
map_75: 0.2333
map_small: 0.1345
map_medium: 0.2275
map_large: 0.4482
mar_1: 0.2715
mar_10: 0.4663
mar_100: 0.49
mar_small: 0.1839
mar_medium: 0.4158
mar_large: 0.6686
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_311s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_311s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_311s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_311s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_detresnet_v1_s1_311s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_detresnet_v1_s1_311s
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_detresnet_v2_s1_311s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_detresnet_v2_s1_311s
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
stoneseok/detr-finetuned-lane |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_370s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_370s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_370s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_370s
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_370s](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v2_s1_370s) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_170s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_170s
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_170s](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_170s) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
PierreMaxime/detr-resnet-50_finetuned_cppe5-premier | from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="detr-resnet-50_finetuned_cppe5-second",
    per_device_train_batch_size=8,
    num_train_epochs=30,  # number of training epochs
    fp16=False,
    save_steps=200,
    logging_steps=50,
    learning_rate=1e-5,
    weight_decay=1e-4,
    save_total_limit=2,
    remove_unused_columns=False,
    push_to_hub=True,
    seed=42,  # fixed seed: 42
    lr_scheduler_type="linear",  # linear learning-rate schedule
    optim="adamw_torch",  # Adam optimizer, with betas and epsilon set explicitly below
)
# To set the Adam optimizer parameters explicitly, pass them when creating the
# optimizer in the training loop. Note: torch.optim.AdamW is used here because
# transformers.AdamW is deprecated.
from torch.optim import AdamW
optimizer = AdamW(model.parameters(), lr=1e-5, betas=(0.9, 0.999), eps=1e-08)
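The linear scheduler selected above decays the learning rate from its initial value to zero over the course of training. A minimal stand-alone sketch of that decay (without warmup, and not the exact Transformers implementation):

```python
def linear_lr(step, total_steps, base_lr=1e-5):
    """Linearly decay base_lr to 0 over total_steps (no warmup)."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

print(linear_lr(0, 100))    # -> 1e-05
print(linear_lr(50, 100))   # -> 5e-06
print(linear_lr(100, 100))  # -> 0.0
```

Transformers' own `get_linear_schedule_with_warmup` adds a warmup ramp before this decay.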
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.116
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.193
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.125
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.006
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.025
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.115
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.102
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.196
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.239
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.052
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.115
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.227 | [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v2_s1_170s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v2_s1_170s
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v2_s1_170s](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v2_s1_170s) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v2_s1_226s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v2_s1_226s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
stoneseok/detr-multi-finetuned |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"소화전",
"보도(시멘트 콘크리트)",
"자전거 도로"
] |
nicollecnunes/chvg-db |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8"
] |
Hemg/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
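As a hypothetical sketch only, the hyperparameters above roughly correspond to the following keyword arguments for `transformers.TrainingArguments` (argument names follow the HF Trainer API; the Adam betas/epsilon listed are the optimizer defaults):

```python
# Hypothetical sketch: the hyperparameters listed above, collected as the
# keyword arguments one would pass to transformers.TrainingArguments.
training_kwargs = dict(
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
print(training_kwargs)
```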
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
nextt/detr_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_cppe5
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9593
- Map: 0.0044
- Map 50: 0.0137
- Map 75: 0.0023
- Map Small: 0.0022
- Map Medium: 0.0004
- Map Large: 0.0048
- Mar 1: 0.0129
- Mar 10: 0.0353
- Mar 100: 0.0591
- Mar Small: 0.0018
- Mar Medium: 0.0246
- Mar Large: 0.0575
- Map Coverall: 0.0207
- Mar 100 Coverall: 0.2338
- Map Face Shield: 0.0001
- Mar 100 Face Shield: 0.0038
- Map Gloves: 0.0002
- Mar 100 Gloves: 0.021
- Map Goggles: 0.0
- Mar 100 Goggles: 0.0
- Map Mask: 0.001
- Mar 100 Mask: 0.0369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
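The `cosine` scheduler above decays the learning rate from its peak toward zero following a half-cosine over training. A minimal illustrative sketch (warmup steps, which the HF scheduler also supports, are elided here):

```python
import math

# Illustrative sketch of a "cosine" learning-rate schedule: decay from the
# peak LR to ~0 following a half cosine over the training run.
def cosine_lr(step, total_steps, peak_lr=5e-5):
    progress = step / total_steps
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# With the run above (5e-5 peak, 3210 total steps): start, midpoint, end.
print(cosine_lr(0, 3210), cosine_lr(1605, 3210), cosine_lr(3210, 3210))
```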
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 107 | 3.4694 | 0.0001 | 0.0007 | 0.0 | 0.0 | 0.0001 | 0.0002 | 0.0018 | 0.0054 | 0.0086 | 0.0057 | 0.0035 | 0.0055 | 0.0004 | 0.0239 | 0.0 | 0.0 | 0.0 | 0.0022 | 0.0 | 0.0 | 0.0001 | 0.0169 |
| No log | 2.0 | 214 | 3.3011 | 0.0009 | 0.0029 | 0.0003 | 0.0009 | 0.0 | 0.0009 | 0.0022 | 0.0183 | 0.0288 | 0.0011 | 0.007 | 0.0292 | 0.0042 | 0.1275 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0164 |
| No log | 3.0 | 321 | 3.4689 | 0.0012 | 0.0045 | 0.0003 | 0.0 | 0.0 | 0.0013 | 0.0032 | 0.0169 | 0.0355 | 0.0 | 0.0 | 0.0406 | 0.0059 | 0.1775 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 4.0 | 428 | 3.2984 | 0.0018 | 0.0077 | 0.0005 | 0.0002 | 0.0001 | 0.0021 | 0.005 | 0.0216 | 0.0346 | 0.0002 | 0.0169 | 0.0316 | 0.009 | 0.1383 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0347 |
| 4.926 | 5.0 | 535 | 3.1808 | 0.0019 | 0.0071 | 0.0005 | 0.0002 | 0.0001 | 0.0021 | 0.0032 | 0.0229 | 0.0445 | 0.0002 | 0.0157 | 0.0431 | 0.0093 | 0.1883 | 0.0 | 0.0 | 0.0 | 0.0089 | 0.0 | 0.0 | 0.0001 | 0.0253 |
| 4.926 | 6.0 | 642 | 3.1296 | 0.002 | 0.007 | 0.0007 | 0.0005 | 0.0001 | 0.0022 | 0.0059 | 0.0207 | 0.0487 | 0.0015 | 0.0168 | 0.0477 | 0.0099 | 0.2095 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0342 |
| 4.926 | 7.0 | 749 | 3.1212 | 0.0021 | 0.007 | 0.0007 | 0.0007 | 0.0008 | 0.0024 | 0.0029 | 0.0255 | 0.0505 | 0.0007 | 0.0143 | 0.051 | 0.0104 | 0.2234 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0289 |
| 4.926 | 8.0 | 856 | 3.2044 | 0.0045 | 0.0148 | 0.0014 | 0.0007 | 0.0 | 0.0051 | 0.0095 | 0.0208 | 0.037 | 0.0007 | 0.0108 | 0.0363 | 0.0222 | 0.1586 | 0.0 | 0.0 | 0.0001 | 0.0129 | 0.0 | 0.0 | 0.0 | 0.0133 |
| 4.926 | 9.0 | 963 | 3.1113 | 0.0028 | 0.0104 | 0.0005 | 0.004 | 0.0001 | 0.0032 | 0.0111 | 0.0237 | 0.0436 | 0.0031 | 0.0133 | 0.0421 | 0.014 | 0.1838 | 0.0 | 0.0 | 0.0 | 0.0058 | 0.0 | 0.0 | 0.0002 | 0.0284 |
| 3.0252 | 10.0 | 1070 | 3.1235 | 0.0038 | 0.0142 | 0.0013 | 0.0013 | 0.0 | 0.0039 | 0.0039 | 0.0283 | 0.0506 | 0.0016 | 0.007 | 0.0534 | 0.0167 | 0.2333 | 0.0 | 0.0 | 0.0 | 0.0107 | 0.0 | 0.0 | 0.0021 | 0.0089 |
| 3.0252 | 11.0 | 1177 | 3.0521 | 0.0041 | 0.0136 | 0.0015 | 0.0062 | 0.0 | 0.0042 | 0.0121 | 0.0309 | 0.051 | 0.0062 | 0.0081 | 0.0514 | 0.0185 | 0.2248 | 0.0 | 0.0 | 0.0001 | 0.0071 | 0.0 | 0.0 | 0.0018 | 0.0231 |
| 3.0252 | 12.0 | 1284 | 3.1122 | 0.0026 | 0.0087 | 0.0008 | 0.001 | 0.0016 | 0.0029 | 0.0084 | 0.0284 | 0.0496 | 0.0005 | 0.0128 | 0.05 | 0.013 | 0.2194 | 0.0 | 0.0 | 0.0001 | 0.0205 | 0.0 | 0.0 | 0.0 | 0.008 |
| 3.0252 | 13.0 | 1391 | 3.1495 | 0.0028 | 0.0096 | 0.0005 | 0.0 | 0.0001 | 0.0031 | 0.0082 | 0.0285 | 0.0481 | 0.0 | 0.0173 | 0.0459 | 0.0136 | 0.2005 | 0.0 | 0.0 | 0.0001 | 0.0219 | 0.0 | 0.0 | 0.0001 | 0.0182 |
| 3.0252 | 14.0 | 1498 | 3.1443 | 0.0026 | 0.0083 | 0.0006 | 0.0 | 0.0001 | 0.0029 | 0.0091 | 0.0253 | 0.0486 | 0.0 | 0.0155 | 0.0466 | 0.0127 | 0.2036 | 0.0 | 0.0 | 0.0002 | 0.0344 | 0.0 | 0.0 | 0.0 | 0.0049 |
| 2.9223 | 15.0 | 1605 | 3.0269 | 0.0064 | 0.0181 | 0.0035 | 0.0035 | 0.0001 | 0.0072 | 0.0109 | 0.0318 | 0.0494 | 0.0029 | 0.012 | 0.0491 | 0.0314 | 0.2144 | 0.0 | 0.0 | 0.0001 | 0.0112 | 0.0 | 0.0 | 0.0002 | 0.0213 |
| 2.9223 | 16.0 | 1712 | 3.0312 | 0.0068 | 0.0178 | 0.0048 | 0.0015 | 0.0004 | 0.0077 | 0.0122 | 0.0323 | 0.0469 | 0.0013 | 0.0215 | 0.0419 | 0.033 | 0.1829 | 0.0 | 0.0 | 0.0002 | 0.0241 | 0.0 | 0.0 | 0.0008 | 0.0276 |
| 2.9223 | 17.0 | 1819 | 2.9839 | 0.0055 | 0.0158 | 0.0026 | 0.0027 | 0.0002 | 0.006 | 0.0118 | 0.0308 | 0.0527 | 0.0022 | 0.0236 | 0.0472 | 0.0267 | 0.2063 | 0.0 | 0.0 | 0.0001 | 0.0214 | 0.0 | 0.0 | 0.0006 | 0.0356 |
| 2.9223 | 18.0 | 1926 | 3.0200 | 0.0064 | 0.0186 | 0.0036 | 0.0005 | 0.0004 | 0.0072 | 0.0118 | 0.0295 | 0.0519 | 0.0004 | 0.0298 | 0.044 | 0.0311 | 0.1923 | 0.0 | 0.0 | 0.0001 | 0.0263 | 0.0 | 0.0 | 0.0008 | 0.0409 |
| 2.8252 | 19.0 | 2033 | 2.9895 | 0.0053 | 0.0166 | 0.0029 | 0.0025 | 0.0002 | 0.006 | 0.0113 | 0.0292 | 0.0475 | 0.002 | 0.021 | 0.0428 | 0.0262 | 0.1869 | 0.0 | 0.0 | 0.0001 | 0.0188 | 0.0 | 0.0 | 0.0004 | 0.032 |
| 2.8252 | 20.0 | 2140 | 3.0483 | 0.0038 | 0.0124 | 0.0018 | 0.0002 | 0.0001 | 0.0044 | 0.0111 | 0.0275 | 0.0431 | 0.0002 | 0.0172 | 0.0403 | 0.0188 | 0.1761 | 0.0 | 0.0 | 0.0001 | 0.0174 | 0.0 | 0.0 | 0.0002 | 0.0218 |
| 2.8252 | 21.0 | 2247 | 3.0509 | 0.0035 | 0.0112 | 0.0017 | 0.0 | 0.0001 | 0.004 | 0.0102 | 0.0314 | 0.0547 | 0.0 | 0.0124 | 0.0563 | 0.0174 | 0.2459 | 0.0 | 0.0 | 0.0 | 0.0107 | 0.0 | 0.0 | 0.0001 | 0.0169 |
| 2.8252 | 22.0 | 2354 | 2.9868 | 0.0039 | 0.0136 | 0.0015 | 0.001 | 0.0004 | 0.0042 | 0.0117 | 0.0353 | 0.064 | 0.0007 | 0.0304 | 0.0576 | 0.0183 | 0.2518 | 0.0 | 0.0 | 0.0001 | 0.0232 | 0.0 | 0.0 | 0.0009 | 0.0449 |
| 2.8252 | 23.0 | 2461 | 2.9752 | 0.0042 | 0.0137 | 0.0019 | 0.0015 | 0.0002 | 0.0047 | 0.0112 | 0.0337 | 0.0601 | 0.0011 | 0.021 | 0.0575 | 0.0204 | 0.2514 | 0.0 | 0.0 | 0.0002 | 0.0188 | 0.0 | 0.0 | 0.0004 | 0.0302 |
| 2.803 | 24.0 | 2568 | 2.9948 | 0.0042 | 0.013 | 0.0021 | 0.0015 | 0.0002 | 0.0046 | 0.0109 | 0.0309 | 0.0557 | 0.0011 | 0.0212 | 0.0526 | 0.0203 | 0.2297 | 0.0 | 0.0 | 0.0001 | 0.0174 | 0.0 | 0.0 | 0.0004 | 0.0316 |
| 2.803 | 25.0 | 2675 | 2.9797 | 0.0043 | 0.0139 | 0.0016 | 0.0015 | 0.0004 | 0.0047 | 0.0119 | 0.033 | 0.059 | 0.0011 | 0.0255 | 0.0541 | 0.0204 | 0.2365 | 0.0 | 0.0 | 0.0001 | 0.0214 | 0.0 | 0.0 | 0.001 | 0.0373 |
| 2.803 | 26.0 | 2782 | 2.9674 | 0.0042 | 0.0133 | 0.0022 | 0.002 | 0.0003 | 0.0046 | 0.0117 | 0.0336 | 0.0579 | 0.0017 | 0.0229 | 0.054 | 0.0201 | 0.236 | 0.0 | 0.0 | 0.0002 | 0.0152 | 0.0 | 0.0 | 0.0008 | 0.0382 |
| 2.803 | 27.0 | 2889 | 2.9539 | 0.0044 | 0.0141 | 0.0021 | 0.0025 | 0.0003 | 0.0047 | 0.012 | 0.0352 | 0.0592 | 0.0021 | 0.0232 | 0.0552 | 0.0207 | 0.241 | 0.0 | 0.0 | 0.0002 | 0.0192 | 0.0 | 0.0 | 0.0009 | 0.036 |
| 2.803 | 28.0 | 2996 | 2.9604 | 0.0042 | 0.0135 | 0.0021 | 0.002 | 0.0004 | 0.0046 | 0.0128 | 0.0347 | 0.0587 | 0.0016 | 0.0239 | 0.0575 | 0.0199 | 0.2338 | 0.0001 | 0.0038 | 0.0002 | 0.0205 | 0.0 | 0.0 | 0.0009 | 0.0356 |
| 2.7833 | 29.0 | 3103 | 2.9589 | 0.0044 | 0.0137 | 0.0023 | 0.0022 | 0.0004 | 0.0048 | 0.0129 | 0.035 | 0.0592 | 0.0018 | 0.0244 | 0.0577 | 0.0207 | 0.2347 | 0.0001 | 0.0038 | 0.0002 | 0.0205 | 0.0 | 0.0 | 0.001 | 0.0369 |
| 2.7833 | 30.0 | 3210 | 2.9593 | 0.0044 | 0.0137 | 0.0023 | 0.0022 | 0.0004 | 0.0048 | 0.0129 | 0.0353 | 0.0591 | 0.0018 | 0.0246 | 0.0575 | 0.0207 | 0.2338 | 0.0001 | 0.0038 | 0.0002 | 0.021 | 0.0 | 0.0 | 0.001 | 0.0369 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.2
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
stoneseok/detr-multi-cars-finetuned |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5"
] |
PekingU/rtdetr_r50vd |
# Model Card for RT-DETR
## Table of Contents
1. [Model Details](#model-details)
2. [Model Sources](#model-sources)
3. [How to Get Started with the Model](#how-to-get-started-with-the-model)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Architecture and Objective](#model-architecture-and-objective)
7. [Citation](#citation)
## Model Details

> The YOLO series has become the most popular framework for real-time object detection due to its reasonable trade-off between speed and accuracy.
However, we observe that the speed and accuracy of YOLOs are negatively affected by the NMS.
Recently, end-to-end Transformer-based detectors (DETRs) have provided an alternative to eliminating NMS.
Nevertheless, the high computational cost limits their practicality and hinders them from fully exploiting the advantage of excluding NMS.
In this paper, we propose the Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge that addresses the above dilemma.
We build RT-DETR in two steps, drawing on the advanced DETR:
first we focus on maintaining accuracy while improving speed, followed by maintaining speed while improving accuracy.
Specifically, we design an efficient hybrid encoder to expeditiously process multi-scale features by decoupling intra-scale interaction and cross-scale fusion to improve speed.
Then, we propose the uncertainty-minimal query selection to provide high-quality initial queries to the decoder, thereby improving accuracy.
In addition, RT-DETR supports flexible speed tuning by adjusting the number of decoder layers to adapt to various scenarios without retraining.
Our RT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4 GPU, outperforming previously advanced YOLOs in both speed and accuracy.
We also develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and M models).
Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy and about 21 times in FPS.
After pre-training with Objects365, RT-DETR-R50 / R101 achieves 55.3% / 56.2% AP. The project page: <https://zhao-yian.github.io/RTDETR/>.
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed on the Hub.
- **Developed by:** Yian Zhao and Sangbum Choi
- **Funded by:** National Key R&D Program of China (No.2022ZD0118201), Natural Science Foundation of China (No.61972217, 32071459, 62176249, 62006133, 62271465),
and the Shenzhen Medical Research Funds in China (No. B2302037).
- **Shared by:** Sangbum Choi
- **Model type:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **License:** Apache-2.0
### Model Sources
- **HF Docs:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **Repository:** https://github.com/lyuwenyu/RT-DETR
- **Paper:** https://arxiv.org/abs/2304.08069
- **Demo:** [RT-DETR Tracking](https://huggingface.co/spaces/merve/RT-DETR-tracking-coco)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
This should output
```
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The RTDETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We conduct experiments on the COCO and Objects365 datasets, where RT-DETR is trained on COCO train2017 and validated on COCO val2017.
We report the standard COCO metrics, including AP (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05),
AP50, AP75, as well as AP at different scales: APS, APM, APL.
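The AP averaging described above can be sketched in a few lines (an illustration of the metric's definition, not the official COCO evaluation code):

```python
# COCO-style AP is the mean of per-threshold AP over the ten IoU
# thresholds 0.50, 0.55, ..., 0.95.
IOU_THRESHOLDS = [round(0.50 + 0.05 * i, 2) for i in range(10)]

def coco_ap(ap_per_threshold):
    """Average per-threshold AP values into the headline COCO AP."""
    assert len(ap_per_threshold) == len(IOU_THRESHOLDS)
    return sum(ap_per_threshold) / len(ap_per_threshold)

print(IOU_THRESHOLDS)
```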
### Preprocessing
Images are resized to 640x640 pixels, rescaled to [0, 1], and normalized across the RGB channels with `image_mean=[0.485, 0.456, 0.406]` and `image_std=[0.229, 0.224, 0.225]`.
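A minimal sketch of the per-channel normalization step described above, using the stated `image_mean`/`image_std` (the 640x640 resize itself is elided):

```python
# ImageNet-style per-channel normalization: rescale 0-255 values to [0, 1],
# then subtract the channel mean and divide by the channel std.
IMAGE_MEAN = [0.485, 0.456, 0.406]
IMAGE_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb_0_255):
    """Normalize one RGB pixel given as 0-255 integers."""
    return [((c / 255.0) - m) / s
            for c, m, s in zip(rgb_0_255, IMAGE_MEAN, IMAGE_STD)]

print(normalize_pixel((255, 255, 255)))
```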
### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation
| Model | #Epochs | #Params (M) | GFLOPs | FPS_bs=1 | AP (val) | AP50 (val) | AP75 (val) | AP-s (val) | AP-m (val) | AP-l (val) |
|----------------------------|---------|-------------|--------|----------|--------|-----------|-----------|----------|----------|----------|
| RT-DETR-R18 | 72 | 20 | 60.7 | 217 | 46.5 | 63.8 | 50.4 | 28.4 | 49.8 | 63.0 |
| RT-DETR-R34 | 72 | 31 | 91.0 | 172 | 48.5 | 66.2 | 52.3 | 30.2 | 51.9 | 66.2 |
| RT-DETR R50 | 72 | 42 | 136 | 108 | 53.1 | 71.3 | 57.7 | 34.8 | 58.0 | 70.0 |
| RT-DETR R101| 72 | 76 | 259 | 74 | 54.3 | 72.7 | 58.6 | 36.0 | 58.8 | 72.1 |
| RT-DETR-R18 (Objects 365 pretrained) | 60 | 20 | 61 | 217 | 49.2 | 66.6 | 53.5 | 33.2 | 52.3 | 64.8 |
| RT-DETR-R50 (Objects 365 pretrained) | 24 | 42 | 136 | 108 | 55.3 | 73.4 | 60.1 | 37.9 | 59.9 | 71.8 |
| RT-DETR-R101 (Objects 365 pretrained) | 24 | 76 | 259 | 74 | 56.2 | 74.6 | 61.3 | 38.3 | 60.5 | 73.5 |
### Model Architecture and Objective

Overview of RT-DETR. We feed the features from the last three stages of the backbone into the encoder. The efficient hybrid
encoder transforms multi-scale features into a sequence of image features through the Attention-based Intra-scale Feature Interaction (AIFI)
and the CNN-based Cross-scale Feature Fusion (CCFF). Then, the uncertainty-minimal query selection selects a fixed number of encoder
features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object
queries to generate categories and boxes.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lv2023detrs,
title={DETRs Beat YOLOs on Real-time Object Detection},
author={Yian Zhao and Wenyu Lv and Shangliang Xu and Jinman Wei and Guanzhong Wang and Qingqing Dang and Yi Liu and Jie Chen},
year={2023},
eprint={2304.08069},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Sangbum Choi](https://huggingface.co/danelcsb)
[Pavel Iakubovskii](https://huggingface.co/qubvel-hf)
| [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
IDEA-Research/dab-detr-resnet-50 |
# Model Card for Model ID
## Table of Contents
1. [Model Details](#model-details)
2. [Model Sources](#model-sources)
3. [How to Get Started with the Model](#how-to-get-started-with-the-model)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Architecture and Objective](#model-architecture-and-objective)
7. [Citation](#citation)
## Model Details

> We present in this paper a novel query formulation using dynamic anchor boxes for DETR (DEtection TRansformer) and offer a deeper understanding of the role of queries in DETR. This new formulation directly uses box coordinates as queries in Transformer decoders and dynamically updates them layer-by-layer. Using box coordinates not only helps using explicit positional priors to improve the query-to-feature similarity and eliminate the slow training convergence issue in DETR, but also allows us to modulate the positional attention map using the box width and height information. Such a design makes it clear that queries in DETR can be implemented as performing soft ROI pooling layer-by-layer in a cascade manner. As a result, it leads to the best performance on MS-COCO benchmark among the DETR-like detection models under the same setting, e.g., AP 45.7% using ResNet50-DC5 as backbone trained in 50 epochs. We also conducted extensive experiments to confirm our analysis and verify the effectiveness of our methods.
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang
- **Funded by:** IDEA-Research
- **Shared by:** David Hajdu
- **Model type:** DAB-DETR
- **License:** Apache-2.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/IDEA-Research/DAB-DETR
- **Paper:** https://arxiv.org/abs/2201.12329
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
from PIL import Image
from transformers import AutoModelForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("IDEA-Research/dab-detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("IDEA-Research/dab-detr-resnet-50")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
This should output
```
cat: 0.87 [14.7, 49.39, 320.52, 469.28]
remote: 0.86 [41.08, 72.37, 173.39, 117.2]
cat: 0.86 [344.45, 19.43, 639.85, 367.86]
remote: 0.61 [334.27, 75.93, 367.92, 188.81]
couch: 0.59 [-0.04, 1.34, 639.9, 477.09]
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The DAB-DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training Procedure
Following Deformable DETR and Conditional DETR, we use 300 anchors as queries. We select the 300 predicted boxes and labels with the largest classification logits for evaluation as well. We also use focal loss (Lin et al., 2020) with α = 0.25, γ = 2 for classification. The same loss terms are used in bipartite matching and in the final loss calculation, but with different coefficients: the classification loss has coefficient 2.0 in bipartite matching but 1.0 in the final loss. The L1 loss with coefficient 5.0 and the GIoU loss (Rezatofighi et al., 2019) with coefficient 2.0 are the same in both the matching and the final loss calculation. All models are trained on 16 GPUs with 1 image per GPU, and AdamW (Loshchilov & Hutter, 2018) is used for training with weight decay 1e-4. The learning rates for the backbone and the other modules are set to 1e-5 and 1e-4 respectively. We train our models for 50 epochs and drop the learning rate by a factor of 0.1 after 40 epochs. All models are trained on Nvidia A100 GPUs. We search hyperparameters with batch size 64, and all results in the paper are reported with batch size 16.
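The coefficient scheme described above can be sketched as follows (an illustrative summary of the stated weights, not the reference implementation):

```python
# Classification weight differs between bipartite matching (2.0) and the
# final loss (1.0); the L1 (5.0) and GIoU (2.0) weights are shared by both.
MATCHING_COST_WEIGHTS = {"class": 2.0, "bbox_l1": 5.0, "giou": 2.0}
FINAL_LOSS_WEIGHTS = {"class": 1.0, "bbox_l1": 5.0, "giou": 2.0}

def weighted_sum(weights, terms):
    """Combine raw loss terms with their coefficients."""
    return sum(weights[name] * value for name, value in terms.items())

terms = {"class": 0.3, "bbox_l1": 0.1, "giou": 0.2}
print(weighted_sum(FINAL_LOSS_WEIGHTS, terms))
```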
#### Preprocessing
Images are resized/rescaled such that the shortest side is at least 480 and at most 800 pixels and the longest side is at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
| **Key** | **Value** |
|-----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|
| **activation_dropout** | `0.0` |
| **activation_function** | `prelu` |
| **attention_dropout** | `0.0` |
| **auxiliary_loss** | `false` |
| **backbone** | `resnet50` |
| **bbox_cost** | `5` |
| **bbox_loss_coefficient** | `5` |
| **class_cost** | `2` |
| **cls_loss_coefficient** | `2` |
| **decoder_attention_heads** | `8` |
| **decoder_ffn_dim** | `2048` |
| **decoder_layers** | `6` |
| **dropout** | `0.1` |
| **encoder_attention_heads** | `8` |
| **encoder_ffn_dim** | `2048` |
| **encoder_layers** | `6` |
| **focal_alpha** | `0.25` |
| **giou_cost** | `2` |
| **giou_loss_coefficient** | `2` |
| **hidden_size** | `256` |
| **init_std** | `0.02` |
| **init_xavier_std** | `1.0` |
| **initializer_bias_prior_prob** | `null` |
| **keep_query_pos** | `false` |
| **normalize_before** | `false` |
| **num_hidden_layers** | `6` |
| **num_patterns** | `0` |
| **num_queries** | `300` |
| **query_dim** | `4` |
| **random_refpoints_xy** | `false` |
| **sine_position_embedding_scale** | `null` |
| **temperature_height** | `20` |
| **temperature_width** | `20` |
## Evaluation

### Model Architecture and Objective

Overview of DAB-DETR. We extract image spatial features using a CNN backbone followed with Transformer encoders to refine the CNN features.
Then dual queries, including positional queries (anchor boxes) and content queries (decoder embeddings), are fed into the decoder to probe the objects which correspond to the anchors and have similar patterns with the content queries. The dual queries are updated layer-by-layer to get close to the target ground-truth objects gradually.
The outputs of the final decoder layer are used to predict the objects with labels and boxes by prediction heads, and then a bipartite graph matching is conducted to calculate loss as in DETR.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{
liu2022dabdetr,
title={{DAB}-{DETR}: Dynamic Anchor Boxes are Better Queries for {DETR}},
author={Shilong Liu and Feng Li and Hao Zhang and Xiao Yang and Xianbiao Qi and Hang Su and Jun Zhu and Lei Zhang},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=oMI9PjOb9Jl}
}
```
## Model Card Authors
[David Hajdu](https://huggingface.co/davidhajdu)
| [
"n/a",
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"n/a",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"n/a",
"backpack",
"umbrella",
"n/a",
"n/a",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"n/a",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"n/a",
"dining table",
"n/a",
"n/a",
"toilet",
"n/a",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"n/a",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
qubvel-hf/debug_no_pad |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_no_pad
This model is a fine-tuned version of [sbchoi/rtdetr_r50vd_coco_o365](https://huggingface.co/sbchoi/rtdetr_r50vd_coco_o365) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7188
- Map: 0.2736
- Map 50: 0.5376
- Map 75: 0.2345
- Map Small: 0.0676
- Map Medium: 0.2622
- Map Large: 0.3783
- Mar 1: 0.2586
- Mar 10: 0.457
- Mar 100: 0.5147
- Mar Small: 0.1125
- Mar Medium: 0.4717
- Mar Large: 0.5986
- Map Coverall: 0.2102
- Mar 100 Coverall: 0.5846
- Map Face Shield: 0.3488
- Mar 100 Face Shield: 0.6824
- Map Gloves: 0.3656
- Mar 100 Gloves: 0.5271
- Map Goggles: 0.1612
- Mar 100 Goggles: 0.3345
- Map Mask: 0.2823
- Mar 100 Mask: 0.4451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 107 | 101.3493 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0004 | 0.0026 | 0.0098 | 0.0201 | 0.0144 | 0.0106 | 0.0257 | 0.0002 | 0.0387 | 0.0 | 0.0139 | 0.0 | 0.0018 | 0.0 | 0.0462 | 0.0 | 0.0 |
| No log | 2.0 | 214 | 29.9261 | 0.0003 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0003 | 0.0019 | 0.0114 | 0.0182 | 0.0 | 0.0133 | 0.0184 | 0.0013 | 0.091 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 3.0 | 321 | 21.9254 | 0.0031 | 0.0095 | 0.0014 | 0.0023 | 0.008 | 0.003 | 0.0233 | 0.0687 | 0.1387 | 0.0058 | 0.0927 | 0.1278 | 0.0119 | 0.3468 | 0.0007 | 0.1494 | 0.0009 | 0.0281 | 0.0 | 0.0308 | 0.0018 | 0.1382 |
| No log | 4.0 | 428 | 17.4624 | 0.0317 | 0.0912 | 0.0132 | 0.0206 | 0.0307 | 0.0303 | 0.0537 | 0.1415 | 0.2005 | 0.0667 | 0.1778 | 0.2273 | 0.0349 | 0.3063 | 0.0077 | 0.2443 | 0.0105 | 0.1442 | 0.0012 | 0.0754 | 0.1044 | 0.2324 |
| 97.4578 | 5.0 | 535 | 16.2654 | 0.0654 | 0.1715 | 0.0386 | 0.031 | 0.0647 | 0.0918 | 0.1116 | 0.2112 | 0.2733 | 0.0983 | 0.2249 | 0.358 | 0.0664 | 0.445 | 0.0501 | 0.3405 | 0.0282 | 0.1335 | 0.031 | 0.1692 | 0.151 | 0.2782 |
| 97.4578 | 6.0 | 642 | 15.4886 | 0.0997 | 0.2444 | 0.0711 | 0.0568 | 0.0907 | 0.1367 | 0.1635 | 0.2746 | 0.3398 | 0.1459 | 0.2877 | 0.4549 | 0.114 | 0.4734 | 0.0637 | 0.3797 | 0.0467 | 0.2299 | 0.0762 | 0.2662 | 0.1982 | 0.3498 |
| 97.4578 | 7.0 | 749 | 15.3647 | 0.1205 | 0.3157 | 0.0659 | 0.0501 | 0.1134 | 0.1725 | 0.1602 | 0.2869 | 0.3296 | 0.1181 | 0.2841 | 0.4334 | 0.1443 | 0.3901 | 0.0689 | 0.4139 | 0.1082 | 0.2714 | 0.0928 | 0.2523 | 0.1882 | 0.3204 |
| 97.4578 | 8.0 | 856 | 15.2431 | 0.1016 | 0.2715 | 0.0558 | 0.0301 | 0.0905 | 0.146 | 0.1569 | 0.2829 | 0.344 | 0.129 | 0.2983 | 0.4391 | 0.1136 | 0.4009 | 0.0707 | 0.438 | 0.0902 | 0.2915 | 0.0755 | 0.2923 | 0.1579 | 0.2973 |
| 97.4578 | 9.0 | 963 | 14.7396 | 0.1497 | 0.3629 | 0.1013 | 0.0641 | 0.1617 | 0.1877 | 0.1691 | 0.3051 | 0.3433 | 0.2003 | 0.3204 | 0.4348 | 0.1987 | 0.4293 | 0.1105 | 0.3861 | 0.1467 | 0.3379 | 0.1034 | 0.2462 | 0.1894 | 0.3169 |
| 27.2963 | 10.0 | 1070 | 14.2373 | 0.1549 | 0.3715 | 0.1124 | 0.0798 | 0.163 | 0.1902 | 0.192 | 0.3368 | 0.4073 | 0.2642 | 0.3594 | 0.5035 | 0.1891 | 0.4977 | 0.1105 | 0.4646 | 0.1507 | 0.3763 | 0.1091 | 0.3108 | 0.2154 | 0.3871 |
| 27.2963 | 11.0 | 1177 | 14.4769 | 0.1413 | 0.3176 | 0.1085 | 0.0477 | 0.1304 | 0.2084 | 0.1892 | 0.3246 | 0.3797 | 0.195 | 0.3323 | 0.4877 | 0.1858 | 0.4775 | 0.1162 | 0.4532 | 0.1387 | 0.3683 | 0.1182 | 0.2569 | 0.1477 | 0.3427 |
| 27.2963 | 12.0 | 1284 | 13.9935 | 0.1922 | 0.4287 | 0.146 | 0.0793 | 0.1931 | 0.2346 | 0.2099 | 0.3566 | 0.4036 | 0.2114 | 0.3472 | 0.5202 | 0.2718 | 0.5117 | 0.1478 | 0.4392 | 0.1856 | 0.3893 | 0.1322 | 0.3 | 0.2236 | 0.3778 |
| 27.2963 | 13.0 | 1391 | 13.7867 | 0.1745 | 0.4012 | 0.1326 | 0.0796 | 0.1727 | 0.2301 | 0.1965 | 0.3503 | 0.4054 | 0.2274 | 0.352 | 0.5241 | 0.2083 | 0.5086 | 0.1234 | 0.4633 | 0.2004 | 0.3924 | 0.1076 | 0.2754 | 0.2327 | 0.3871 |
| 27.2963 | 14.0 | 1498 | 13.7474 | 0.18 | 0.3989 | 0.1445 | 0.0579 | 0.1639 | 0.241 | 0.1995 | 0.3449 | 0.3898 | 0.2128 | 0.3156 | 0.5022 | 0.1854 | 0.4761 | 0.1402 | 0.4038 | 0.2065 | 0.4094 | 0.1273 | 0.2692 | 0.2409 | 0.3902 |
| 24.8642 | 15.0 | 1605 | 13.4012 | 0.1958 | 0.4352 | 0.157 | 0.0757 | 0.1788 | 0.2823 | 0.2073 | 0.3522 | 0.4008 | 0.212 | 0.3359 | 0.5287 | 0.198 | 0.5099 | 0.1506 | 0.4101 | 0.2134 | 0.3933 | 0.1558 | 0.2831 | 0.2611 | 0.4076 |
| 24.8642 | 16.0 | 1712 | 13.3569 | 0.19 | 0.4224 | 0.1454 | 0.1542 | 0.2044 | 0.2407 | 0.2128 | 0.3568 | 0.3989 | 0.2681 | 0.3554 | 0.5058 | 0.1902 | 0.4977 | 0.1678 | 0.3911 | 0.225 | 0.3978 | 0.1325 | 0.2923 | 0.2347 | 0.4156 |
| 24.8642 | 17.0 | 1819 | 13.4929 | 0.1809 | 0.3983 | 0.1353 | 0.0679 | 0.1746 | 0.267 | 0.2142 | 0.3496 | 0.3898 | 0.2683 | 0.3211 | 0.5047 | 0.2047 | 0.5185 | 0.174 | 0.4329 | 0.1679 | 0.321 | 0.1303 | 0.2769 | 0.2276 | 0.3996 |
| 24.8642 | 18.0 | 1926 | 13.4921 | 0.1789 | 0.3853 | 0.1477 | 0.057 | 0.1655 | 0.2511 | 0.2202 | 0.3641 | 0.4147 | 0.2614 | 0.349 | 0.5348 | 0.2266 | 0.55 | 0.1629 | 0.4759 | 0.2087 | 0.3857 | 0.138 | 0.2908 | 0.1584 | 0.3711 |
| 23.551 | 19.0 | 2033 | 13.3617 | 0.1824 | 0.3978 | 0.1471 | 0.0609 | 0.1695 | 0.2607 | 0.2263 | 0.388 | 0.4319 | 0.2304 | 0.3643 | 0.555 | 0.2282 | 0.5536 | 0.1568 | 0.4886 | 0.2076 | 0.4094 | 0.1415 | 0.3046 | 0.1776 | 0.4031 |
| 23.551 | 20.0 | 2140 | 13.3499 | 0.1834 | 0.4074 | 0.1496 | 0.0734 | 0.156 | 0.2478 | 0.2276 | 0.3903 | 0.4373 | 0.2858 | 0.3555 | 0.5568 | 0.2326 | 0.5392 | 0.1721 | 0.4987 | 0.2169 | 0.4237 | 0.1402 | 0.3508 | 0.1553 | 0.3742 |
| 23.551 | 21.0 | 2247 | 13.4009 | 0.1858 | 0.3991 | 0.1394 | 0.0578 | 0.1748 | 0.2553 | 0.2214 | 0.375 | 0.4254 | 0.1951 | 0.3547 | 0.5506 | 0.2349 | 0.5473 | 0.1842 | 0.4861 | 0.2227 | 0.4223 | 0.1443 | 0.3185 | 0.1428 | 0.3529 |
| 23.551 | 22.0 | 2354 | 13.4129 | 0.1824 | 0.3995 | 0.1459 | 0.0715 | 0.1394 | 0.2581 | 0.2234 | 0.3735 | 0.4277 | 0.2243 | 0.348 | 0.5472 | 0.2425 | 0.5491 | 0.1465 | 0.4949 | 0.2072 | 0.417 | 0.133 | 0.3015 | 0.1826 | 0.376 |
| 23.551 | 23.0 | 2461 | 13.4100 | 0.1902 | 0.4141 | 0.1602 | 0.0641 | 0.171 | 0.2609 | 0.2162 | 0.3732 | 0.4327 | 0.2437 | 0.368 | 0.5519 | 0.2405 | 0.5644 | 0.1554 | 0.5139 | 0.2313 | 0.4071 | 0.1506 | 0.3185 | 0.1733 | 0.3596 |
| 23.8193 | 24.0 | 2568 | 13.3091 | 0.1857 | 0.4151 | 0.1452 | 0.0708 | 0.1669 | 0.2486 | 0.214 | 0.3653 | 0.4232 | 0.2201 | 0.358 | 0.5354 | 0.2348 | 0.5676 | 0.145 | 0.4899 | 0.2294 | 0.4004 | 0.1476 | 0.2938 | 0.1717 | 0.3644 |
| 23.8193 | 25.0 | 2675 | 13.2781 | 0.2006 | 0.4435 | 0.1611 | 0.065 | 0.1657 | 0.2624 | 0.215 | 0.3741 | 0.4223 | 0.1951 | 0.351 | 0.5465 | 0.2609 | 0.5595 | 0.1831 | 0.5025 | 0.2365 | 0.3982 | 0.1364 | 0.2877 | 0.1863 | 0.3636 |
| 23.8193 | 26.0 | 2782 | 13.2183 | 0.1951 | 0.4333 | 0.1577 | 0.063 | 0.1709 | 0.2501 | 0.22 | 0.3751 | 0.4286 | 0.1712 | 0.3619 | 0.5411 | 0.2431 | 0.5734 | 0.163 | 0.4772 | 0.2165 | 0.4013 | 0.1581 | 0.3215 | 0.1946 | 0.3693 |
| 23.8193 | 27.0 | 2889 | 13.2704 | 0.2009 | 0.4453 | 0.1559 | 0.0626 | 0.1881 | 0.2605 | 0.2209 | 0.3766 | 0.4293 | 0.1594 | 0.371 | 0.5524 | 0.2635 | 0.5721 | 0.1797 | 0.5127 | 0.2159 | 0.4031 | 0.159 | 0.2938 | 0.1864 | 0.3649 |
| 23.8193 | 28.0 | 2996 | 13.1710 | 0.2062 | 0.4625 | 0.1583 | 0.0722 | 0.1803 | 0.2726 | 0.2227 | 0.3843 | 0.433 | 0.2041 | 0.3729 | 0.5589 | 0.2826 | 0.5838 | 0.1861 | 0.5101 | 0.2225 | 0.3973 | 0.1413 | 0.3015 | 0.1986 | 0.372 |
| 23.7672 | 29.0 | 3103 | 13.1248 | 0.2038 | 0.4521 | 0.1679 | 0.0703 | 0.1655 | 0.2771 | 0.2255 | 0.3828 | 0.4367 | 0.1904 | 0.3678 | 0.5661 | 0.2629 | 0.5757 | 0.1836 | 0.5241 | 0.2332 | 0.4027 | 0.1466 | 0.3031 | 0.1928 | 0.3778 |
| 23.7672 | 30.0 | 3210 | 13.1506 | 0.2026 | 0.4492 | 0.1638 | 0.0726 | 0.1655 | 0.2732 | 0.2221 | 0.3782 | 0.4312 | 0.1892 | 0.3446 | 0.5587 | 0.2662 | 0.5721 | 0.1827 | 0.519 | 0.2317 | 0.3991 | 0.1477 | 0.2938 | 0.1846 | 0.372 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
qubvel-hf/rtdetr-r50-cppe5-finetune |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtdetr-r50-cppe5-finetune
This model is a fine-tuned version of [PekingU/rtdetr_r50vd_coco_o365](https://huggingface.co/PekingU/rtdetr_r50vd_coco_o365) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.7524
- Map: 0.5298
- Map 50: 0.7903
- Map 75: 0.5632
- Map Small: 0.5092
- Map Medium: 0.4212
- Map Large: 0.6655
- Mar 1: 0.4001
- Mar 10: 0.6526
- Mar 100: 0.711
- Mar Small: 0.6038
- Mar Medium: 0.5835
- Mar Large: 0.8378
- Map Coverall: 0.6271
- Mar 100 Coverall: 0.8308
- Map Face Shield: 0.4839
- Mar 100 Face Shield: 0.7706
- Map Gloves: 0.5775
- Mar 100 Gloves: 0.6492
- Map Goggles: 0.425
- Mar 100 Goggles: 0.6103
- Map Mask: 0.5354
- Mar 100 Mask: 0.6941
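The Map 50 and Map 75 entries above are COCO-style average precision at fixed IoU thresholds. A minimal IoU check illustrates what those thresholds mean (the boxes are toy values, not taken from this evaluation):

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union for (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

pred, truth = (10, 10, 50, 50), (15, 15, 55, 55)
overlap = iou(pred, truth)  # ~0.62 for these boxes
# A detection counts as a true positive for "Map 50" when IoU >= 0.50,
# but only for the stricter "Map 75" when IoU >= 0.75.
hit_at_50 = overlap >= 0.50
hit_at_75 = overlap >= 0.75
```

This is why Map 75 (0.5632) is well below Map 50 (0.7903) here: many detections overlap their ground truth enough for the loose threshold but not the strict one.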
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 107 | 216.6647 | 0.0037 | 0.0089 | 0.0022 | 0.0032 | 0.0183 | 0.014 | 0.0242 | 0.1046 | 0.1966 | 0.0405 | 0.1831 | 0.4092 | 0.0056 | 0.2649 | 0.001 | 0.1962 | 0.0021 | 0.0719 | 0.0008 | 0.2215 | 0.0091 | 0.2284 |
| No log | 2.0 | 214 | 96.4364 | 0.0294 | 0.0559 | 0.0257 | 0.0169 | 0.0297 | 0.0299 | 0.0707 | 0.1835 | 0.298 | 0.0948 | 0.2203 | 0.4591 | 0.0888 | 0.5527 | 0.001 | 0.3203 | 0.021 | 0.1259 | 0.0014 | 0.2154 | 0.0346 | 0.2756 |
| No log | 3.0 | 321 | 28.5504 | 0.1576 | 0.294 | 0.1448 | 0.0752 | 0.0925 | 0.2629 | 0.1621 | 0.3534 | 0.4661 | 0.347 | 0.3964 | 0.6546 | 0.4399 | 0.6518 | 0.0021 | 0.3797 | 0.1282 | 0.3866 | 0.0045 | 0.4 | 0.2132 | 0.5124 |
| No log | 4.0 | 428 | 17.1997 | 0.2324 | 0.408 | 0.2295 | 0.1228 | 0.1816 | 0.3288 | 0.2317 | 0.4133 | 0.5 | 0.3527 | 0.4438 | 0.6543 | 0.5101 | 0.6396 | 0.0093 | 0.4671 | 0.1827 | 0.4513 | 0.1553 | 0.4062 | 0.3045 | 0.5356 |
| 117.1144 | 5.0 | 535 | 14.8812 | 0.2495 | 0.4498 | 0.2479 | 0.1261 | 0.1962 | 0.4086 | 0.253 | 0.4388 | 0.5189 | 0.3485 | 0.4683 | 0.7111 | 0.5078 | 0.6752 | 0.0291 | 0.5013 | 0.2265 | 0.4491 | 0.1715 | 0.4246 | 0.3129 | 0.5444 |
| 117.1144 | 6.0 | 642 | 13.5348 | 0.2572 | 0.4698 | 0.2541 | 0.1377 | 0.1905 | 0.424 | 0.2532 | 0.4315 | 0.4895 | 0.314 | 0.4481 | 0.6649 | 0.5166 | 0.6716 | 0.026 | 0.4873 | 0.2391 | 0.3754 | 0.1866 | 0.3754 | 0.3178 | 0.5378 |
| 117.1144 | 7.0 | 749 | 12.7545 | 0.2812 | 0.5035 | 0.2612 | 0.1618 | 0.2143 | 0.4653 | 0.2595 | 0.4568 | 0.496 | 0.3394 | 0.4438 | 0.6648 | 0.5152 | 0.6815 | 0.0918 | 0.4949 | 0.2504 | 0.3759 | 0.208 | 0.3954 | 0.3405 | 0.5324 |
| 117.1144 | 8.0 | 856 | 12.5330 | 0.2909 | 0.5328 | 0.2687 | 0.1568 | 0.2262 | 0.4868 | 0.2831 | 0.4625 | 0.5035 | 0.3209 | 0.4428 | 0.686 | 0.5059 | 0.6838 | 0.1762 | 0.5038 | 0.2528 | 0.3978 | 0.1905 | 0.4062 | 0.3289 | 0.5258 |
| 117.1144 | 9.0 | 963 | 12.2873 | 0.3023 | 0.5355 | 0.2927 | 0.1621 | 0.2502 | 0.494 | 0.2851 | 0.4696 | 0.5064 | 0.3301 | 0.452 | 0.6736 | 0.5276 | 0.6932 | 0.1696 | 0.4899 | 0.2633 | 0.4085 | 0.2249 | 0.4154 | 0.326 | 0.5249 |
| 16.4463 | 10.0 | 1070 | 12.2585 | 0.3095 | 0.5506 | 0.3029 | 0.1738 | 0.2405 | 0.4996 | 0.2901 | 0.4721 | 0.5105 | 0.3271 | 0.4558 | 0.6864 | 0.5196 | 0.6892 | 0.2225 | 0.5241 | 0.264 | 0.4022 | 0.2102 | 0.4077 | 0.3309 | 0.5293 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
NotSarahConnor1984/detr_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_cppe5
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2426
- Map: 0.268
- Map 50: 0.5294
- Map 75: 0.2419
- Map Small: 0.1163
- Map Medium: 0.2288
- Map Large: 0.5006
- Mar 1: 0.2865
- Mar 10: 0.4475
- Mar 100: 0.4749
- Mar Small: 0.3002
- Mar Medium: 0.4623
- Mar Large: 0.7345
- Map Coverall: 0.5546
- Mar 100 Coverall: 0.6736
- Map Face Shield: 0.1674
- Mar 100 Face Shield: 0.4833
- Map Gloves: 0.1944
- Mar 100 Gloves: 0.3662
- Map Goggles: 0.1199
- Mar 100 Goggles: 0.4421
- Map Mask: 0.3036
- Mar 100 Mask: 0.4092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
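Unlike the linear schedulers in the other runs on this page, this run uses cosine decay with no warmup. A plain-Python sketch of that shape (the 3060-step total is inferred from the training table below, 102 steps per epoch over 30 epochs; this illustrates the standard cosine-decay curve, not the Trainer's internal code):

```python
import math

def cosine_lr(step: int, base_lr: float = 5e-5, total_steps: int = 3060) -> float:
    """Cosine decay from base_lr down to zero over training."""
    progress = min(step / total_steps, 1.0)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

start = cosine_lr(0)       # full rate (5e-5) at the first step
halfway = cosine_lr(1530)  # ~half the rate at mid-training
end = cosine_lr(3060)      # ~0: the rate has decayed away
```

Cosine decay keeps the learning rate near its peak for longer than a linear ramp-down, then tapers quickly toward the end, which often smooths the final epochs of fine-tuning.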
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 102 | 2.3847 | 0.008 | 0.0259 | 0.0031 | 0.0071 | 0.0113 | 0.0323 | 0.0303 | 0.1094 | 0.1661 | 0.1002 | 0.1711 | 0.2087 | 0.0105 | 0.207 | 0.0034 | 0.0681 | 0.0078 | 0.2167 | 0.0025 | 0.0807 | 0.0156 | 0.2583 |
| No log | 2.0 | 204 | 2.1346 | 0.0292 | 0.0819 | 0.0147 | 0.0126 | 0.0268 | 0.042 | 0.0773 | 0.1631 | 0.2064 | 0.0815 | 0.1912 | 0.2543 | 0.0873 | 0.3791 | 0.0083 | 0.1153 | 0.0116 | 0.2069 | 0.0 | 0.0 | 0.0388 | 0.3306 |
| No log | 3.0 | 306 | 2.0183 | 0.0546 | 0.1289 | 0.0413 | 0.0288 | 0.0352 | 0.0606 | 0.1234 | 0.2329 | 0.2765 | 0.1078 | 0.2436 | 0.3393 | 0.1671 | 0.5567 | 0.0187 | 0.2167 | 0.0094 | 0.2157 | 0.0346 | 0.0912 | 0.0431 | 0.3024 |
| No log | 4.0 | 408 | 1.9305 | 0.0779 | 0.1742 | 0.0587 | 0.02 | 0.0476 | 0.1114 | 0.1417 | 0.2692 | 0.3059 | 0.0926 | 0.2783 | 0.4861 | 0.27 | 0.59 | 0.0335 | 0.3208 | 0.0214 | 0.2377 | 0.0036 | 0.086 | 0.061 | 0.2951 |
| 3.3072 | 5.0 | 510 | 1.7155 | 0.1186 | 0.2742 | 0.0935 | 0.0303 | 0.0871 | 0.2195 | 0.1434 | 0.3145 | 0.3401 | 0.1573 | 0.3076 | 0.5985 | 0.3762 | 0.5582 | 0.0213 | 0.2569 | 0.0463 | 0.2892 | 0.0081 | 0.2333 | 0.1414 | 0.3626 |
| 3.3072 | 6.0 | 612 | 1.6430 | 0.1371 | 0.301 | 0.1037 | 0.0347 | 0.1046 | 0.2527 | 0.1733 | 0.3374 | 0.3664 | 0.1605 | 0.3393 | 0.5973 | 0.4248 | 0.601 | 0.0216 | 0.2875 | 0.0572 | 0.3059 | 0.0243 | 0.2877 | 0.1577 | 0.35 |
| 3.3072 | 7.0 | 714 | 1.5879 | 0.1537 | 0.345 | 0.1218 | 0.0534 | 0.1358 | 0.2768 | 0.1933 | 0.3528 | 0.3807 | 0.1858 | 0.3554 | 0.6497 | 0.4326 | 0.6124 | 0.034 | 0.3486 | 0.0719 | 0.2917 | 0.0246 | 0.2877 | 0.2056 | 0.3631 |
| 3.3072 | 8.0 | 816 | 1.5310 | 0.1649 | 0.3587 | 0.1398 | 0.0579 | 0.1333 | 0.3036 | 0.1925 | 0.3729 | 0.3946 | 0.1854 | 0.3742 | 0.6476 | 0.4674 | 0.6353 | 0.0415 | 0.3722 | 0.0933 | 0.301 | 0.0387 | 0.3333 | 0.1836 | 0.3311 |
| 3.3072 | 9.0 | 918 | 1.4758 | 0.1789 | 0.3922 | 0.1478 | 0.0668 | 0.1372 | 0.3167 | 0.2241 | 0.3745 | 0.405 | 0.2063 | 0.3752 | 0.6691 | 0.4539 | 0.6199 | 0.063 | 0.4139 | 0.1041 | 0.3108 | 0.0375 | 0.3053 | 0.236 | 0.3752 |
| 1.4864 | 10.0 | 1020 | 1.4622 | 0.1735 | 0.3827 | 0.1333 | 0.05 | 0.1411 | 0.354 | 0.2103 | 0.371 | 0.3951 | 0.183 | 0.3664 | 0.6752 | 0.4784 | 0.6313 | 0.053 | 0.3903 | 0.1182 | 0.3186 | 0.0195 | 0.3053 | 0.1985 | 0.3301 |
| 1.4864 | 11.0 | 1122 | 1.4252 | 0.1858 | 0.4134 | 0.1496 | 0.0591 | 0.1632 | 0.3561 | 0.2227 | 0.3873 | 0.4144 | 0.1911 | 0.4137 | 0.6645 | 0.4794 | 0.6488 | 0.0752 | 0.4153 | 0.1131 | 0.3034 | 0.0292 | 0.3439 | 0.2319 | 0.3607 |
| 1.4864 | 12.0 | 1224 | 1.3893 | 0.1973 | 0.4218 | 0.1643 | 0.0749 | 0.169 | 0.4139 | 0.242 | 0.4054 | 0.4302 | 0.2226 | 0.4175 | 0.6991 | 0.4854 | 0.6413 | 0.0662 | 0.4292 | 0.1319 | 0.3397 | 0.0503 | 0.3561 | 0.2529 | 0.3845 |
| 1.4864 | 13.0 | 1326 | 1.3891 | 0.1998 | 0.431 | 0.1596 | 0.0675 | 0.1829 | 0.3762 | 0.2277 | 0.3962 | 0.4222 | 0.1979 | 0.4311 | 0.7011 | 0.504 | 0.6428 | 0.0911 | 0.4333 | 0.1384 | 0.3353 | 0.0552 | 0.3702 | 0.2101 | 0.3296 |
| 1.4864 | 14.0 | 1428 | 1.3981 | 0.193 | 0.42 | 0.1614 | 0.0698 | 0.1693 | 0.3523 | 0.235 | 0.3978 | 0.4271 | 0.2379 | 0.42 | 0.6729 | 0.4962 | 0.6557 | 0.0681 | 0.4278 | 0.136 | 0.3451 | 0.0493 | 0.3298 | 0.2155 | 0.3772 |
| 1.2306 | 15.0 | 1530 | 1.3472 | 0.217 | 0.4617 | 0.1785 | 0.0857 | 0.1817 | 0.4264 | 0.2416 | 0.4046 | 0.4329 | 0.2377 | 0.4143 | 0.7007 | 0.5137 | 0.6363 | 0.0968 | 0.4611 | 0.1571 | 0.3475 | 0.0484 | 0.3509 | 0.2689 | 0.3684 |
| 1.2306 | 16.0 | 1632 | 1.3450 | 0.227 | 0.4747 | 0.1915 | 0.0861 | 0.1891 | 0.4373 | 0.2521 | 0.4104 | 0.439 | 0.2503 | 0.4112 | 0.7344 | 0.5183 | 0.6428 | 0.1179 | 0.4514 | 0.1589 | 0.3289 | 0.0684 | 0.3912 | 0.2717 | 0.3806 |
| 1.2306 | 17.0 | 1734 | 1.2998 | 0.2359 | 0.4833 | 0.202 | 0.1089 | 0.1972 | 0.4426 | 0.2661 | 0.4303 | 0.4475 | 0.2792 | 0.4221 | 0.6999 | 0.5251 | 0.6463 | 0.12 | 0.4556 | 0.1646 | 0.3466 | 0.0857 | 0.393 | 0.284 | 0.3961 |
| 1.2306 | 18.0 | 1836 | 1.2995 | 0.2376 | 0.4866 | 0.1989 | 0.0926 | 0.2056 | 0.4487 | 0.2711 | 0.4325 | 0.4575 | 0.2798 | 0.4319 | 0.7195 | 0.522 | 0.6542 | 0.1299 | 0.475 | 0.1636 | 0.3544 | 0.0838 | 0.4018 | 0.2884 | 0.4019 |
| 1.2306 | 19.0 | 1938 | 1.2998 | 0.2362 | 0.4948 | 0.1954 | 0.1036 | 0.1905 | 0.4647 | 0.2563 | 0.4277 | 0.4446 | 0.249 | 0.4216 | 0.7165 | 0.5308 | 0.6672 | 0.1334 | 0.4722 | 0.1829 | 0.3407 | 0.0721 | 0.3772 | 0.2617 | 0.3655 |
| 1.0733 | 20.0 | 2040 | 1.2773 | 0.2513 | 0.5082 | 0.2298 | 0.1057 | 0.2148 | 0.4873 | 0.2723 | 0.4393 | 0.4678 | 0.2749 | 0.4514 | 0.7437 | 0.5342 | 0.6652 | 0.1499 | 0.4556 | 0.1754 | 0.3534 | 0.1101 | 0.4561 | 0.287 | 0.4087 |
| 1.0733 | 21.0 | 2142 | 1.2668 | 0.2516 | 0.5077 | 0.2323 | 0.1048 | 0.2104 | 0.4929 | 0.2758 | 0.4353 | 0.4592 | 0.2787 | 0.4287 | 0.7393 | 0.541 | 0.6692 | 0.1386 | 0.4653 | 0.1778 | 0.3525 | 0.1074 | 0.4018 | 0.2933 | 0.4073 |
| 1.0733 | 22.0 | 2244 | 1.2665 | 0.2496 | 0.5166 | 0.2143 | 0.114 | 0.2045 | 0.4759 | 0.2609 | 0.4314 | 0.454 | 0.2708 | 0.4246 | 0.7292 | 0.5355 | 0.6577 | 0.1393 | 0.4556 | 0.182 | 0.3657 | 0.1069 | 0.4 | 0.2842 | 0.3913 |
| 1.0733 | 23.0 | 2346 | 1.2512 | 0.2585 | 0.5258 | 0.2298 | 0.1196 | 0.2121 | 0.4884 | 0.2789 | 0.4465 | 0.4695 | 0.2991 | 0.4453 | 0.7262 | 0.5455 | 0.6672 | 0.1491 | 0.4833 | 0.1899 | 0.373 | 0.1149 | 0.4211 | 0.2931 | 0.4029 |
| 1.0733 | 24.0 | 2448 | 1.2511 | 0.2639 | 0.5275 | 0.2388 | 0.1198 | 0.2218 | 0.511 | 0.2845 | 0.4464 | 0.47 | 0.2911 | 0.4511 | 0.7377 | 0.5482 | 0.6657 | 0.1549 | 0.4694 | 0.192 | 0.3725 | 0.125 | 0.4386 | 0.2994 | 0.4039 |
| 0.9823 | 25.0 | 2550 | 1.2495 | 0.2629 | 0.5392 | 0.2309 | 0.1173 | 0.2213 | 0.4926 | 0.2828 | 0.4429 | 0.467 | 0.2888 | 0.4478 | 0.7363 | 0.549 | 0.6672 | 0.1633 | 0.4792 | 0.1931 | 0.3652 | 0.1181 | 0.4263 | 0.2908 | 0.3971 |
| 0.9823 | 26.0 | 2652 | 1.2470 | 0.2653 | 0.5276 | 0.2364 | 0.1136 | 0.2258 | 0.5082 | 0.2884 | 0.4486 | 0.4715 | 0.3017 | 0.4567 | 0.7313 | 0.5535 | 0.6701 | 0.1641 | 0.475 | 0.192 | 0.3672 | 0.1162 | 0.4368 | 0.3007 | 0.4083 |
| 0.9823 | 27.0 | 2754 | 1.2471 | 0.2661 | 0.5287 | 0.2366 | 0.1138 | 0.227 | 0.5013 | 0.2809 | 0.4483 | 0.4736 | 0.2986 | 0.4636 | 0.7286 | 0.5519 | 0.6711 | 0.1687 | 0.4806 | 0.1934 | 0.3676 | 0.1135 | 0.4404 | 0.3031 | 0.4083 |
| 0.9823 | 28.0 | 2856 | 1.2434 | 0.2673 | 0.5291 | 0.242 | 0.1156 | 0.229 | 0.5028 | 0.2866 | 0.4462 | 0.4745 | 0.3008 | 0.461 | 0.7367 | 0.5555 | 0.6736 | 0.1651 | 0.4806 | 0.1951 | 0.3662 | 0.1179 | 0.4421 | 0.3028 | 0.4102 |
| 0.9823 | 29.0 | 2958 | 1.2427 | 0.2676 | 0.5272 | 0.2425 | 0.116 | 0.2286 | 0.5 | 0.2863 | 0.4472 | 0.4745 | 0.299 | 0.4623 | 0.7343 | 0.554 | 0.6721 | 0.1675 | 0.4833 | 0.1942 | 0.3667 | 0.1195 | 0.4404 | 0.3027 | 0.4102 |
| 0.9316 | 30.0 | 3060 | 1.2426 | 0.268 | 0.5294 | 0.2419 | 0.1163 | 0.2288 | 0.5006 | 0.2865 | 0.4475 | 0.4749 | 0.3002 | 0.4623 | 0.7345 | 0.5546 | 0.6736 | 0.1674 | 0.4833 | 0.1944 | 0.3662 | 0.1199 | 0.4421 | 0.3036 | 0.4092 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
Spatiallysaying/detr-finetuned-590_v1 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3"
] |
firefiruses/detr-resnet-50_dogs |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_dogs
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"dog",
"dog",
"dogs"
] |
NRPU/detr-finetuned-balloon-v2 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15"
] |
NotSarahConnor1984/detr_finetuned_coco |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_coco
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4235
- Map: 0.2714
- Map 50: 0.3867
- Map 75: 0.2968
- Map Small: 0.0662
- Map Medium: 0.1688
- Map Large: 0.3006
- Mar 1: 0.2872
- Mar 10: 0.507
- Mar 100: 0.5305
- Mar Small: 0.0952
- Mar Medium: 0.2946
- Mar Large: 0.5785
- Map Person: 0.5441
- Mar 100 Person: 0.6838
- Map Bicycle: 0.3146
- Mar 100 Bicycle: 0.6
- Map Car: 0.3302
- Mar 100 Car: 0.6011
- Map Motorcycle: 0.3008
- Mar 100 Motorcycle: 0.5347
- Map Airplane: 0.2611
- Mar 100 Airplane: 0.4893
- Map Bus: 0.2997
- Mar 100 Bus: 0.68
- Map Train: 0.4005
- Mar 100 Train: 0.6808
- Map Truck: 0.2124
- Mar 100 Truck: 0.6667
- Map Boat: 0.2231
- Mar 100 Boat: 0.4964
- Map Traffic light: 0.3589
- Mar 100 Traffic light: 0.5547
- Map Fire hydrant: 0.7419
- Mar 100 Fire hydrant: 0.7556
- Map Stop sign: 0.3547
- Mar 100 Stop sign: 0.4375
- Map Parking meter: 0.0595
- Mar 100 Parking meter: 0.48
- Map Bench: 0.0526
- Mar 100 Bench: 0.237
- Map Bird: 0.1136
- Mar 100 Bird: 0.3169
- Map Cat: 0.5449
- Mar 100 Cat: 0.7214
- Map Dog: 0.2094
- Mar 100 Dog: 0.6021
- Map Horse: 0.4232
- Mar 100 Horse: 0.6575
- Map Sheep: 0.3734
- Mar 100 Sheep: 0.6176
- Map Cow: 0.2624
- Mar 100 Cow: 0.6521
- Map Elephant: 0.6967
- Mar 100 Elephant: 0.8636
- Map Bear: 0.1197
- Mar 100 Bear: 0.6875
- Map Zebra: 0.419
- Mar 100 Zebra: 0.5
- Map Giraffe: 0.7759
- Mar 100 Giraffe: 0.9
- Map Backpack: 0.0932
- Mar 100 Backpack: 0.3887
- Map Umbrella: 0.2971
- Mar 100 Umbrella: 0.498
- Map Handbag: 0.028
- Mar 100 Handbag: 0.3605
- Map Tie: 0.4376
- Mar 100 Tie: 0.5745
- Map Suitcase: 0.0202
- Mar 100 Suitcase: 0.2778
- Map Frisbee: 0.4422
- Mar 100 Frisbee: 0.6583
- Map Skis: 0.2384
- Mar 100 Skis: 0.5714
- Map Snowboard: 0.2114
- Mar 100 Snowboard: 0.575
- Map Sports ball: 0.3106
- Mar 100 Sports ball: 0.545
- Map Kite: 0.3103
- Mar 100 Kite: 0.585
- Map Baseball bat: 0.0709
- Mar 100 Baseball bat: 0.4364
- Map Baseball glove: 0.1192
- Mar 100 Baseball glove: 0.6211
- Map Skateboard: 0.3989
- Mar 100 Skateboard: 0.668
- Map Surfboard: 0.4623
- Mar 100 Surfboard: 0.7478
- Map Tennis racket: 0.36
- Mar 100 Tennis racket: 0.5905
- Map Bottle: 0.2205
- Mar 100 Bottle: 0.4743
- Map Wine glass: 0.3316
- Mar 100 Wine glass: 0.4955
- Map Cup: 0.2914
- Mar 100 Cup: 0.5055
- Map Fork: 0.2044
- Mar 100 Fork: 0.3984
- Map Knife: 0.0793
- Mar 100 Knife: 0.349
- Map Spoon: 0.0941
- Mar 100 Spoon: 0.4433
- Map Bowl: 0.3273
- Mar 100 Bowl: 0.6047
- Map Banana: 0.2905
- Mar 100 Banana: 0.5079
- Map Apple: 0.1335
- Mar 100 Apple: 0.4471
- Map Sandwich: 0.2086
- Mar 100 Sandwich: 0.656
- Map Orange: 0.2413
- Mar 100 Orange: 0.5346
- Map Broccoli: 0.1865
- Mar 100 Broccoli: 0.5719
- Map Carrot: 0.2751
- Mar 100 Carrot: 0.6054
- Map Hot dog: 0.1325
- Mar 100 Hot dog: 0.6438
- Map Pizza: 0.6047
- Mar 100 Pizza: 0.719
- Map Donut: 0.4449
- Mar 100 Donut: 0.6707
- Map Cake: 0.1137
- Mar 100 Cake: 0.4508
- Map Chair: 0.2514
- Mar 100 Chair: 0.5078
- Map Couch: 0.1922
- Mar 100 Couch: 0.5962
- Map Potted plant: 0.1817
- Mar 100 Potted plant: 0.3297
- Map Bed: 0.5156
- Mar 100 Bed: 0.7962
- Map Dining table: 0.3427
- Mar 100 Dining table: 0.5894
- Map Toilet: 0.4477
- Mar 100 Toilet: 0.5267
- Map Tv: 0.4456
- Mar 100 Tv: 0.6929
- Map Laptop: 0.1816
- Mar 100 Laptop: 0.3328
- Map Mouse: 0.2073
- Mar 100 Mouse: 0.6556
- Map Remote: 0.1011
- Mar 100 Remote: 0.4635
- Map Keyboard: 0.233
- Mar 100 Keyboard: 0.3769
- Map Cell phone: 0.0924
- Mar 100 Cell phone: 0.3432
- Map Microwave: 0.1925
- Mar 100 Microwave: 0.325
- Map Oven: 0.0465
- Mar 100 Oven: 0.2
- Map Toaster: 0.0
- Mar 100 Toaster: 0.0
- Map Sink: 0.381
- Mar 100 Sink: 0.5957
- Map Refrigerator: 0.1198
- Mar 100 Refrigerator: 0.2846
- Map Book: 0.0901
- Mar 100 Book: 0.2951
- Map Clock: 0.436
- Mar 100 Clock: 0.6
- Map Vase: 0.3249
- Mar 100 Vase: 0.5714
- Map Scissors: 0.0307
- Mar 100 Scissors: 0.5333
- Map Teddy bear: 0.3399
- Mar 100 Teddy bear: 0.6667
- Map Hair drier: 0.0
- Mar 100 Hair drier: 0.0
- Map Toothbrush: 0.1852
- Mar 100 Toothbrush: 0.5667
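The per-class mAP/mAR values above come from COCO-style evaluation: detections are greedily matched to ground truth by IoU, and AP is the area under the resulting precision-recall curve. A simplified, self-contained sketch at a single IoU threshold (no 101-point interpolation, so it only approximates the pycocotools numbers reported here):

```python
def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, gt_boxes, iou_thresh=0.5):
    """detections: list of (score, box); gt_boxes: list of box.
    Greedily match detections to ground truth by descending score,
    then integrate precision over recall."""
    detections = sorted(detections, key=lambda d: -d[0])
    matched = [False] * len(gt_boxes)
    tp, fp = [], []
    for score, box in detections:
        best, best_iou = -1, iou_thresh
        for i, g in enumerate(gt_boxes):
            iou = box_iou(box, g)
            if not matched[i] and iou >= best_iou:
                best, best_iou = i, iou
        if best >= 0:
            matched[best] = True
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    ap, cum_tp, prev_recall = 0.0, 0, 0.0
    for k in range(len(tp)):
        cum_tp += tp[k]
        recall = cum_tp / len(gt_boxes)
        precision = cum_tp / (k + 1)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

The reported Map/Mar figures additionally average over IoU thresholds 0.50:0.95 (and cap matches per image, e.g. "Mar 100"), which this sketch omits.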
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
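The cosine scheduler anneals the 5e-05 learning rate down to zero over training; the training-results table suggests 250 optimizer steps per epoch, i.e. 7500 steps over 30 epochs. A minimal sketch of the schedule, assuming no warmup:

```python
import math

BASE_LR = 5e-5      # learning_rate above
TOTAL_STEPS = 7500  # 30 epochs x 250 steps/epoch, per the training-results table

def cosine_lr(step: int, base_lr: float = BASE_LR, total_steps: int = TOTAL_STEPS) -> float:
    """Cosine-anneal the learning rate from base_lr down to zero."""
    progress = min(1.0, step / total_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

This matches the shape of `get_cosine_schedule_with_warmup` in Transformers when the warmup step count is zero.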
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Person | Mar 100 Person | Map Bicycle | Mar 100 Bicycle | Map Car | Mar 100 Car | Map Motorcycle | Mar 100 Motorcycle | Map Airplane | Mar 100 Airplane | Map Bus | Mar 100 Bus | Map Train | Mar 100 Train | Map Truck | Mar 100 Truck | Map Boat | Mar 100 Boat | Map Traffic light | Mar 100 Traffic light | Map Fire hydrant | Mar 100 Fire hydrant | Map Stop sign | Mar 100 Stop sign | Map Parking meter | Mar 100 Parking meter | Map Bench | Mar 100 Bench | Map Bird | Mar 100 Bird | Map Cat | Mar 100 Cat | Map Dog | Mar 100 Dog | Map Horse | Mar 100 Horse | Map Sheep | Mar 100 Sheep | Map Cow | Mar 100 Cow | Map Elephant | Mar 100 Elephant | Map Bear | Mar 100 Bear | Map Zebra | Mar 100 Zebra | Map Giraffe | Mar 100 Giraffe | Map Backpack | Mar 100 Backpack | Map Umbrella | Mar 100 Umbrella | Map Handbag | Mar 100 Handbag | Map Tie | Mar 100 Tie | Map Suitcase | Mar 100 Suitcase | Map Frisbee | Mar 100 Frisbee | Map Skis | Mar 100 Skis | Map Snowboard | Mar 100 Snowboard | Map Sports ball | Mar 100 Sports ball | Map Kite | Mar 100 Kite | Map Baseball bat | Mar 100 Baseball bat | Map Baseball glove | Mar 100 Baseball glove | Map Skateboard | Mar 100 Skateboard | Map Surfboard | Mar 100 Surfboard | Map Tennis racket | Mar 100 Tennis racket | Map Bottle | Mar 100 Bottle | Map Wine glass | Mar 100 Wine glass | Map Cup | Mar 100 Cup | Map Fork | Mar 100 Fork | Map Knife | Mar 100 Knife | Map Spoon | Mar 100 Spoon | Map Bowl | Mar 100 Bowl | Map Banana | Mar 100 Banana | Map Apple | Mar 100 Apple | Map Sandwich | Mar 100 Sandwich | Map Orange | Mar 100 Orange | Map Broccoli | Mar 100 Broccoli | Map Carrot | Mar 100 Carrot | Map Hot dog | Mar 100 Hot dog | Map Pizza | Mar 100 Pizza | Map Donut | Mar 100 Donut | Map Cake | Mar 100 Cake | Map Chair | Mar 100 Chair | Map Couch | Mar 100 Couch | Map Potted plant | Mar 100 Potted plant | Map Bed | Mar 100 Bed | Map Dining table | Mar 100 Dining table | Map Toilet | Mar 100 Toilet | Map Tv | Mar 100 Tv | Map Laptop | Mar 100 Laptop | Map Mouse | Mar 100 Mouse | Map Remote | Mar 100 Remote | Map Keyboard | Mar 100 Keyboard | Map Cell phone | Mar 100 Cell phone | Map Microwave | Mar 100 Microwave | Map Oven | Mar 100 Oven | Map Toaster | Mar 100 Toaster | Map Sink | Mar 100 Sink | Map Refrigerator | Mar 100 Refrigerator | Map Book | Mar 100 Book | Map Clock | Mar 100 Clock | Map Vase | Mar 100 Vase | Map Scissors | Mar 100 Scissors | Map Teddy bear | Mar 100 Teddy bear | Map Hair drier | Mar 100 Hair drier | Map Toothbrush | Mar 100 Toothbrush |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:-----------:|:---------------:|:-------:|:-----------:|:--------------:|:------------------:|:------------:|:----------------:|:-------:|:-----------:|:---------:|:-------------:|:---------:|:-------------:|:--------:|:------------:|:-----------------:|:---------------------:|:----------------:|:--------------------:|:-------------:|:-----------------:|:-----------------:|:---------------------:|:---------:|:-------------:|:--------:|:------------:|:-------:|:-----------:|:-------:|:-----------:|:---------:|:-------------:|:---------:|:-------------:|:-------:|:-----------:|:------------:|:----------------:|:--------:|:------------:|:---------:|:-------------:|:-----------:|:---------------:|:------------:|:----------------:|:------------:|:----------------:|:-----------:|:---------------:|:-------:|:-----------:|:------------:|:----------------:|:-----------:|:---------------:|:--------:|:------------:|:-------------:|:-----------------:|:---------------:|:-------------------:|:--------:|:------------:|:----------------:|:--------------------:|:------------------:|:----------------------:|:--------------:|:------------------:|:-------------:|:-----------------:|:-----------------:|:---------------------:|:----------:|:--------------:|:--------------:|:------------------:|:-------:|:-----------:|:--------:|:------------:|:---------:|:-------------:|:---------:|:-------------:|:--------:|:------------:|:----------:|:--------------:|:---------:|:-------------:|:------------:|:----------------:|:----------:|:--------------:|:------------:|:----------------:|:----------:|:--------------:|:-----------:|:---------------:|:---------:|:-------------:|:---------:|:-------------:|:--------:|:------------:|:---------:|:-------------:|:---------:|:-------------:|:----------------:|:--------------------:|:-------:|:-----------:|:----------------:|:--------------------:|:----------:|:--------------:|:------:|:----------:|:----------:|:--------------:|:---------:|:-------------:|:----------:|:--------------:|:------------:|:----------------:|:--------------:|:------------------:|:-------------:|:-----------------:|:--------:|:------------:|:-----------:|:---------------:|:--------:|:------------:|:----------------:|:--------------------:|:--------:|:------------:|:---------:|:-------------:|:--------:|:------------:|:------------:|:----------------:|:--------------:|:------------------:|:--------------:|:------------------:|:--------------:|:------------------:|
| No log | 1.0 | 250 | 4.1385 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0001 | 0.0001 | 0.0001 | 0.0004 | 0.0034 | 0.0 | 0.0014 | 0.004 | 0.0038 | 0.271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 79.8681 | 2.0 | 500 | 2.5474 | 0.0009 | 0.0019 | 0.0009 | 0.0002 | 0.0007 | 0.001 | 0.0007 | 0.0028 | 0.0067 | 0.0005 | 0.0039 | 0.0077 | 0.0727 | 0.5334 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 79.8681 | 3.0 | 750 | 2.2549 | 0.002 | 0.0035 | 0.0021 | 0.0037 | 0.0014 | 0.0023 | 0.0025 | 0.0075 | 0.0117 | 0.0072 | 0.0082 | 0.0134 | 0.1287 | 0.6477 | 0.0 | 0.0 | 0.0137 | 0.2372 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0198 | 0.0154 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0004 | 0.0266 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0096 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2645 | 4.0 | 1000 | 2.0820 | 0.0039 | 0.0069 | 0.004 | 0.0011 | 0.002 | 0.0047 | 0.0068 | 0.0165 | 0.0205 | 0.0067 | 0.0139 | 0.0231 | 0.2678 | 0.6853 | 0.0 | 0.0 | 0.0227 | 0.4047 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0018 | 0.0007 | 0.0141 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0013 | 0.0222 | 0.0046 | 0.0446 | 0.001 | 0.0333 | 0.0 | 0.0 | 0.0001 | 0.0041 | 0.0002 | 0.0093 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0056 | 0.0002 | 0.0059 | 0.0011 | 0.0173 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0054 | 0.1532 | 0.0 | 0.0 | 0.0 | 0.007 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0075 | 0.0001 | 0.0048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0043 | 0.1348 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0016 | 0.0596 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0096 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0122 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.2645 | 5.0 | 1250 | 1.9632 | 0.0083 | 0.0145 | 0.008 | 0.0014 | 0.0041 | 0.0097 | 0.0458 | 0.0697 | 0.0748 | 0.0055 | 0.0229 | 0.085 | 0.345 | 0.6825 | 0.0 | 0.0038 | 0.0376 | 0.5588 | 0.0006 | 0.0082 | 0.0034 | 0.0679 | 0.0 | 0.0075 | 0.0009 | 0.1231 | 0.0029 | 0.0833 | 0.0009 | 0.0455 | 0.0015 | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0003 | 0.0556 | 0.0022 | 0.0831 | 0.0258 | 0.2952 | 0.0012 | 0.0426 | 0.0036 | 0.0562 | 0.0013 | 0.0398 | 0.0043 | 0.0708 | 0.0146 | 0.2045 | 0.0059 | 0.0625 | 0.0058 | 0.1061 | 0.0082 | 0.2889 | 0.0014 | 0.0465 | 0.0001 | 0.0078 | 0.0029 | 0.1086 | 0.0003 | 0.0098 | 0.0 | 0.0 | 0.0002 | 0.0083 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0008 | 0.0375 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.032 | 0.0058 | 0.2522 | 0.0008 | 0.031 | 0.0053 | 0.1385 | 0.0001 | 0.0091 | 0.0059 | 0.1359 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0274 | 0.272 | 0.0012 | 0.046 | 0.0032 | 0.0676 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0969 | 0.0169 | 0.273 | 0.0 | 0.0 | 0.0185 | 0.2524 | 0.0004 | 0.0121 | 0.002 | 0.0049 | 0.006 | 0.2126 | 0.0 | 0.0 | 0.0002 | 0.0109 | 0.011 | 0.1769 | 0.0378 | 0.2809 | 0.003 | 0.07 | 0.0 | 0.0 | 0.0004 | 0.0172 | 0.0 | 0.0 | 0.0005 | 0.0135 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0068 | 0.2 | 0.0 | 0.0 | 0.0001 | 0.0171 | 0.0015 | 0.0632 | 0.0002 | 0.0321 | 0.0 | 0.0 | 0.0282 | 0.0611 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9313 | 6.0 | 1500 | 1.8637 | 0.0137 | 0.021 | 0.0145 | 0.002 | 0.0059 | 0.0159 | 0.0779 | 0.1113 | 0.1172 | 0.023 | 0.0261 | 0.1325 | 0.428 | 0.7047 | 0.0005 | 0.0472 | 0.1007 | 0.5737 | 0.002 | 0.0918 | 0.0007 | 0.0643 | 0.0023 | 0.05 | 0.0067 | 0.2615 | 0.0015 | 0.0867 | 0.0025 | 0.0982 | 0.0019 | 0.1141 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0722 | 0.0053 | 0.1369 | 0.0296 | 0.3571 | 0.0106 | 0.1936 | 0.0067 | 0.0808 | 0.0052 | 0.0722 | 0.0042 | 0.1229 | 0.0267 | 0.4545 | 0.0082 | 0.15 | 0.0329 | 0.2455 | 0.0781 | 0.5222 | 0.0005 | 0.0408 | 0.0001 | 0.0147 | 0.0046 | 0.1272 | 0.0017 | 0.051 | 0.0006 | 0.0511 | 0.001 | 0.0667 | 0.0005 | 0.0476 | 0.0 | 0.0 | 0.003 | 0.1175 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0024 | 0.068 | 0.0079 | 0.3043 | 0.0004 | 0.0286 | 0.0121 | 0.2963 | 0.0003 | 0.0114 | 0.0106 | 0.1898 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.01 | 0.0244 | 0.3047 | 0.0015 | 0.0651 | 0.0019 | 0.05 | 0.0223 | 0.08 | 0.0 | 0.0 | 0.019 | 0.2672 | 0.0117 | 0.2162 | 0.0151 | 0.1688 | 0.0856 | 0.381 | 0.0019 | 0.0793 | 0.0033 | 0.0672 | 0.0055 | 0.2478 | 0.0014 | 0.0538 | 0.0029 | 0.0141 | 0.0075 | 0.2077 | 0.0604 | 0.3245 | 0.0062 | 0.19 | 0.0 | 0.0 | 0.0043 | 0.0707 | 0.0 | 0.0 | 0.0005 | 0.0154 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0065 | 0.2391 | 0.0 | 0.0 | 0.0077 | 0.0963 | 0.0024 | 0.0895 | 0.0013 | 0.1357 | 0.0 | 0.0 | 0.0048 | 0.0833 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9313 | 7.0 | 1750 | 1.7967 | 0.0209 | 0.0315 | 0.0217 | 0.0078 | 0.0089 | 0.0239 | 0.1037 | 0.157 | 0.1627 | 0.0268 | 0.0349 | 0.1833 | 0.4476 | 0.6914 | 0.0006 | 0.0547 | 0.1889 | 0.6153 | 0.004 | 0.1429 | 0.0067 | 0.2071 | 0.0065 | 0.1325 | 0.014 | 0.3654 | 0.0191 | 0.225 | 0.0026 | 0.1091 | 0.009 | 0.2844 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0011 | 0.0963 | 0.0045 | 0.16 | 0.0449 | 0.4167 | 0.0113 | 0.2213 | 0.0418 | 0.1767 | 0.0177 | 0.187 | 0.023 | 0.2042 | 0.063 | 0.5591 | 0.0753 | 0.3875 | 0.042 | 0.3727 | 0.1476 | 0.7333 | 0.0009 | 0.0507 | 0.0 | 0.0078 | 0.0185 | 0.1198 | 0.0397 | 0.0863 | 0.0012 | 0.0867 | 0.0013 | 0.0833 | 0.0011 | 0.0619 | 0.0 | 0.0 | 0.0022 | 0.12 | 0.0002 | 0.013 | 0.0 | 0.0 | 0.0004 | 0.0211 | 0.0083 | 0.168 | 0.0147 | 0.4348 | 0.002 | 0.0929 | 0.0274 | 0.3596 | 0.0004 | 0.0432 | 0.0176 | 0.3078 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0004 | 0.02 | 0.0452 | 0.3925 | 0.0011 | 0.081 | 0.0023 | 0.0324 | 0.0077 | 0.168 | 0.0 | 0.0 | 0.0163 | 0.3063 | 0.0368 | 0.4027 | 0.051 | 0.2875 | 0.0417 | 0.3333 | 0.0019 | 0.0517 | 0.0092 | 0.1016 | 0.0075 | 0.2452 | 0.0008 | 0.0923 | 0.0004 | 0.0203 | 0.013 | 0.4231 | 0.0782 | 0.3543 | 0.0063 | 0.2433 | 0.0 | 0.0 | 0.0015 | 0.0569 | 0.0001 | 0.0222 | 0.0008 | 0.0308 | 0.0 | 0.0 | 0.0011 | 0.0297 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0047 | 0.1957 | 0.0178 | 0.0692 | 0.0041 | 0.1268 | 0.0075 | 0.1895 | 0.003 | 0.225 | 0.0 | 0.0 | 0.0072 | 0.1167 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.713 | 8.0 | 2000 | 1.7404 | 0.031 | 0.0447 | 0.0337 | 0.0078 | 0.0109 | 0.0356 | 0.127 | 0.1917 | 0.1987 | 0.0275 | 0.0422 | 0.2251 | 0.4787 | 0.7132 | 0.0005 | 0.066 | 0.2187 | 0.6234 | 0.0109 | 0.1571 | 0.0142 | 0.2321 | 0.0166 | 0.185 | 0.0216 | 0.4115 | 0.052 | 0.2317 | 0.013 | 0.1527 | 0.0201 | 0.3875 | 0.0029 | 0.1889 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0963 | 0.019 | 0.1954 | 0.0433 | 0.4643 | 0.0164 | 0.3298 | 0.028 | 0.2644 | 0.0467 | 0.238 | 0.0089 | 0.1458 | 0.0783 | 0.5636 | 0.0278 | 0.4 | 0.0799 | 0.4121 | 0.2563 | 0.7704 | 0.0021 | 0.0761 | 0.0005 | 0.0255 | 0.009 | 0.142 | 0.0231 | 0.0961 | 0.0008 | 0.06 | 0.013 | 0.3917 | 0.0051 | 0.131 | 0.0 | 0.0 | 0.0103 | 0.1875 | 0.0019 | 0.062 | 0.0012 | 0.0727 | 0.0006 | 0.0474 | 0.0045 | 0.14 | 0.0221 | 0.5174 | 0.0029 | 0.1214 | 0.0704 | 0.4037 | 0.0011 | 0.0636 | 0.0355 | 0.4164 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0101 | 0.0567 | 0.0805 | 0.4654 | 0.0016 | 0.073 | 0.0017 | 0.0471 | 0.0102 | 0.26 | 0.0015 | 0.0365 | 0.1213 | 0.3547 | 0.0288 | 0.3351 | 0.0415 | 0.325 | 0.1984 | 0.4476 | 0.0399 | 0.3172 | 0.0065 | 0.1361 | 0.0122 | 0.2578 | 0.0007 | 0.0769 | 0.0021 | 0.0703 | 0.0131 | 0.4577 | 0.1248 | 0.3681 | 0.0075 | 0.2733 | 0.0009 | 0.0607 | 0.0037 | 0.1121 | 0.0 | 0.0 | 0.0014 | 0.0308 | 0.0 | 0.0 | 0.0002 | 0.0378 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0287 | 0.2217 | 0.0339 | 0.1385 | 0.006 | 0.1293 | 0.0353 | 0.2763 | 0.003 | 0.2107 | 0.0 | 0.0 | 0.0054 | 0.1389 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.713 | 9.0 | 2250 | 1.6661 | 0.0486 | 0.069 | 0.0531 | 0.0181 | 0.0295 | 0.0554 | 0.1501 | 0.2391 | 0.2508 | 0.0421 | 0.0982 | 0.2801 | 0.4891 | 0.717 | 0.0019 | 0.1264 | 0.2979 | 0.6737 | 0.0145 | 0.2306 | 0.0473 | 0.2857 | 0.0175 | 0.175 | 0.0189 | 0.2615 | 0.0998 | 0.2933 | 0.0272 | 0.2527 | 0.0698 | 0.4828 | 0.0086 | 0.3889 | 0.0025 | 0.1125 | 0.0 | 0.0 | 0.0024 | 0.1296 | 0.0118 | 0.2108 | 0.0906 | 0.5119 | 0.0509 | 0.4553 | 0.0551 | 0.3438 | 0.1412 | 0.5093 | 0.0127 | 0.2104 | 0.0622 | 0.7682 | 0.0524 | 0.65 | 0.0977 | 0.4394 | 0.4432 | 0.7963 | 0.0013 | 0.0577 | 0.0045 | 0.0539 | 0.0113 | 0.1889 | 0.151 | 0.2353 | 0.0015 | 0.1 | 0.014 | 0.4583 | 0.0371 | 0.2952 | 0.002 | 0.07 | 0.0252 | 0.285 | 0.0103 | 0.117 | 0.0018 | 0.0636 | 0.0009 | 0.1 | 0.0097 | 0.3 | 0.0514 | 0.5043 | 0.0082 | 0.1929 | 0.0761 | 0.5266 | 0.0006 | 0.0477 | 0.1098 | 0.4367 | 0.0059 | 0.0148 | 0.0004 | 0.0327 | 0.0027 | 0.0467 | 0.0964 | 0.557 | 0.0045 | 0.1254 | 0.005 | 0.1029 | 0.0097 | 0.236 | 0.0216 | 0.1308 | 0.1427 | 0.5375 | 0.0616 | 0.4108 | 0.0199 | 0.2937 | 0.2512 | 0.5095 | 0.0663 | 0.4069 | 0.0205 | 0.118 | 0.0177 | 0.2896 | 0.0031 | 0.1231 | 0.0019 | 0.0437 | 0.0324 | 0.4769 | 0.1529 | 0.3766 | 0.0182 | 0.2967 | 0.0053 | 0.0964 | 0.0118 | 0.1586 | 0.0001 | 0.0111 | 0.001 | 0.0115 | 0.0 | 0.0 | 0.0015 | 0.0595 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0152 | 0.2391 | 0.075 | 0.1462 | 0.0116 | 0.3207 | 0.1923 | 0.4184 | 0.0048 | 0.2286 | 0.0 | 0.0 | 0.0035 | 0.1889 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5296 | 10.0 | 2500 | 1.6241 | 0.0668 | 0.0971 | 0.0735 | 0.0156 | 0.036 | 0.0762 | 0.1804 | 0.2955 | 0.3077 | 0.04 | 0.1246 | 0.3414 | 0.4942 | 0.7027 | 0.0089 | 0.1736 | 0.2573 | 0.6515 | 0.09 | 0.3143 | 0.0861 | 0.2857 | 0.0414 | 0.2525 | 0.0489 | 0.4692 | 0.1338 | 0.4817 | 0.022 | 0.2782 | 0.206 | 0.5531 | 0.0103 | 0.4222 | 0.0076 | 0.225 | 0.0 | 0.0 | 0.0016 | 0.1204 | 0.0087 | 0.2431 | 0.0951 | 0.5833 | 0.0344 | 0.4362 | 0.1086 | 0.4041 | 0.2013 | 0.5306 | 0.0316 | 0.3125 | 0.1352 | 0.7955 | 0.0285 | 0.6375 | 0.1419 | 0.4333 | 0.4392 | 0.7926 | 0.0041 | 0.1014 | 0.0156 | 0.1108 | 0.0176 | 0.2148 | 0.1966 | 0.3275 | 0.0042 | 0.1356 | 0.0274 | 0.5583 | 0.0427 | 0.3143 | 0.0119 | 0.16 | 0.096 | 0.335 | 0.0486 | 0.199 | 0.0057 | 0.0727 | 0.0049 | 0.2 | 0.0244 | 0.388 | 0.0617 | 0.5391 | 0.0212 | 0.2548 | 0.0788 | 0.4651 | 0.0042 | 0.1091 | 0.1282 | 0.4828 | 0.0103 | 0.0311 | 0.0003 | 0.0388 | 0.0045 | 0.1 | 0.1327 | 0.5402 | 0.0088 | 0.1968 | 0.0121 | 0.1735 | 0.0479 | 0.456 | 0.0666 | 0.1904 | 0.1244 | 0.5719 | 0.154 | 0.4865 | 0.0245 | 0.475 | 0.2765 | 0.6095 | 0.092 | 0.5034 | 0.0086 | 0.123 | 0.0294 | 0.3852 | 0.0182 | 0.2346 | 0.0073 | 0.1141 | 0.1412 | 0.7577 | 0.2182 | 0.467 | 0.0359 | 0.3033 | 0.0573 | 0.3964 | 0.0395 | 0.2086 | 0.0002 | 0.0111 | 0.0015 | 0.0538 | 0.0 | 0.0 | 0.0045 | 0.1243 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0322 | 0.4348 | 0.0757 | 0.1385 | 0.0126 | 0.2195 | 0.2557 | 0.4658 | 0.0063 | 0.3179 | 0.0045 | 0.15 | 0.0172 | 0.2667 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5296 | 11.0 | 2750 | 1.5608 | 0.0921 | 0.1306 | 0.1009 | 0.0175 | 0.0603 | 0.1039 | 0.191 | 0.3198 | 0.3337 | 0.0383 | 0.1536 | 0.3679 | 0.5217 | 0.7201 | 0.0254 | 0.1811 | 0.3062 | 0.638 | 0.0862 | 0.3469 | 0.0783 | 0.3107 | 0.0618 | 0.275 | 0.0619 | 0.3654 | 0.1139 | 0.41 | 0.0397 | 0.3455 | 0.2801 | 0.5562 | 0.0157 | 0.3333 | 0.0132 | 0.225 | 0.0 | 0.0 | 0.0036 | 0.1352 | 0.035 | 0.3662 | 0.1535 | 0.5738 | 0.0511 | 0.5468 | 0.1318 | 0.5342 | 0.2573 | 0.625 | 0.0378 | 0.3625 | 0.155 | 0.8045 | 0.0379 | 0.65 | 0.1643 | 0.4576 | 0.5884 | 0.8704 | 0.0066 | 0.1465 | 0.0474 | 0.1716 | 0.0202 | 0.1926 | 0.2018 | 0.3137 | 0.005 | 0.1711 | 0.031 | 0.5833 | 0.0715 | 0.369 | 0.0324 | 0.295 | 0.1765 | 0.4275 | 0.1884 | 0.402 | 0.0045 | 0.1 | 0.0133 | 0.2842 | 0.0468 | 0.448 | 0.0854 | 0.5609 | 0.076 | 0.2619 | 0.1094 | 0.5009 | 0.0062 | 0.1659 | 0.1978 | 0.4859 | 0.0189 | 0.0475 | 0.0003 | 0.0224 | 0.0333 | 0.1667 | 0.17 | 0.6159 | 0.0236 | 0.2508 | 0.0331 | 0.1794 | 0.0354 | 0.404 | 0.0915 | 0.3423 | 0.1639 | 0.5531 | 0.1444 | 0.4811 | 0.0411 | 0.3187 | 0.4028 | 0.6143 | 0.1954 | 0.5103 | 0.0195 | 0.1574 | 0.0522 | 0.4178 | 0.091 | 0.3731 | 0.0155 | 0.1484 | 0.1686 | 0.6577 | 0.2272 | 0.45 | 0.0675 | 0.32 | 0.138 | 0.5107 | 0.0642 | 0.2155 | 0.0004 | 0.0111 | 0.0056 | 0.0962 | 0.0022 | 0.0923 | 0.0077 | 0.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1244 | 0.4087 | 0.0838 | 0.1462 | 0.0125 | 0.2476 | 0.2974 | 0.4816 | 0.0214 | 0.3643 | 0.0043 | 0.15 | 0.0682 | 0.2278 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3664 | 12.0 | 3000 | 1.5439 | 0.1152 | 0.1658 | 0.1248 | 0.0626 | 0.0694 | 0.1298 | 0.2084 | 0.3655 | 0.379 | 0.0755 | 0.1783 | 0.4163 | 0.5108 | 0.6906 | 0.0741 | 0.234 | 0.3161 | 0.6252 | 0.2019 | 0.5061 | 0.1025 | 0.3357 | 0.1161 | 0.43 | 0.1271 | 0.5577 | 0.1347 | 0.5383 | 0.0724 | 0.3945 | 0.2913 | 0.5281 | 0.0415 | 0.5667 | 0.0251 | 0.2375 | 0.0 | 0.0 | 0.0027 | 0.1426 | 0.0186 | 0.3154 | 0.1809 | 0.619 | 0.0909 | 0.5745 | 0.1873 | 0.5493 | 0.2839 | 0.613 | 0.0606 | 0.5 | 0.1737 | 0.8136 | 0.0362 | 0.7375 | 0.1969 | 0.4758 | 0.713 | 0.8778 | 0.0044 | 0.1225 | 0.1064 | 0.2696 | 0.011 | 0.2321 | 0.2714 | 0.3745 | 0.0106 | 0.2356 | 0.0246 | 0.475 | 0.1112 | 0.4762 | 0.1077 | 0.35 | 0.1786 | 0.4525 | 0.2299 | 0.44 | 0.0124 | 0.1909 | 0.0128 | 0.2263 | 0.0807 | 0.6 | 0.2541 | 0.5913 | 0.0754 | 0.3595 | 0.1464 | 0.5028 | 0.032 | 0.2773 | 0.2206 | 0.5289 | 0.0387 | 0.0918 | 0.0024 | 0.0939 | 0.0392 | 0.1633 | 0.199 | 0.6383 | 0.0191 | 0.2698 | 0.0162 | 0.1971 | 0.0434 | 0.52 | 0.1186 | 0.4558 | 0.138 | 0.5437 | 0.2091 | 0.5892 | 0.0396 | 0.4437 | 0.4124 | 0.681 | 0.2873 | 0.5638 | 0.0215 | 0.1787 | 0.0802 | 0.4891 | 0.0564 | 0.4769 | 0.0636 | 0.2313 | 0.301 | 0.7192 | 0.2514 | 0.4904 | 0.1648 | 0.3333 | 0.1664 | 0.5929 | 0.0879 | 0.2724 | 0.0009 | 0.0111 | 0.0104 | 0.0942 | 0.0093 | 0.1923 | 0.0122 | 0.2054 | 0.0 | 0.0 | 0.0009 | 0.0833 | 0.0 | 0.0 | 0.1493 | 0.4826 | 0.0107 | 0.1385 | 0.0208 | 0.2512 | 0.3401 | 0.5421 | 0.0298 | 0.375 | 0.0 | 0.0 | 0.0262 | 0.3444 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3664 | 13.0 | 3250 | 1.5051 | 0.1395 | 0.1987 | 0.1515 | 0.0335 | 0.0869 | 0.1565 | 0.2266 | 0.3919 | 0.4088 | 0.0605 | 0.1817 | 0.4493 | 0.5281 | 0.7096 | 0.0751 | 0.2943 | 0.3158 | 0.6109 | 0.1672 | 0.5143 | 0.1681 | 0.3179 | 0.119 | 0.3925 | 0.1591 | 0.6 | 0.1327 | 0.5483 | 0.1017 | 0.4418 | 0.2896 | 0.5984 | 0.1632 | 0.5333 | 0.0368 | 0.325 | 0.0 | 0.0 | 0.0038 | 0.1741 | 0.0357 | 0.2892 | 0.2262 | 0.631 | 0.0831 | 0.6021 | 0.225 | 0.5767 | 0.3397 | 0.5713 | 0.0681 | 0.5021 | 0.2878 | 0.8636 | 0.0287 | 0.6375 | 0.2856 | 0.4758 | 0.7605 | 0.8926 | 0.0169 | 0.1901 | 0.1439 | 0.2814 | 0.0137 | 0.2864 | 0.2473 | 0.3824 | 0.0082 | 0.26 | 0.0441 | 0.475 | 0.1055 | 0.5381 | 0.1637 | 0.415 | 0.18 | 0.445 | 0.2548 | 0.448 | 0.0169 | 0.2364 | 0.0126 | 0.4632 | 0.1433 | 0.596 | 0.2609 | 0.6826 | 0.1779 | 0.4095 | 0.157 | 0.4734 | 0.0808 | 0.3523 | 0.222 | 0.4773 | 0.0382 | 0.1049 | 0.0011 | 0.051 | 0.0441 | 0.2367 | 0.2348 | 0.5841 | 0.0661 | 0.4079 | 0.0185 | 0.3118 | 0.094 | 0.58 | 0.15 | 0.4442 | 0.1809 | 0.6219 | 0.2089 | 0.5757 | 0.0405 | 0.4125 | 0.4507 | 0.6619 | 0.2651 | 0.6603 | 0.0307 | 0.2672 | 0.1007 | 0.4561 | 0.1146 | 0.5269 | 0.1162 | 0.2828 | 0.3822 | 0.6923 | 0.292 | 0.516 | 0.339 | 0.4067 | 0.2078 | 0.625 | 0.0853 | 0.25 | 0.0046 | 0.1111 | 0.0167 | 0.2019 | 0.0263 | 0.2538 | 0.0198 | 0.2703 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1561 | 0.4783 | 0.0356 | 0.1385 | 0.033 | 0.2402 | 0.4153 | 0.5921 | 0.0458 | 0.4 | 0.0066 | 0.15 | 0.081 | 0.4778 | 0.0 | 0.0 | 0.0038 | 0.2 |
| 1.228 | 14.0 | 3500 | 1.4548 | 0.1641 | 0.2334 | 0.1793 | 0.0547 | 0.102 | 0.1839 | 0.2435 | 0.4214 | 0.4434 | 0.0645 | 0.2084 | 0.4867 | 0.5369 | 0.7109 | 0.1516 | 0.4 | 0.3265 | 0.6613 | 0.2394 | 0.5265 | 0.124 | 0.3607 | 0.1766 | 0.4975 | 0.2471 | 0.6077 | 0.1535 | 0.605 | 0.1434 | 0.5055 | 0.332 | 0.6047 | 0.1796 | 0.5444 | 0.0779 | 0.35 | 0.0035 | 0.16 | 0.0058 | 0.1667 | 0.0483 | 0.3338 | 0.3091 | 0.6786 | 0.0905 | 0.583 | 0.256 | 0.5589 | 0.3965 | 0.6889 | 0.0914 | 0.5458 | 0.3144 | 0.8591 | 0.0602 | 0.65 | 0.3439 | 0.4879 | 0.7744 | 0.9037 | 0.0177 | 0.2042 | 0.1799 | 0.3402 | 0.0139 | 0.2889 | 0.2812 | 0.398 | 0.0115 | 0.2289 | 0.0636 | 0.65 | 0.1464 | 0.5738 | 0.2177 | 0.52 | 0.2481 | 0.495 | 0.2774 | 0.532 | 0.0402 | 0.5182 | 0.0254 | 0.4737 | 0.1815 | 0.676 | 0.3177 | 0.7087 | 0.193 | 0.4714 | 0.1704 | 0.5339 | 0.102 | 0.3477 | 0.1958 | 0.4781 | 0.0935 | 0.2361 | 0.0064 | 0.149 | 0.0787 | 0.2867 | 0.2444 | 0.5561 | 0.1219 | 0.4619 | 0.0279 | 0.3147 | 0.1001 | 0.604 | 0.1677 | 0.4519 | 0.1846 | 0.6203 | 0.2267 | 0.6514 | 0.0993 | 0.4313 | 0.4665 | 0.6524 | 0.3866 | 0.7034 | 0.0345 | 0.2738 | 0.142 | 0.4826 | 0.0708 | 0.5115 | 0.1586 | 0.3156 | 0.3802 | 0.6846 | 0.2894 | 0.4947 | 0.3497 | 0.4067 | 0.1891 | 0.5786 | 0.1252 | 0.2552 | 0.0008 | 0.0333 | 0.0247 | 0.2385 | 0.0719 | 0.2692 | 0.0325 | 0.3135 | 0.0 | 0.0 | 0.0027 | 0.075 | 0.0 | 0.0 | 0.2186 | 0.4348 | 0.0902 | 0.1462 | 0.0418 | 0.3024 | 0.4062 | 0.6026 | 0.1195 | 0.4607 | 0.0115 | 0.2333 | 0.0944 | 0.4778 | 0.0 | 0.0 | 0.0074 | 0.3333 |
| 1.228 | 15.0 | 3750 | 1.4521 | 0.1844 | 0.2619 | 0.2021 | 0.0487 | 0.1038 | 0.2063 | 0.2544 | 0.4337 | 0.4552 | 0.0624 | 0.2316 | 0.4984 | 0.5359 | 0.6964 | 0.156 | 0.5321 | 0.3211 | 0.6204 | 0.2633 | 0.5551 | 0.1822 | 0.3857 | 0.1818 | 0.4775 | 0.2501 | 0.6308 | 0.1643 | 0.5633 | 0.1503 | 0.4927 | 0.3306 | 0.5828 | 0.3276 | 0.6444 | 0.2096 | 0.35 | 0.0031 | 0.2 | 0.0058 | 0.2093 | 0.047 | 0.3092 | 0.3283 | 0.7143 | 0.1043 | 0.5702 | 0.3409 | 0.6096 | 0.3426 | 0.612 | 0.087 | 0.5271 | 0.4379 | 0.8545 | 0.0619 | 0.6125 | 0.3943 | 0.4848 | 0.7682 | 0.9111 | 0.0235 | 0.2282 | 0.2274 | 0.4225 | 0.0146 | 0.3272 | 0.3073 | 0.4745 | 0.0141 | 0.26 | 0.0707 | 0.55 | 0.189 | 0.5262 | 0.199 | 0.51 | 0.2872 | 0.4725 | 0.2525 | 0.493 | 0.0553 | 0.4364 | 0.0735 | 0.6105 | 0.2645 | 0.636 | 0.3596 | 0.6478 | 0.2168 | 0.4833 | 0.1834 | 0.5239 | 0.1439 | 0.4364 | 0.2321 | 0.4992 | 0.098 | 0.2508 | 0.014 | 0.2061 | 0.0891 | 0.3533 | 0.2615 | 0.5729 | 0.1485 | 0.4524 | 0.0268 | 0.3382 | 0.1234 | 0.636 | 0.1573 | 0.4904 | 0.1856 | 0.5813 | 0.2553 | 0.6216 | 0.0948 | 0.4062 | 0.5299 | 0.6714 | 0.3766 | 0.6707 | 0.0463 | 0.2508 | 0.1836 | 0.483 | 0.1272 | 0.4923 | 0.1447 | 0.3562 | 0.481 | 0.7385 | 0.2985 | 0.5287 | 0.332 | 0.42 | 0.2833 | 0.6286 | 0.1307 | 0.2741 | 0.0271 | 0.3111 | 0.0463 | 0.2846 | 0.0609 | 0.2538 | 0.0621 | 0.327 | 0.0129 | 0.1 | 0.0178 | 0.0833 | 0.0 | 0.0 | 0.276 | 0.5391 | 0.084 | 0.1308 | 0.0453 | 0.2902 | 0.3959 | 0.5947 | 0.1197 | 0.4 | 0.0099 | 0.1833 | 0.0943 | 0.5278 | 0.0 | 0.0 | 0.0071 | 0.2833 |
| 1.1105 | 16.0 | 4000 | 1.4249 | 0.1986 | 0.2824 | 0.2196 | 0.0303 | 0.1209 | 0.2218 | 0.2622 | 0.4539 | 0.4766 | 0.0478 | 0.2645 | 0.5195 | 0.5414 | 0.7129 | 0.1859 | 0.4679 | 0.3424 | 0.6204 | 0.2833 | 0.5204 | 0.1943 | 0.4393 | 0.1864 | 0.5025 | 0.2837 | 0.6346 | 0.1662 | 0.6133 | 0.1887 | 0.5127 | 0.3574 | 0.6281 | 0.4608 | 0.6444 | 0.1798 | 0.4125 | 0.0114 | 0.38 | 0.0053 | 0.1981 | 0.0508 | 0.34 | 0.3727 | 0.7238 | 0.1218 | 0.6255 | 0.3121 | 0.6233 | 0.4127 | 0.6685 | 0.1238 | 0.5896 | 0.4254 | 0.8182 | 0.0755 | 0.6625 | 0.3887 | 0.4727 | 0.753 | 0.9148 | 0.0267 | 0.2493 | 0.2318 | 0.4402 | 0.0155 | 0.321 | 0.3495 | 0.5196 | 0.011 | 0.24 | 0.2628 | 0.6167 | 0.1779 | 0.5738 | 0.1612 | 0.575 | 0.2572 | 0.5475 | 0.2646 | 0.532 | 0.0387 | 0.4636 | 0.0587 | 0.6579 | 0.2552 | 0.684 | 0.3821 | 0.7826 | 0.2694 | 0.4643 | 0.1905 | 0.4963 | 0.1546 | 0.4682 | 0.2489 | 0.5289 | 0.1393 | 0.277 | 0.0343 | 0.2122 | 0.0673 | 0.3133 | 0.2551 | 0.6421 | 0.2316 | 0.4286 | 0.0615 | 0.3529 | 0.2011 | 0.62 | 0.1975 | 0.45 | 0.1709 | 0.5359 | 0.1868 | 0.5703 | 0.0273 | 0.3313 | 0.49 | 0.6571 | 0.4227 | 0.6431 | 0.0344 | 0.2574 | 0.1906 | 0.49 | 0.1258 | 0.5308 | 0.1555 | 0.3891 | 0.4432 | 0.75 | 0.3143 | 0.5553 | 0.4058 | 0.4933 | 0.2941 | 0.6393 | 0.1596 | 0.3017 | 0.0568 | 0.5444 | 0.0431 | 0.275 | 0.0702 | 0.3 | 0.0685 | 0.3108 | 0.0129 | 0.075 | 0.0015 | 0.0833 | 0.0 | 0.0 | 0.3038 | 0.5087 | 0.0562 | 0.2077 | 0.0575 | 0.3439 | 0.4526 | 0.6026 | 0.1077 | 0.4286 | 0.0052 | 0.1667 | 0.2523 | 0.5833 | 0.0 | 0.0 | 0.0116 | 0.3667 |
| 1.1105 | 17.0 | 4250 | 1.4257 | 0.212 | 0.3 | 0.2334 | 0.0477 | 0.1353 | 0.2368 | 0.2724 | 0.4731 | 0.4936 | 0.0574 | 0.2656 | 0.5393 | 0.5456 | 0.7066 | 0.2114 | 0.5453 | 0.3457 | 0.646 | 0.2661 | 0.5408 | 0.193 | 0.4357 | 0.2009 | 0.5 | 0.3029 | 0.6115 | 0.1658 | 0.5733 | 0.1659 | 0.4836 | 0.3379 | 0.5406 | 0.5707 | 0.6667 | 0.2458 | 0.4125 | 0.012 | 0.36 | 0.0104 | 0.2056 | 0.0575 | 0.3323 | 0.4157 | 0.731 | 0.1214 | 0.6277 | 0.337 | 0.6096 | 0.3774 | 0.6324 | 0.1653 | 0.6 | 0.4643 | 0.8409 | 0.0552 | 0.6625 | 0.4081 | 0.4848 | 0.7736 | 0.9 | 0.021 | 0.2606 | 0.2966 | 0.4588 | 0.0189 | 0.3407 | 0.3581 | 0.5549 | 0.0121 | 0.2356 | 0.259 | 0.6083 | 0.1911 | 0.5524 | 0.1869 | 0.555 | 0.2853 | 0.52 | 0.2542 | 0.495 | 0.0578 | 0.4545 | 0.0947 | 0.5737 | 0.2068 | 0.62 | 0.3507 | 0.7043 | 0.2588 | 0.5333 | 0.1778 | 0.5028 | 0.1887 | 0.4727 | 0.2556 | 0.5211 | 0.1474 | 0.3443 | 0.0585 | 0.2796 | 0.102 | 0.4067 | 0.2812 | 0.6093 | 0.2865 | 0.4825 | 0.0778 | 0.3941 | 0.1283 | 0.64 | 0.2188 | 0.4558 | 0.2016 | 0.5922 | 0.2027 | 0.6 | 0.0614 | 0.4375 | 0.5328 | 0.6714 | 0.4268 | 0.6759 | 0.059 | 0.2885 | 0.2291 | 0.5187 | 0.1728 | 0.5423 | 0.1739 | 0.3484 | 0.4496 | 0.7308 | 0.3055 | 0.5468 | 0.3588 | 0.4233 | 0.3432 | 0.6071 | 0.1372 | 0.2931 | 0.1022 | 0.5333 | 0.0668 | 0.3462 | 0.1397 | 0.3923 | 0.0688 | 0.3405 | 0.0547 | 0.275 | 0.0023 | 0.0833 | 0.0 | 0.0 | 0.304 | 0.5696 | 0.0871 | 0.1385 | 0.0457 | 0.2732 | 0.4156 | 0.6 | 0.1934 | 0.5107 | 0.004 | 0.1333 | 0.267 | 0.6611 | 0.0114 | 0.8 | 0.0189 | 0.3333 |
| 1.0125 | 18.0 | 4500 | 1.4093 | 0.2272 | 0.3211 | 0.251 | 0.0373 | 0.1308 | 0.253 | 0.2725 | 0.4752 | 0.5009 | 0.0539 | 0.2738 | 0.5476 | 0.5501 | 0.7096 | 0.2271 | 0.5717 | 0.3404 | 0.6507 | 0.2872 | 0.5735 | 0.2299 | 0.3714 | 0.241 | 0.525 | 0.3226 | 0.6346 | 0.1881 | 0.64 | 0.1796 | 0.48 | 0.3433 | 0.6109 | 0.6712 | 0.7444 | 0.2074 | 0.35 | 0.0113 | 0.26 | 0.0137 | 0.2148 | 0.0815 | 0.3462 | 0.4662 | 0.7214 | 0.1708 | 0.5915 | 0.3742 | 0.6397 | 0.3832 | 0.6435 | 0.1901 | 0.6646 | 0.4756 | 0.8591 | 0.1151 | 0.6375 | 0.4177 | 0.5 | 0.7649 | 0.8889 | 0.0362 | 0.2958 | 0.2609 | 0.4578 | 0.0144 | 0.3 | 0.3642 | 0.5255 | 0.0102 | 0.2889 | 0.3124 | 0.7667 | 0.2196 | 0.5762 | 0.1728 | 0.595 | 0.3123 | 0.545 | 0.2814 | 0.555 | 0.0294 | 0.5364 | 0.0577 | 0.5632 | 0.2887 | 0.652 | 0.3959 | 0.7565 | 0.3044 | 0.5286 | 0.1991 | 0.4853 | 0.2546 | 0.4773 | 0.2669 | 0.5344 | 0.1727 | 0.3492 | 0.0471 | 0.2286 | 0.0969 | 0.3933 | 0.2911 | 0.6103 | 0.2738 | 0.5016 | 0.0613 | 0.4265 | 0.1396 | 0.664 | 0.2129 | 0.5212 | 0.1842 | 0.6156 | 0.32 | 0.6405 | 0.0767 | 0.55 | 0.5527 | 0.6762 | 0.424 | 0.6914 | 0.0839 | 0.3967 | 0.2469 | 0.5091 | 0.1814 | 0.5 | 0.1548 | 0.3484 | 0.4743 | 0.7615 | 0.3076 | 0.5564 | 0.3914 | 0.4567 | 0.3641 | 0.6429 | 0.1797 | 0.3379 | 0.101 | 0.5111 | 0.0712 | 0.4115 | 0.183 | 0.3923 | 0.0809 | 0.3514 | 0.06 | 0.325 | 0.0034 | 0.125 | 0.0 | 0.0 | 0.3408 | 0.5174 | 0.0842 | 0.1385 | 0.066 | 0.3341 | 0.4409 | 0.6395 | 0.2186 | 0.5 | 0.0117 | 0.2167 | 0.2125 | 0.6278 | 0.0 | 0.0 | 0.0285 | 0.3333 |
| 1.0125 | 19.0 | 4750 | 1.4340 | 0.2364 | 0.3367 | 0.2567 | 0.0277 | 0.1433 | 0.2627 | 0.2754 | 0.4927 | 0.5159 | 0.057 | 0.2871 | 0.5615 | 0.538 | 0.6844 | 0.2393 | 0.5302 | 0.3393 | 0.6255 | 0.2729 | 0.5286 | 0.2075 | 0.4 | 0.2806 | 0.5825 | 0.3359 | 0.6538 | 0.1904 | 0.6433 | 0.203 | 0.4964 | 0.3365 | 0.5781 | 0.6668 | 0.7333 | 0.3406 | 0.35 | 0.0396 | 0.58 | 0.0359 | 0.237 | 0.0883 | 0.34 | 0.4724 | 0.7429 | 0.1541 | 0.5957 | 0.3794 | 0.6521 | 0.3789 | 0.6194 | 0.1649 | 0.6521 | 0.6046 | 0.8955 | 0.1062 | 0.7 | 0.4095 | 0.5 | 0.7642 | 0.8963 | 0.0421 | 0.3394 | 0.2781 | 0.4824 | 0.0182 | 0.3272 | 0.3776 | 0.5549 | 0.0133 | 0.2578 | 0.347 | 0.7333 | 0.1932 | 0.5524 | 0.2048 | 0.61 | 0.2799 | 0.5125 | 0.301 | 0.591 | 0.043 | 0.4727 | 0.0738 | 0.5684 | 0.3321 | 0.748 | 0.4345 | 0.7261 | 0.3136 | 0.5548 | 0.2029 | 0.4826 | 0.2524 | 0.4977 | 0.2705 | 0.5117 | 0.1895 | 0.3623 | 0.0768 | 0.2918 | 0.0777 | 0.44 | 0.3011 | 0.615 | 0.2815 | 0.4683 | 0.0971 | 0.3941 | 0.1857 | 0.616 | 0.2353 | 0.5231 | 0.1989 | 0.5875 | 0.2536 | 0.6216 | 0.0862 | 0.5938 | 0.5957 | 0.6857 | 0.4338 | 0.6741 | 0.0844 | 0.4164 | 0.2314 | 0.4909 | 0.1655 | 0.5692 | 0.1694 | 0.3656 | 0.4851 | 0.8 | 0.3262 | 0.5777 | 0.4029 | 0.4667 | 0.3539 | 0.7107 | 0.1703 | 0.3034 | 0.1064 | 0.6 | 0.0905 | 0.3538 | 0.1593 | 0.3385 | 0.0852 | 0.3351 | 0.0665 | 0.325 | 0.0464 | 0.1917 | 0.0 | 0.0 | 0.3331 | 0.5565 | 0.0501 | 0.3538 | 0.0836 | 0.3317 | 0.4052 | 0.5711 | 0.2597 | 0.5393 | 0.0197 | 0.45 | 0.2548 | 0.6611 | 0.0 | 0.0 | 0.025 | 0.35 |
| 0.9351 | 20.0 | 5000 | 1.4215 | 0.2434 | 0.3472 | 0.2648 | 0.0497 | 0.1603 | 0.2709 | 0.2773 | 0.4898 | 0.5162 | 0.0755 | 0.3112 | 0.5611 | 0.5421 | 0.6934 | 0.2393 | 0.5453 | 0.32 | 0.6099 | 0.3118 | 0.5224 | 0.2305 | 0.4429 | 0.2795 | 0.5825 | 0.3386 | 0.6192 | 0.207 | 0.6217 | 0.1973 | 0.4818 | 0.3442 | 0.5469 | 0.6941 | 0.7667 | 0.3101 | 0.35 | 0.0396 | 0.46 | 0.0394 | 0.2389 | 0.0851 | 0.3185 | 0.4959 | 0.7452 | 0.1757 | 0.6426 | 0.38 | 0.674 | 0.395 | 0.6426 | 0.2155 | 0.6708 | 0.5859 | 0.8955 | 0.1119 | 0.6875 | 0.3972 | 0.4879 | 0.7641 | 0.9111 | 0.0501 | 0.3366 | 0.2962 | 0.5049 | 0.0248 | 0.3877 | 0.3646 | 0.5098 | 0.0149 | 0.2889 | 0.4092 | 0.7583 | 0.2168 | 0.5714 | 0.2162 | 0.615 | 0.2831 | 0.5375 | 0.2914 | 0.593 | 0.0498 | 0.4636 | 0.1014 | 0.6 | 0.3398 | 0.672 | 0.43 | 0.7348 | 0.3121 | 0.5643 | 0.1838 | 0.4486 | 0.2836 | 0.4932 | 0.2695 | 0.4992 | 0.1947 | 0.3869 | 0.0606 | 0.3061 | 0.0718 | 0.41 | 0.2962 | 0.5991 | 0.2576 | 0.4762 | 0.109 | 0.4588 | 0.1776 | 0.66 | 0.253 | 0.5865 | 0.1846 | 0.5766 | 0.3031 | 0.6622 | 0.1116 | 0.5 | 0.5767 | 0.6857 | 0.4139 | 0.6759 | 0.1089 | 0.4197 | 0.2525 | 0.5117 | 0.1919 | 0.5769 | 0.1715 | 0.35 | 0.4993 | 0.7615 | 0.3319 | 0.584 | 0.3913 | 0.4967 | 0.3333 | 0.6143 | 0.1755 | 0.3241 | 0.1336 | 0.5667 | 0.0779 | 0.4231 | 0.1384 | 0.3692 | 0.0793 | 0.3838 | 0.1599 | 0.35 | 0.0066 | 0.1917 | 0.0 | 0.0 | 0.3535 | 0.5435 | 0.0997 | 0.2077 | 0.0762 | 0.311 | 0.4235 | 0.6263 | 0.2933 | 0.5714 | 0.0182 | 0.35 | 0.232 | 0.7056 | 0.0 | 0.0 | 0.0785 | 0.3333 |
| 0.9351 | 21.0 | 5250 | 1.4237 | 0.2508 | 0.3586 | 0.2722 | 0.0492 | 0.1477 | 0.2778 | 0.2935 | 0.505 | 0.5297 | 0.0785 | 0.2923 | 0.5764 | 0.5355 | 0.6798 | 0.2764 | 0.5698 | 0.339 | 0.6358 | 0.2893 | 0.5408 | 0.2256 | 0.45 | 0.2811 | 0.65 | 0.3709 | 0.6846 | 0.2209 | 0.6667 | 0.1819 | 0.5018 | 0.3488 | 0.5437 | 0.6986 | 0.7333 | 0.3363 | 0.35 | 0.0639 | 0.48 | 0.018 | 0.2093 | 0.0936 | 0.3292 | 0.5129 | 0.7262 | 0.1689 | 0.6213 | 0.3983 | 0.6562 | 0.3581 | 0.6167 | 0.2097 | 0.6458 | 0.6346 | 0.9 | 0.1264 | 0.6625 | 0.4284 | 0.5061 | 0.7882 | 0.9111 | 0.0591 | 0.3507 | 0.2843 | 0.4922 | 0.0266 | 0.3827 | 0.3889 | 0.5392 | 0.0202 | 0.2867 | 0.4204 | 0.6167 | 0.214 | 0.5643 | 0.2235 | 0.595 | 0.28 | 0.525 | 0.2991 | 0.564 | 0.0426 | 0.5 | 0.0714 | 0.6211 | 0.3502 | 0.7 | 0.3984 | 0.7304 | 0.3348 | 0.5595 | 0.2076 | 0.4743 | 0.3046 | 0.4977 | 0.2733 | 0.507 | 0.2035 | 0.3902 | 0.0759 | 0.3102 | 0.1032 | 0.4133 | 0.3073 | 0.586 | 0.2804 | 0.4968 | 0.1017 | 0.4324 | 0.1885 | 0.644 | 0.2632 | 0.5481 | 0.1809 | 0.5672 | 0.2812 | 0.627 | 0.1091 | 0.4875 | 0.5775 | 0.7048 | 0.4378 | 0.6845 | 0.1099 | 0.4787 | 0.2483 | 0.5248 | 0.2208 | 0.5962 | 0.1816 | 0.3422 | 0.4916 | 0.7692 | 0.3404 | 0.5989 | 0.3949 | 0.4667 | 0.4093 | 0.6857 | 0.1679 | 0.3103 | 0.1344 | 0.5444 | 0.0938 | 0.4712 | 0.185 | 0.3692 | 0.0883 | 0.3541 | 0.1337 | 0.225 | 0.0323 | 0.225 | 0.0 | 0.0 | 0.3563 | 0.587 | 0.0939 | 0.3538 | 0.088 | 0.3049 | 0.4179 | 0.5974 | 0.2952 | 0.5821 | 0.0173 | 0.3167 | 0.2631 | 0.6556 | 0.0099 | 0.8 | 0.0723 | 0.55 |
| 0.8745 | 22.0 | 5500 | 1.4148 | 0.2591 | 0.3688 | 0.2834 | 0.0501 | 0.156 | 0.2874 | 0.2839 | 0.4966 | 0.5218 | 0.0728 | 0.2973 | 0.5684 | 0.5428 | 0.6877 | 0.2602 | 0.5679 | 0.3304 | 0.6117 | 0.2964 | 0.551 | 0.2417 | 0.5179 | 0.2895 | 0.595 | 0.3584 | 0.6769 | 0.2073 | 0.645 | 0.2198 | 0.4964 | 0.3483 | 0.5641 | 0.6895 | 0.7444 | 0.3545 | 0.425 | 0.0727 | 0.46 | 0.0347 | 0.2296 | 0.0976 | 0.3246 | 0.5137 | 0.7381 | 0.1826 | 0.5936 | 0.3978 | 0.6603 | 0.3755 | 0.6074 | 0.2182 | 0.6417 | 0.621 | 0.8909 | 0.1194 | 0.7 | 0.4167 | 0.497 | 0.7847 | 0.8963 | 0.0614 | 0.3535 | 0.3039 | 0.5127 | 0.024 | 0.3889 | 0.4288 | 0.5667 | 0.0184 | 0.2844 | 0.4154 | 0.6583 | 0.2477 | 0.6167 | 0.2244 | 0.54 | 0.2896 | 0.5225 | 0.2939 | 0.568 | 0.1097 | 0.5364 | 0.1005 | 0.6158 | 0.3715 | 0.68 | 0.4527 | 0.7609 | 0.3513 | 0.5833 | 0.2145 | 0.4817 | 0.3111 | 0.4886 | 0.2847 | 0.5023 | 0.211 | 0.4131 | 0.0774 | 0.3265 | 0.0991 | 0.4367 | 0.3119 | 0.6028 | 0.2847 | 0.4921 | 0.1046 | 0.4441 | 0.17 | 0.656 | 0.2392 | 0.5442 | 0.1978 | 0.5891 | 0.2878 | 0.6351 | 0.098 | 0.45 | 0.5893 | 0.6714 | 0.4658 | 0.6621 | 0.0954 | 0.4098 | 0.2462 | 0.5152 | 0.2164 | 0.5808 | 0.1839 | 0.3641 | 0.484 | 0.7808 | 0.3393 | 0.5734 | 0.4417 | 0.5233 | 0.4179 | 0.7071 | 0.178 | 0.3069 | 0.1386 | 0.5556 | 0.1018 | 0.4269 | 0.2279 | 0.4 | 0.0792 | 0.3297 | 0.1801 | 0.325 | 0.0469 | 0.2333 | 0.0 | 0.0 | 0.3742 | 0.5696 | 0.1007 | 0.2769 | 0.0936 | 0.3256 | 0.4415 | 0.6105 | 0.3176 | 0.5786 | 0.0401 | 0.2167 | 0.272 | 0.6611 | 0.0 | 0.0 | 0.0982 | 0.5667 |
| 0.8745 | 23.0 | 5750 | 1.4156 | 0.2609 | 0.3732 | 0.2834 | 0.0604 | 0.1556 | 0.2895 | 0.2841 | 0.5007 | 0.526 | 0.095 | 0.2997 | 0.5713 | 0.5439 | 0.6918 | 0.2819 | 0.5868 | 0.3363 | 0.5989 | 0.296 | 0.5388 | 0.2368 | 0.4643 | 0.3045 | 0.6725 | 0.3799 | 0.6769 | 0.2008 | 0.66 | 0.2368 | 0.5145 | 0.3547 | 0.575 | 0.7068 | 0.7333 | 0.3054 | 0.3875 | 0.0601 | 0.6 | 0.0381 | 0.2148 | 0.0833 | 0.3108 | 0.5324 | 0.7333 | 0.1788 | 0.5936 | 0.3998 | 0.6644 | 0.3688 | 0.612 | 0.2438 | 0.6396 | 0.6672 | 0.8818 | 0.1202 | 0.7 | 0.4137 | 0.5 | 0.7719 | 0.8926 | 0.0792 | 0.3732 | 0.2876 | 0.4804 | 0.0275 | 0.3247 | 0.4318 | 0.5647 | 0.0178 | 0.3133 | 0.4504 | 0.7417 | 0.2403 | 0.5857 | 0.2247 | 0.51 | 0.3131 | 0.555 | 0.313 | 0.592 | 0.0332 | 0.4545 | 0.1058 | 0.6263 | 0.3709 | 0.68 | 0.4482 | 0.7522 | 0.3309 | 0.531 | 0.2103 | 0.478 | 0.3059 | 0.4977 | 0.2884 | 0.5063 | 0.1893 | 0.377 | 0.0737 | 0.3306 | 0.0894 | 0.41 | 0.3049 | 0.5972 | 0.2831 | 0.5159 | 0.1223 | 0.4618 | 0.2453 | 0.644 | 0.2409 | 0.5442 | 0.1998 | 0.5656 | 0.2609 | 0.5865 | 0.0921 | 0.5562 | 0.5991 | 0.6762 | 0.4428 | 0.681 | 0.112 | 0.4787 | 0.2468 | 0.5022 | 0.2168 | 0.6 | 0.1844 | 0.3266 | 0.4809 | 0.7769 | 0.3317 | 0.5734 | 0.4296 | 0.4867 | 0.4401 | 0.6929 | 0.1895 | 0.3207 | 0.1803 | 0.5556 | 0.0911 | 0.4692 | 0.2151 | 0.3538 | 0.0806 | 0.3541 | 0.1737 | 0.3 | 0.0241 | 0.1917 | 0.0 | 0.0 | 0.3949 | 0.6391 | 0.1158 | 0.2846 | 0.0801 | 0.3207 | 0.4288 | 0.6158 | 0.3194 | 0.5679 | 0.0281 | 0.5167 | 0.2654 | 0.65 | 0.0 | 0.0 | 0.1551 | 0.55 |
| 0.8295 | 24.0 | 6000 | 1.4127 | 0.2666 | 0.3806 | 0.2927 | 0.0563 | 0.1617 | 0.2956 | 0.2861 | 0.5081 | 0.5348 | 0.0912 | 0.302 | 0.5814 | 0.5431 | 0.682 | 0.2752 | 0.583 | 0.333 | 0.5989 | 0.2993 | 0.5469 | 0.2496 | 0.4821 | 0.3195 | 0.68 | 0.3889 | 0.6808 | 0.2137 | 0.6617 | 0.2243 | 0.5091 | 0.3656 | 0.5797 | 0.7053 | 0.7444 | 0.3572 | 0.45 | 0.0579 | 0.62 | 0.0477 | 0.2333 | 0.1011 | 0.3231 | 0.5519 | 0.7381 | 0.1956 | 0.6149 | 0.43 | 0.6644 | 0.3854 | 0.6204 | 0.2523 | 0.6479 | 0.7042 | 0.8955 | 0.1162 | 0.6875 | 0.4162 | 0.5121 | 0.7729 | 0.9037 | 0.0825 | 0.3789 | 0.2947 | 0.4941 | 0.0261 | 0.3432 | 0.4281 | 0.5569 | 0.0185 | 0.3022 | 0.4103 | 0.65 | 0.2334 | 0.5786 | 0.2216 | 0.565 | 0.299 | 0.545 | 0.3165 | 0.586 | 0.0485 | 0.4818 | 0.096 | 0.6158 | 0.3945 | 0.68 | 0.487 | 0.7609 | 0.3539 | 0.5762 | 0.2171 | 0.4798 | 0.3251 | 0.4955 | 0.2926 | 0.5031 | 0.188 | 0.382 | 0.0785 | 0.3633 | 0.0915 | 0.4333 | 0.3228 | 0.6 | 0.2883 | 0.5476 | 0.1264 | 0.4441 | 0.2289 | 0.672 | 0.2468 | 0.5365 | 0.1979 | 0.5828 | 0.2825 | 0.6459 | 0.1066 | 0.6062 | 0.5846 | 0.681 | 0.4415 | 0.6534 | 0.101 | 0.4344 | 0.2529 | 0.5174 | 0.2029 | 0.6038 | 0.1778 | 0.3281 | 0.4906 | 0.7962 | 0.3411 | 0.5936 | 0.4549 | 0.5333 | 0.4085 | 0.6821 | 0.1813 | 0.3328 | 0.1774 | 0.6556 | 0.11 | 0.4269 | 0.252 | 0.4154 | 0.0829 | 0.3297 | 0.2089 | 0.325 | 0.0489 | 0.2333 | 0.0 | 0.0 | 0.3959 | 0.6261 | 0.1209 | 0.2846 | 0.091 | 0.3317 | 0.4449 | 0.6211 | 0.3202 | 0.5714 | 0.0276 | 0.5333 | 0.2973 | 0.6444 | 0.0 | 0.0 | 0.1062 | 0.5667 |
| 0.8295 | 25.0 | 6250 | 1.4180 | 0.2662 | 0.3803 | 0.2912 | 0.0518 | 0.157 | 0.2951 | 0.285 | 0.5077 | 0.532 | 0.0839 | 0.3073 | 0.5775 | 0.5447 | 0.6854 | 0.2988 | 0.5962 | 0.3329 | 0.5964 | 0.2971 | 0.5388 | 0.2451 | 0.4821 | 0.3098 | 0.6875 | 0.3935 | 0.6731 | 0.2049 | 0.6567 | 0.225 | 0.4964 | 0.3613 | 0.5766 | 0.7366 | 0.7667 | 0.3555 | 0.4375 | 0.0652 | 0.6 | 0.0425 | 0.2278 | 0.1111 | 0.32 | 0.5557 | 0.731 | 0.1938 | 0.6 | 0.4102 | 0.6534 | 0.3776 | 0.6241 | 0.2523 | 0.6521 | 0.674 | 0.8955 | 0.1293 | 0.7 | 0.4285 | 0.497 | 0.7717 | 0.9 | 0.0854 | 0.4056 | 0.3035 | 0.5059 | 0.0287 | 0.3543 | 0.4367 | 0.5725 | 0.019 | 0.2711 | 0.4249 | 0.65 | 0.2292 | 0.5786 | 0.2078 | 0.58 | 0.3044 | 0.5375 | 0.304 | 0.576 | 0.058 | 0.4091 | 0.1067 | 0.6158 | 0.3855 | 0.672 | 0.4616 | 0.7522 | 0.3426 | 0.5762 | 0.213 | 0.4725 | 0.3194 | 0.4955 | 0.2866 | 0.5078 | 0.1883 | 0.377 | 0.077 | 0.351 | 0.0891 | 0.4167 | 0.3186 | 0.5916 | 0.3012 | 0.5175 | 0.1275 | 0.4412 | 0.2077 | 0.668 | 0.2431 | 0.5442 | 0.1915 | 0.575 | 0.2536 | 0.5838 | 0.1055 | 0.55 | 0.6061 | 0.6905 | 0.4319 | 0.6552 | 0.1145 | 0.4705 | 0.2492 | 0.503 | 0.2088 | 0.6154 | 0.1791 | 0.3281 | 0.4893 | 0.7885 | 0.347 | 0.5926 | 0.45 | 0.5233 | 0.4221 | 0.6714 | 0.1881 | 0.331 | 0.1754 | 0.6444 | 0.1097 | 0.4462 | 0.2289 | 0.4231 | 0.0827 | 0.3595 | 0.2089 | 0.325 | 0.049 | 0.2667 | 0.0 | 0.0 | 0.3774 | 0.6261 | 0.1363 | 0.2846 | 0.0875 | 0.3122 | 0.4305 | 0.6105 | 0.308 | 0.6 | 0.0279 | 0.5333 | 0.3153 | 0.65 | 0.0 | 0.0 | 0.1334 | 0.5667 |
| 0.8005 | 26.0 | 6500 | 1.4211 | 0.2677 | 0.3834 | 0.2933 | 0.0601 | 0.1691 | 0.2963 | 0.2878 | 0.506 | 0.5305 | 0.0854 | 0.2916 | 0.5782 | 0.547 | 0.6881 | 0.3064 | 0.5736 | 0.3295 | 0.5967 | 0.3045 | 0.5388 | 0.2612 | 0.4929 | 0.2996 | 0.67 | 0.3866 | 0.6808 | 0.2167 | 0.6717 | 0.2226 | 0.5 | 0.357 | 0.5609 | 0.7352 | 0.7556 | 0.3543 | 0.4375 | 0.0556 | 0.5 | 0.044 | 0.2407 | 0.1026 | 0.3123 | 0.5498 | 0.7333 | 0.197 | 0.6106 | 0.4232 | 0.663 | 0.3744 | 0.6111 | 0.2574 | 0.6521 | 0.6969 | 0.8682 | 0.1299 | 0.7125 | 0.4165 | 0.5 | 0.7789 | 0.8963 | 0.0878 | 0.3901 | 0.2917 | 0.4941 | 0.0288 | 0.3728 | 0.4254 | 0.5647 | 0.0173 | 0.2622 | 0.4249 | 0.6583 | 0.2303 | 0.5976 | 0.212 | 0.59 | 0.3165 | 0.55 | 0.305 | 0.573 | 0.0451 | 0.4455 | 0.096 | 0.6158 | 0.3905 | 0.664 | 0.454 | 0.7522 | 0.3457 | 0.5881 | 0.2186 | 0.478 | 0.3339 | 0.5045 | 0.2904 | 0.5008 | 0.2025 | 0.3984 | 0.089 | 0.349 | 0.1053 | 0.44 | 0.322 | 0.6037 | 0.3023 | 0.5222 | 0.1306 | 0.45 | 0.1925 | 0.672 | 0.2498 | 0.5519 | 0.1922 | 0.5547 | 0.2631 | 0.5838 | 0.133 | 0.5938 | 0.6007 | 0.6762 | 0.4551 | 0.6741 | 0.1124 | 0.4262 | 0.2516 | 0.5143 | 0.2191 | 0.6077 | 0.1801 | 0.3344 | 0.5106 | 0.7923 | 0.3426 | 0.5968 | 0.4449 | 0.5233 | 0.4178 | 0.6964 | 0.1801 | 0.3276 | 0.1976 | 0.6444 | 0.0958 | 0.4462 | 0.2305 | 0.3692 | 0.0894 | 0.3459 | 0.1815 | 0.325 | 0.0334 | 0.2333 | 0.0 | 0.0 | 0.3717 | 0.587 | 0.1113 | 0.2846 | 0.0887 | 0.3037 | 0.4365 | 0.6132 | 0.3144 | 0.5714 | 0.0283 | 0.55 | 0.3144 | 0.6556 | 0.0 | 0.0 | 0.1663 | 0.55 |
| 0.8005 | 27.0 | 6750 | 1.4224 | 0.2691 | 0.3844 | 0.2945 | 0.0654 | 0.1646 | 0.2981 | 0.2853 | 0.5038 | 0.5283 | 0.0934 | 0.2966 | 0.5756 | 0.5444 | 0.6844 | 0.3122 | 0.5906 | 0.3317 | 0.6 | 0.3125 | 0.5449 | 0.2471 | 0.4857 | 0.3061 | 0.6775 | 0.3927 | 0.6846 | 0.2088 | 0.6667 | 0.2265 | 0.52 | 0.3571 | 0.5516 | 0.7419 | 0.7556 | 0.3404 | 0.35 | 0.0513 | 0.48 | 0.0474 | 0.2407 | 0.1116 | 0.3108 | 0.5461 | 0.7238 | 0.2014 | 0.6021 | 0.4254 | 0.6493 | 0.3757 | 0.6139 | 0.2647 | 0.6583 | 0.7006 | 0.8864 | 0.1301 | 0.6875 | 0.4177 | 0.5 | 0.78 | 0.8963 | 0.0918 | 0.3732 | 0.2935 | 0.4971 | 0.0322 | 0.3519 | 0.4351 | 0.5745 | 0.0206 | 0.2889 | 0.4293 | 0.6833 | 0.2342 | 0.581 | 0.2226 | 0.575 | 0.3147 | 0.525 | 0.3082 | 0.583 | 0.0454 | 0.4545 | 0.1029 | 0.6263 | 0.3967 | 0.664 | 0.4527 | 0.7696 | 0.3598 | 0.5881 | 0.2214 | 0.4716 | 0.3266 | 0.4955 | 0.2851 | 0.5039 | 0.2036 | 0.3885 | 0.0835 | 0.3224 | 0.0931 | 0.4067 | 0.3263 | 0.6056 | 0.3013 | 0.5143 | 0.1339 | 0.45 | 0.2094 | 0.664 | 0.2415 | 0.5423 | 0.1823 | 0.5719 | 0.2677 | 0.5973 | 0.124 | 0.5938 | 0.5971 | 0.7143 | 0.4489 | 0.6759 | 0.1079 | 0.4492 | 0.2526 | 0.51 | 0.2127 | 0.6154 | 0.179 | 0.3313 | 0.5078 | 0.7923 | 0.3466 | 0.5904 | 0.4301 | 0.5133 | 0.4358 | 0.6929 | 0.1882 | 0.331 | 0.1801 | 0.6444 | 0.1019 | 0.45 | 0.2256 | 0.3692 | 0.089 | 0.3351 | 0.1925 | 0.325 | 0.0471 | 0.2083 | 0.0 | 0.0 | 0.3636 | 0.5826 | 0.1166 | 0.2846 | 0.0868 | 0.2927 | 0.4391 | 0.6026 | 0.3246 | 0.575 | 0.0273 | 0.5333 | 0.3493 | 0.6556 | 0.0 | 0.0 | 0.1661 | 0.5667 |
| 0.7843 | 28.0 | 7000 | 1.4235 | 0.2706 | 0.3861 | 0.2959 | 0.0637 | 0.1678 | 0.2992 | 0.2857 | 0.5049 | 0.5288 | 0.093 | 0.2915 | 0.5766 | 0.5439 | 0.6842 | 0.3146 | 0.6019 | 0.3296 | 0.5974 | 0.3035 | 0.5327 | 0.2538 | 0.5143 | 0.3002 | 0.675 | 0.3859 | 0.6769 | 0.2107 | 0.6717 | 0.2257 | 0.5164 | 0.3576 | 0.5594 | 0.7299 | 0.7556 | 0.3539 | 0.425 | 0.061 | 0.48 | 0.0476 | 0.237 | 0.1157 | 0.3138 | 0.5461 | 0.7238 | 0.2076 | 0.6021 | 0.4243 | 0.6521 | 0.3737 | 0.6167 | 0.2592 | 0.6521 | 0.6997 | 0.8636 | 0.1318 | 0.6875 | 0.4158 | 0.497 | 0.7767 | 0.9 | 0.0945 | 0.3944 | 0.2883 | 0.4863 | 0.0283 | 0.3556 | 0.4367 | 0.5706 | 0.0197 | 0.2778 | 0.4287 | 0.625 | 0.2359 | 0.5738 | 0.216 | 0.575 | 0.316 | 0.5275 | 0.3073 | 0.578 | 0.0461 | 0.4545 | 0.1095 | 0.6158 | 0.3981 | 0.664 | 0.465 | 0.7391 | 0.3455 | 0.5905 | 0.2229 | 0.4752 | 0.3273 | 0.4977 | 0.2857 | 0.5102 | 0.1987 | 0.3967 | 0.0791 | 0.351 | 0.0914 | 0.4267 | 0.3303 | 0.5991 | 0.2899 | 0.5079 | 0.1375 | 0.4618 | 0.2142 | 0.664 | 0.2611 | 0.5365 | 0.1882 | 0.5766 | 0.2721 | 0.6054 | 0.1285 | 0.6438 | 0.5999 | 0.7143 | 0.443 | 0.669 | 0.1103 | 0.4508 | 0.2498 | 0.5013 | 0.2108 | 0.6 | 0.1866 | 0.3297 | 0.5196 | 0.7962 | 0.3451 | 0.6021 | 0.4372 | 0.52 | 0.4263 | 0.6929 | 0.1846 | 0.331 | 0.1831 | 0.5778 | 0.1003 | 0.4596 | 0.2334 | 0.3769 | 0.095 | 0.3378 | 0.2259 | 0.325 | 0.0473 | 0.2083 | 0.0 | 0.0 | 0.3729 | 0.6 | 0.1254 | 0.2846 | 0.0883 | 0.2963 | 0.442 | 0.6026 | 0.3253 | 0.5714 | 0.0314 | 0.5333 | 0.3438 | 0.6556 | 0.0 | 0.0 | 0.1869 | 0.55 |
| 0.7843 | 29.0 | 7250 | 1.4244 | 0.2706 | 0.3867 | 0.2953 | 0.0661 | 0.1688 | 0.2997 | 0.2868 | 0.5059 | 0.53 | 0.0937 | 0.2972 | 0.5777 | 0.5444 | 0.6842 | 0.3173 | 0.6038 | 0.3306 | 0.5964 | 0.3011 | 0.5367 | 0.2578 | 0.4679 | 0.3048 | 0.68 | 0.3994 | 0.6808 | 0.2126 | 0.6683 | 0.2256 | 0.5 | 0.3593 | 0.5516 | 0.7392 | 0.7556 | 0.3541 | 0.425 | 0.0602 | 0.48 | 0.0489 | 0.237 | 0.1158 | 0.3185 | 0.5443 | 0.7214 | 0.2068 | 0.6 | 0.4207 | 0.6534 | 0.3737 | 0.6157 | 0.2557 | 0.65 | 0.6957 | 0.8636 | 0.1213 | 0.6875 | 0.4205 | 0.503 | 0.7775 | 0.9 | 0.0933 | 0.3845 | 0.2963 | 0.4951 | 0.0296 | 0.3568 | 0.4419 | 0.5765 | 0.0206 | 0.2756 | 0.4305 | 0.6583 | 0.2312 | 0.5738 | 0.2094 | 0.57 | 0.3148 | 0.5425 | 0.3087 | 0.578 | 0.0609 | 0.4364 | 0.1093 | 0.6263 | 0.3997 | 0.672 | 0.4534 | 0.7522 | 0.3598 | 0.5881 | 0.2202 | 0.4743 | 0.3305 | 0.4955 | 0.2909 | 0.5133 | 0.2037 | 0.4082 | 0.0813 | 0.351 | 0.0926 | 0.4367 | 0.3291 | 0.6065 | 0.2949 | 0.5127 | 0.1366 | 0.4618 | 0.2215 | 0.66 | 0.2451 | 0.5404 | 0.1901 | 0.5766 | 0.276 | 0.6 | 0.1156 | 0.5938 | 0.5963 | 0.7095 | 0.4429 | 0.6707 | 0.1112 | 0.4508 | 0.2499 | 0.5026 | 0.1907 | 0.5962 | 0.1809 | 0.3328 | 0.5141 | 0.7769 | 0.3439 | 0.5851 | 0.4504 | 0.53 | 0.4456 | 0.6857 | 0.1786 | 0.331 | 0.1955 | 0.6556 | 0.1018 | 0.4615 | 0.2323 | 0.3769 | 0.0923 | 0.3405 | 0.1925 | 0.325 | 0.0475 | 0.225 | 0.0 | 0.0 | 0.3835 | 0.6043 | 0.1164 | 0.2846 | 0.0886 | 0.2976 | 0.4375 | 0.6 | 0.3266 | 0.5964 | 0.0312 | 0.5333 | 0.336 | 0.6667 | 0.0 | 0.0 | 0.1857 | 0.5667 |
| 0.7805 | 30.0 | 7500 | 1.4235 | 0.2714 | 0.3867 | 0.2968 | 0.0662 | 0.1688 | 0.3006 | 0.2872 | 0.507 | 0.5305 | 0.0952 | 0.2946 | 0.5785 | 0.5441 | 0.6838 | 0.3146 | 0.6 | 0.3302 | 0.6011 | 0.3008 | 0.5347 | 0.2611 | 0.4893 | 0.2997 | 0.68 | 0.4005 | 0.6808 | 0.2124 | 0.6667 | 0.2231 | 0.4964 | 0.3589 | 0.5547 | 0.7419 | 0.7556 | 0.3547 | 0.4375 | 0.0595 | 0.48 | 0.0526 | 0.237 | 0.1136 | 0.3169 | 0.5449 | 0.7214 | 0.2094 | 0.6021 | 0.4232 | 0.6575 | 0.3734 | 0.6176 | 0.2624 | 0.6521 | 0.6967 | 0.8636 | 0.1197 | 0.6875 | 0.419 | 0.5 | 0.7759 | 0.9 | 0.0932 | 0.3887 | 0.2971 | 0.498 | 0.028 | 0.3605 | 0.4376 | 0.5745 | 0.0202 | 0.2778 | 0.4422 | 0.6583 | 0.2384 | 0.5714 | 0.2114 | 0.575 | 0.3106 | 0.545 | 0.3103 | 0.585 | 0.0709 | 0.4364 | 0.1192 | 0.6211 | 0.3989 | 0.668 | 0.4623 | 0.7478 | 0.36 | 0.5905 | 0.2205 | 0.4743 | 0.3316 | 0.4955 | 0.2914 | 0.5055 | 0.2044 | 0.3984 | 0.0793 | 0.349 | 0.0941 | 0.4433 | 0.3273 | 0.6047 | 0.2905 | 0.5079 | 0.1335 | 0.4471 | 0.2086 | 0.656 | 0.2413 | 0.5346 | 0.1865 | 0.5719 | 0.2751 | 0.6054 | 0.1325 | 0.6438 | 0.6047 | 0.719 | 0.4449 | 0.6707 | 0.1137 | 0.4508 | 0.2514 | 0.5078 | 0.1922 | 0.5962 | 0.1817 | 0.3297 | 0.5156 | 0.7962 | 0.3427 | 0.5894 | 0.4477 | 0.5267 | 0.4456 | 0.6929 | 0.1816 | 0.3328 | 0.2073 | 0.6556 | 0.1011 | 0.4635 | 0.233 | 0.3769 | 0.0924 | 0.3432 | 0.1925 | 0.325 | 0.0465 | 0.2 | 0.0 | 0.0 | 0.381 | 0.5957 | 0.1198 | 0.2846 | 0.0901 | 0.2951 | 0.436 | 0.6 | 0.3249 | 0.5714 | 0.0307 | 0.5333 | 0.3399 | 0.6667 | 0.0 | 0.0 | 0.1852 | 0.5667 |
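The AP/AR columns in the results table above are COCO-style detection metrics computed at various IoU thresholds and object sizes. As a point of reference only (the numbers above come from the standard COCO evaluator, not this snippet), here is a minimal sketch of the box IoU those metrics are built on, for axis-aligned boxes in `(x0, y0, x1, y1)` form:

```python
# Illustrative sketch: intersection-over-union of two axis-aligned boxes
# given as (x0, y0, x1, y1). COCO-style AP/AR thresholds predictions on
# this quantity (e.g. AP@0.50, AP@0.75).

def box_iou(a, b):
    """IoU of boxes a and b, each (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # overlap 25, union 175
```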
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"dining table",
"toilet",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
nsugianto/detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1_2359s_adjparam |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet50_finetuned_detrresnet50_lsdocelementdetv1type7_s1_2359s_adjparam
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
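With `lr_scheduler_type: linear` and no warmup, the learning rate decays linearly from the base value (1e-06 here) toward 0 over the training run. A minimal sketch of that schedule — illustrative only; the actual run uses the scheduler built into `transformers`:

```python
# Sketch of a linear learning-rate schedule with optional warmup.
# With warmup_steps=0 this matches the shape of the "linear" scheduler
# named in the hyperparameters above (decay from base_lr to 0).

def linear_lr(step, total_steps, base_lr=1e-06, warmup_steps=0):
    if step < warmup_steps:
        # linear ramp up from 0 to base_lr during warmup
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0, 1000))     # base lr at the first step
print(linear_lr(500, 1000))   # half the base lr at the midpoint
print(linear_lr(1000, 1000))  # decayed to zero at the end
```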
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44",
"label_45",
"label_46",
"label_47",
"label_48",
"label_49",
"label_50",
"label_51",
"label_52",
"label_53",
"label_54",
"label_55",
"label_56",
"label_57",
"label_58",
"label_59",
"label_60",
"label_61",
"label_62",
"label_63",
"label_64",
"label_65",
"label_66",
"label_67",
"label_68",
"label_69",
"label_70",
"label_71",
"label_72",
"label_73",
"label_74",
"label_75",
"label_76",
"label_77",
"label_78",
"label_79",
"label_80",
"label_81",
"label_82",
"label_83",
"label_84",
"label_85",
"label_86",
"label_87",
"label_88",
"label_89",
"label_90"
] |
MTWD/detr-resnet-50-brain-hack |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DETR ResNet-50 fine-tuned on the BrainHack dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the BrainHack VLM dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 8000
- mixed_precision_training: Native AMP
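For reference, the Adam hyperparameters above (lr=1e-05, betas=(0.9, 0.999), epsilon=1e-08) plug into the standard Adam update. A single-parameter sketch of that math — illustrative only, not the training code itself:

```python
# Illustrative scalar Adam update with bias correction, using the
# hyperparameters listed above. `t` is the 1-based step count.

def adam_step(param, grad, m, v, t, lr=1e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
print(p)  # the bias-corrected first step moves the parameter by about lr
```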
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"grey and white light aircraft",
"grey fighter jet",
"grey helicopter",
"orange and black fighter jet",
"white commercial aircraft",
"white and yellow commercial aircraft",
"white light aircraft",
"blue and green fighter plane",
"green fighter plane",
"white, blue, and red commercial aircraft",
"white and blue fighter plane",
"yellow and red light aircraft",
"grey drone",
"black camouflage fighter jet",
"blue and red light aircraft",
"black and white commercial aircraft",
"blue commercial aircraft",
"black cargo aircraft",
"yellow, black, and red helicopter",
"white, red, and green fighter plane",
"white and blue fighter jet",
"blue and grey fighter jet",
"green and grey helicopter",
"blue and red commercial aircraft",
"yellow fighter plane",
"yellow and black fighter plane",
"green and black camouflage helicopter",
"black helicopter",
"red light aircraft",
"green and brown camouflage helicopter",
"black fighter plane",
"grey and yellow fighter plane",
"white and black cargo aircraft",
"green light aircraft",
"white and black fighter plane",
"white and black light aircraft",
"white drone",
"yellow light aircraft",
"white and red light aircraft",
"yellow and green helicopter",
"white and black drone",
"white fighter jet",
"grey, red, and blue commercial aircraft",
"white cargo aircraft",
"white, black, and grey missile",
"red, white, and blue light aircraft",
"red and black drone",
"green and white fighter plane",
"red fighter plane",
"grey commercial aircraft",
"grey and red fighter jet",
"grey and black fighter plane",
"grey camouflage fighter jet",
"green camouflage helicopter",
"white and red fighter plane",
"blue and yellow fighter jet",
"blue and white helicopter",
"white, black, and red drone",
"white and blue cargo aircraft",
"green and yellow fighter plane",
"red and white helicopter",
"grey and green cargo aircraft",
"yellow, red, and grey helicopter",
"silver fighter plane",
"blue, yellow, and green fighter plane",
"red and white fighter plane",
"grey missile",
"white and black fighter jet",
"blue missile",
"grey light aircraft",
"red fighter jet",
"yellow missile",
"white helicopter",
"black drone",
"yellow helicopter",
"black fighter jet",
"white and red helicopter",
"grey and white fighter plane",
"blue helicopter",
"white and blue light aircraft",
"grey and black helicopter",
"blue and yellow helicopter",
"white and blue commercial aircraft",
"blue and white missile",
"black and brown camouflage helicopter",
"red and white fighter jet",
"white and orange commercial aircraft",
"black and white cargo aircraft",
"white and red missile",
"white and red commercial aircraft",
"white missile",
"white and black helicopter",
"yellow, red, and blue fighter plane",
"red helicopter",
"black and white missile",
"green helicopter",
"green and brown camouflage fighter plane",
"grey cargo aircraft",
"green and brown camouflage fighter jet",
"yellow commercial aircraft",
"white and grey helicopter",
"white fighter plane",
"silver and blue fighter plane",
"red and white missile",
"green missile",
"black and orange drone",
"grey fighter plane",
"black and yellow missile",
"orange light aircraft",
"white and blue helicopter",
"red and white light aircraft",
"grey and red missile",
"black and yellow drone",
"white and orange light aircraft",
"blue, yellow, and white cargo aircraft",
"yellow fighter jet",
"blue camouflage fighter jet",
"green and black missile",
"blue, yellow, and black helicopter",
"white and red fighter jet",
"grey and red commercial aircraft",
"red, white, and blue fighter jet",
"white, red, and blue commercial aircraft",
"blue and white commercial aircraft",
"blue and white light aircraft",
"red and grey missile"
] |
anirban22/detr-resnet-50-med_fracture |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4"
] |
Judy07/bone-fracture-DETA |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
NRPU/detr-finetuned-balloon-v4 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15"
] |
PekingU/rtdetr_r101vd |
# Model Card for RT-DETR
## Table of Contents
1. [Model Details](#model-details)
2. [Model Sources](#model-sources)
3. [How to Get Started with the Model](#how-to-get-started-with-the-model)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Architecture and Objective](#model-architecture-and-objective)
7. [Citation](#citation)
## Model Details

> The YOLO series has become the most popular framework for real-time object detection due to its reasonable trade-off between speed and accuracy.
However, we observe that the speed and accuracy of YOLOs are negatively affected by the NMS.
Recently, end-to-end Transformer-based detectors (DETRs) have provided an alternative to eliminating NMS.
Nevertheless, the high computational cost limits their practicality and hinders them from fully exploiting the advantage of excluding NMS.
In this paper, we propose the Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge that addresses the above dilemma.
We build RT-DETR in two steps, drawing on the advanced DETR:
first we focus on maintaining accuracy while improving speed, followed by maintaining speed while improving accuracy.
Specifically, we design an efficient hybrid encoder to expeditiously process multi-scale features by decoupling intra-scale interaction and cross-scale fusion to improve speed.
Then, we propose the uncertainty-minimal query selection to provide high-quality initial queries to the decoder, thereby improving accuracy.
In addition, RT-DETR supports flexible speed tuning by adjusting the number of decoder layers to adapt to various scenarios without retraining.
Our RT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4 GPU, outperforming previously advanced YOLOs in both speed and accuracy.
We also develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and M models).
Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy and about 21 times in FPS.
After pre-training with Objects365, RT-DETR-R50 / R101 achieves 55.3% / 56.2% AP. The project page: this [https URL](https://zhao-yian.github.io/RTDETR/).
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed to the Hub.
- **Developed by:** Yian Zhao and Sangbum Choi
- **Funded by:** National Key R&D Program of China (No. 2022ZD0118201), Natural Science Foundation of China (No. 61972217, 32071459, 62176249, 62006133, 62271465), and the Shenzhen Medical Research Funds in China (No. B2302037).
- **Shared by:** Sangbum Choi
- **Model type:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **License:** Apache-2.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **Repository:** https://github.com/lyuwenyu/RT-DETR
- **Paper:** https://arxiv.org/abs/2304.08069
- **Demo:** [RT-DETR Tracking](https://huggingface.co/spaces/merve/RT-DETR-tracking-coco)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r101vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r101vd")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
This should output
```
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The RT-DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We conduct experiments on COCO and Objects365 datasets, where RT-DETR is trained on COCO train2017 and validated on COCO val2017 dataset.
We report the standard COCO metrics, including AP (averaged over uniformly sampled IoU thresholds ranging from 0.50-0.95 with a step size of 0.05),
AP50, AP75, as well as AP at different scales: APS, APM, APL.
### Preprocessing
Images are resized to 640x640 pixels, rescaled to the [0, 1] range, and normalized with `image_mean=[0.485, 0.456, 0.406]` and `image_std=[0.229, 0.224, 0.225]`.
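As an illustrative sketch only (in practice, use `RTDetrImageProcessor` as in the example above), the preprocessing described here can be reproduced by hand roughly as follows; the `preprocess` helper name is ours, and the mean/std values are the ones quoted in this section:

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image) -> np.ndarray:
    # Resize to the fixed 640x640 input resolution.
    image = image.convert("RGB").resize((640, 640))
    # Rescale pixel values to [0, 1].
    pixels = np.asarray(image).astype(np.float32) / 255.0
    # Normalize with the mean/std quoted above.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    pixels = (pixels - mean) / std
    # HWC -> CHW, the layout the model expects.
    return pixels.transpose(2, 0, 1)
```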
### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation
| Model | #Epochs | #Params (M) | GFLOPs | FPS_bs=1 | AP (val) | AP50 (val) | AP75 (val) | AP-s (val) | AP-m (val) | AP-l (val) |
|----------------------------|---------|-------------|--------|----------|--------|-----------|-----------|----------|----------|----------|
| RT-DETR-R18 | 72 | 20 | 60.7 | 217 | 46.5 | 63.8 | 50.4 | 28.4 | 49.8 | 63.0 |
| RT-DETR-R34 | 72 | 31 | 91.0 | 172 | 48.5 | 66.2 | 52.3 | 30.2 | 51.9 | 66.2 |
| RT-DETR R50 | 72 | 42 | 136 | 108 | 53.1 | 71.3 | 57.7 | 34.8 | 58.0 | 70.0 |
| RT-DETR R101| 72 | 76 | 259 | 74 | 54.3 | 72.7 | 58.6 | 36.0 | 58.8 | 72.1 |
| RT-DETR-R18 (Objects 365 pretrained) | 60 | 20 | 61 | 217 | 49.2 | 66.6 | 53.5 | 33.2 | 52.3 | 64.8 |
| RT-DETR-R50 (Objects 365 pretrained) | 24 | 42 | 136 | 108 | 55.3 | 73.4 | 60.1 | 37.9 | 59.9 | 71.8 |
| RT-DETR-R101 (Objects 365 pretrained) | 24 | 76 | 259 | 74 | 56.2 | 74.6 | 61.3 | 38.3 | 60.5 | 73.5 |
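As a quick sanity check on the table above, the `FPS_bs=1` column can be converted to an approximate per-image latency; this snippet and its `fps` mapping are only an illustration of the reported numbers:

```python
# FPS_bs=1 values from the evaluation table above.
fps = {"RT-DETR-R18": 217, "RT-DETR-R34": 172, "RT-DETR-R50": 108, "RT-DETR-R101": 74}

# Per-image latency in milliseconds at batch size 1.
latency_ms = {name: 1000.0 / f for name, f in fps.items()}
```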
### Model Architecture and Objective

Overview of RT-DETR. We feed the features from the last three stages of the backbone into the encoder. The efficient hybrid
encoder transforms multi-scale features into a sequence of image features through the Attention-based Intra-scale Feature Interaction (AIFI)
and the CNN-based Cross-scale Feature Fusion (CCFF). Then, the uncertainty-minimal query selection selects a fixed number of encoder
features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object
queries to generate categories and boxes.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lv2023detrs,
title={DETRs Beat YOLOs on Real-time Object Detection},
author={Yian Zhao and Wenyu Lv and Shangliang Xu and Jinman Wei and Guanzhong Wang and Qingqing Dang and Yi Liu and Jie Chen},
year={2023},
eprint={2304.08069},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Sangbum Choi](https://huggingface.co/danelcsb)
[Pavel Iakubovskii](https://huggingface.co/qubvel-hf)
| [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
PekingU/rtdetr_r101vd_coco_o365 |
# Model Card for RT-DETR
## Table of Contents
1. [Model Details](#model-details)
2. [Model Sources](#model-sources)
3. [How to Get Started with the Model](#how-to-get-started-with-the-model)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Architecture and Objective](#model-architecture-and-objective)
7. [Citation](#citation)
## Model Details

> The YOLO series has become the most popular framework for real-time object detection due to its reasonable trade-off between speed and accuracy.
However, we observe that the speed and accuracy of YOLOs are negatively affected by the NMS.
Recently, end-to-end Transformer-based detectors (DETRs) have provided an alternative to eliminating NMS.
Nevertheless, the high computational cost limits their practicality and hinders them from fully exploiting the advantage of excluding NMS.
In this paper, we propose the Real-Time DEtection TRansformer (RT-DETR), the first real-time end-to-end object detector to our best knowledge that addresses the above dilemma.
We build RT-DETR in two steps, drawing on the advanced DETR:
first we focus on maintaining accuracy while improving speed, followed by maintaining speed while improving accuracy.
Specifically, we design an efficient hybrid encoder to expeditiously process multi-scale features by decoupling intra-scale interaction and cross-scale fusion to improve speed.
Then, we propose the uncertainty-minimal query selection to provide high-quality initial queries to the decoder, thereby improving accuracy.
In addition, RT-DETR supports flexible speed tuning by adjusting the number of decoder layers to adapt to various scenarios without retraining.
Our RT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4 GPU, outperforming previously advanced YOLOs in both speed and accuracy.
We also develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and M models).
Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy and about 21 times in FPS.
After pre-training with Objects365, RT-DETR-R50 / R101 achieves 55.3% / 56.2% AP. The project page: this [https URL](https://zhao-yian.github.io/RTDETR/).
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed to the Hub.
- **Developed by:** Yian Zhao and Sangbum Choi
- **Funded by:** National Key R&D Program of China (No. 2022ZD0118201), Natural Science Foundation of China (No. 61972217, 32071459, 62176249, 62006133, 62271465), and the Shenzhen Medical Research Funds in China (No. B2302037).
- **Shared by:** Sangbum Choi
- **Model type:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **License:** Apache-2.0
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [RT-DETR](https://huggingface.co/docs/transformers/main/en/model_doc/rt_detr)
- **Repository:** https://github.com/lyuwenyu/RT-DETR
- **Paper:** https://arxiv.org/abs/2304.08069
- **Demo:** [RT-DETR Tracking](https://huggingface.co/spaces/merve/RT-DETR-tracking-coco)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
import requests
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r101vd_coco_o365")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r101vd_coco_o365")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
This should output
```
sofa: 0.97 [0.14, 0.38, 640.13, 476.21]
cat: 0.96 [343.38, 24.28, 640.14, 371.5]
cat: 0.96 [13.23, 54.18, 318.98, 472.22]
remote: 0.95 [40.11, 73.44, 175.96, 118.48]
remote: 0.92 [333.73, 76.58, 369.97, 186.99]
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The RT-DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We conduct experiments on COCO and Objects365 datasets, where RT-DETR is trained on COCO train2017 and validated on COCO val2017 dataset.
We report the standard COCO metrics, including AP (averaged over uniformly sampled IoU thresholds ranging from 0.50-0.95 with a step size of 0.05),
AP50, AP75, as well as AP at different scales: APS, APM, APL.
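The IoU-threshold averaging described above can be sketched as follows; `box_iou` is a minimal illustrative helper, not the evaluation code used by the authors (which is the standard COCO toolkit):

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] corner format."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# The ten IoU thresholds that COCO AP averages over: 0.50, 0.55, ..., 0.95.
thresholds = [0.50 + 0.05 * i for i in range(10)]
```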
### Preprocessing
Images are resized to 640x640 pixels, rescaled to the [0, 1] range, and normalized with `image_mean=[0.485, 0.456, 0.406]` and `image_std=[0.229, 0.224, 0.225]`.
### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation
| Model | #Epochs | #Params (M) | GFLOPs | FPS_bs=1 | AP (val) | AP50 (val) | AP75 (val) | AP-s (val) | AP-m (val) | AP-l (val) |
|----------------------------|---------|-------------|--------|----------|--------|-----------|-----------|----------|----------|----------|
| RT-DETR-R18 | 72 | 20 | 60.7 | 217 | 46.5 | 63.8 | 50.4 | 28.4 | 49.8 | 63.0 |
| RT-DETR-R34 | 72 | 31 | 91.0 | 172 | 48.5 | 66.2 | 52.3 | 30.2 | 51.9 | 66.2 |
| RT-DETR R50 | 72 | 42 | 136 | 108 | 53.1 | 71.3 | 57.7 | 34.8 | 58.0 | 70.0 |
| RT-DETR R101| 72 | 76 | 259 | 74 | 54.3 | 72.7 | 58.6 | 36.0 | 58.8 | 72.1 |
| RT-DETR-R18 (Objects 365 pretrained) | 60 | 20 | 61 | 217 | 49.2 | 66.6 | 53.5 | 33.2 | 52.3 | 64.8 |
| RT-DETR-R50 (Objects 365 pretrained) | 24 | 42 | 136 | 108 | 55.3 | 73.4 | 60.1 | 37.9 | 59.9 | 71.8 |
| RT-DETR-R101 (Objects 365 pretrained) | 24 | 76 | 259 | 74 | 56.2 | 74.6 | 61.3 | 38.3 | 60.5 | 73.5 |
### Model Architecture and Objective

Overview of RT-DETR. We feed the features from the last three stages of the backbone into the encoder. The efficient hybrid
encoder transforms multi-scale features into a sequence of image features through the Attention-based Intra-scale Feature Interaction (AIFI)
and the CNN-based Cross-scale Feature Fusion (CCFF). Then, the uncertainty-minimal query selection selects a fixed number of encoder
features to serve as initial object queries for the decoder. Finally, the decoder with auxiliary prediction heads iteratively optimizes object
queries to generate categories and boxes.
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lv2023detrs,
title={DETRs Beat YOLOs on Real-time Object Detection},
author={Yian Zhao and Wenyu Lv and Shangliang Xu and Jinman Wei and Guanzhong Wang and Qingqing Dang and Yi Liu and Jie Chen},
year={2023},
eprint={2304.08069},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Model Card Authors
[Sangbum Choi](https://huggingface.co/danelcsb)
[Pavel Iakubovskii](https://huggingface.co/qubvel-hf)
| [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
stoneseok/finetuning_1 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14"
] |
ding-dong-dang-e/stoneseok_finetuning_2 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"fire_hydrant",
"car",
"truck",
"stop",
"motorcycle",
"402",
"403",
"426",
"412",
"432",
"389",
"391",
"traffic_lane_yellow_solid",
"어린이보호구역",
"주차금지/주정차금지"
] |
MG31/multi40 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"fire_hydrant",
"car",
"truck",
"stop",
"motorcycle",
"402",
"403",
"426",
"412",
"432",
"389",
"391",
"traffic_lane_yellow_solid",
"어린이보호구역",
"주차금지/주정차금지"
] |
NathanOD/detr-resnet-50_fine_tuned_nyu_depth_v2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_fine_tuned_nyu_depth_v2
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
| [
"n/a",
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"n/a",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"n/a",
"backpack",
"umbrella",
"n/a",
"n/a",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"n/a",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"n/a",
"dining table",
"n/a",
"n/a",
"toilet",
"n/a",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"n/a",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
doktor47/table-structure-cleveland-v1alpha |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3"
] |
MTWD/detr-resnet-50-brain-hack-v2-gcp |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DETR ResNet-50 fine-tuned on the BrainHack dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the BrainHack VLM dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
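The `total_train_batch_size` above follows from gradient accumulation: gradients from several small per-device batches are accumulated before each optimizer step, so the effective batch size is their product. A minimal sketch of the arithmetic:

```python
train_batch_size = 4             # per-device batch, as listed above
gradient_accumulation_steps = 8  # forward/backward passes per optimizer step

# One optimizer step sees 8 accumulated batches of 4 examples each:
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```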
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"grey and white light aircraft",
"grey fighter jet",
"grey helicopter",
"orange and black fighter jet",
"white commercial aircraft",
"white and yellow commercial aircraft",
"white light aircraft",
"blue and green fighter plane",
"green fighter plane",
"white, blue, and red commercial aircraft",
"white and blue fighter plane",
"yellow and red light aircraft",
"grey drone",
"black camouflage fighter jet",
"blue and red light aircraft",
"black and white commercial aircraft",
"blue commercial aircraft",
"black cargo aircraft",
"yellow, black, and red helicopter",
"white, red, and green fighter plane",
"white and blue fighter jet",
"blue and grey fighter jet",
"green and grey helicopter",
"blue and red commercial aircraft",
"yellow fighter plane",
"yellow and black fighter plane",
"green and black camouflage helicopter",
"black helicopter",
"red light aircraft",
"green and brown camouflage helicopter",
"black fighter plane",
"grey and yellow fighter plane",
"white and black cargo aircraft",
"green light aircraft",
"white and black fighter plane",
"white and black light aircraft",
"white drone",
"yellow light aircraft",
"white and red light aircraft",
"yellow and green helicopter",
"white and black drone",
"white fighter jet",
"grey, red, and blue commercial aircraft",
"white cargo aircraft",
"white, black, and grey missile",
"red, white, and blue light aircraft",
"red and black drone",
"green and white fighter plane",
"red fighter plane",
"grey commercial aircraft",
"grey and red fighter jet",
"grey and black fighter plane",
"grey camouflage fighter jet",
"green camouflage helicopter",
"white and red fighter plane",
"blue and yellow fighter jet",
"blue and white helicopter",
"white, black, and red drone",
"white and blue cargo aircraft",
"green and yellow fighter plane",
"red and white helicopter",
"grey and green cargo aircraft",
"yellow, red, and grey helicopter",
"silver fighter plane",
"blue, yellow, and green fighter plane",
"red and white fighter plane",
"grey missile",
"white and black fighter jet",
"blue missile",
"grey light aircraft",
"red fighter jet",
"yellow missile",
"white helicopter",
"black drone",
"yellow helicopter",
"black fighter jet",
"white and red helicopter",
"grey and white fighter plane",
"blue helicopter",
"white and blue light aircraft",
"grey and black helicopter",
"blue and yellow helicopter",
"white and blue commercial aircraft",
"blue and white missile",
"black and brown camouflage helicopter",
"red and white fighter jet",
"white and orange commercial aircraft",
"black and white cargo aircraft",
"white and red missile",
"white and red commercial aircraft",
"white missile",
"white and black helicopter",
"yellow, red, and blue fighter plane",
"red helicopter",
"black and white missile",
"green helicopter",
"green and brown camouflage fighter plane",
"grey cargo aircraft",
"green and brown camouflage fighter jet",
"yellow commercial aircraft",
"white and grey helicopter",
"white fighter plane",
"silver and blue fighter plane",
"red and white missile",
"green missile",
"black and orange drone",
"grey fighter plane",
"black and yellow missile",
"orange light aircraft",
"white and blue helicopter",
"red and white light aircraft",
"grey and red missile",
"black and yellow drone",
"white and orange light aircraft",
"blue, yellow, and white cargo aircraft",
"yellow fighter jet",
"blue camouflage fighter jet",
"green and black missile",
"blue, yellow, and black helicopter",
"white and red fighter jet",
"grey and red commercial aircraft",
"red, white, and blue fighter jet",
"white, red, and blue commercial aircraft",
"blue and white commercial aircraft",
"blue and white light aircraft",
"red and grey missile"
] |
MTWD/detr-finetuned-cppe-5-10k-steps |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-finetuned-cppe-5-10k-steps
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2634
- Map: 0.3099
- Map 50: 0.596
- Map 75: 0.2729
- Map Small: 0.0887
- Map Medium: 0.2527
- Map Large: 0.4839
- Mar 1: 0.2938
- Mar 10: 0.4679
- Mar 100: 0.4801
- Mar Small: 0.1859
- Mar Medium: 0.4322
- Mar Large: 0.6546
- Map Coverall: 0.5624
- Mar 100 Coverall: 0.6788
- Map Face Shield: 0.287
- Mar 100 Face Shield: 0.5304
- Map Gloves: 0.2456
- Mar 100 Gloves: 0.4116
- Map Goggles: 0.17
- Mar 100 Goggles: 0.3815
- Map Mask: 0.2843
- Mar 100 Mask: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
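With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from the initial value to zero over the course of training. A sketch under those assumptions (the step count of 10700 is inferred from the results table: 107 steps per epoch over 100 epochs):

```python
def linear_lr(step, total_steps, base_lr=5e-05):
    """Linear decay from base_lr to 0 over total_steps,
    matching a linear scheduler with zero warmup steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 10700  # 107 steps/epoch * 100 epochs, per the results table
print(linear_lr(0, total))      # 5e-05 at the start of training
print(linear_lr(total, total))  # 0.0 at the end
```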
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| 2.4009 | 1.0 | 107 | 2.2015 | 0.0541 | 0.1103 | 0.048 | 0.0047 | 0.0306 | 0.0619 | 0.072 | 0.1681 | 0.1971 | 0.0667 | 0.1397 | 0.2614 | 0.251 | 0.5788 | 0.0 | 0.0 | 0.0054 | 0.1513 | 0.0 | 0.0 | 0.0142 | 0.2556 |
| 1.9619 | 2.0 | 214 | 2.0515 | 0.0655 | 0.141 | 0.0509 | 0.0082 | 0.0445 | 0.074 | 0.0896 | 0.1823 | 0.2168 | 0.0864 | 0.1683 | 0.2591 | 0.2839 | 0.577 | 0.0 | 0.0 | 0.0139 | 0.2156 | 0.0 | 0.0 | 0.0299 | 0.2911 |
| 1.9217 | 3.0 | 321 | 2.0117 | 0.081 | 0.1819 | 0.0624 | 0.0141 | 0.0511 | 0.0972 | 0.0824 | 0.1792 | 0.201 | 0.0816 | 0.1546 | 0.2373 | 0.3187 | 0.5275 | 0.0 | 0.0 | 0.0238 | 0.1879 | 0.0 | 0.0 | 0.0624 | 0.2893 |
| 1.8179 | 4.0 | 428 | 2.0698 | 0.0751 | 0.172 | 0.0567 | 0.017 | 0.0596 | 0.0916 | 0.0923 | 0.1794 | 0.2009 | 0.0586 | 0.1418 | 0.264 | 0.3068 | 0.5838 | 0.0051 | 0.0392 | 0.0318 | 0.1871 | 0.0 | 0.0 | 0.032 | 0.1947 |
| 1.7692 | 5.0 | 535 | 1.8755 | 0.1017 | 0.2381 | 0.0751 | 0.0271 | 0.0783 | 0.1362 | 0.1155 | 0.2277 | 0.2474 | 0.0956 | 0.1799 | 0.3309 | 0.3285 | 0.5833 | 0.0316 | 0.1392 | 0.0431 | 0.2371 | 0.0 | 0.0 | 0.1056 | 0.2773 |
| 1.7545 | 6.0 | 642 | 1.9933 | 0.0781 | 0.1909 | 0.0567 | 0.0273 | 0.0808 | 0.0976 | 0.1122 | 0.2294 | 0.2492 | 0.0865 | 0.1995 | 0.3073 | 0.2437 | 0.5523 | 0.0194 | 0.1937 | 0.0316 | 0.2259 | 0.0004 | 0.0154 | 0.0952 | 0.2587 |
| 1.6804 | 7.0 | 749 | 1.7828 | 0.1131 | 0.2575 | 0.0886 | 0.0424 | 0.0937 | 0.146 | 0.1329 | 0.2577 | 0.2782 | 0.0998 | 0.2156 | 0.3716 | 0.3505 | 0.5973 | 0.0416 | 0.2228 | 0.0332 | 0.2214 | 0.0047 | 0.0523 | 0.1357 | 0.2973 |
| 1.6654 | 8.0 | 856 | 1.9042 | 0.1083 | 0.2233 | 0.0989 | 0.0375 | 0.0856 | 0.1512 | 0.119 | 0.2209 | 0.2425 | 0.0997 | 0.184 | 0.3262 | 0.3451 | 0.5788 | 0.0116 | 0.1291 | 0.0387 | 0.2254 | 0.0005 | 0.0077 | 0.1456 | 0.2716 |
| 1.6088 | 9.0 | 963 | 1.6943 | 0.138 | 0.2833 | 0.1157 | 0.0426 | 0.1149 | 0.1627 | 0.1488 | 0.2863 | 0.312 | 0.1169 | 0.2619 | 0.3846 | 0.4569 | 0.6378 | 0.0613 | 0.2722 | 0.049 | 0.2946 | 0.005 | 0.0846 | 0.118 | 0.2707 |
| 1.5138 | 10.0 | 1070 | 1.7314 | 0.1228 | 0.2824 | 0.0887 | 0.0513 | 0.0925 | 0.1645 | 0.1352 | 0.2706 | 0.2938 | 0.1068 | 0.2145 | 0.4247 | 0.3955 | 0.6171 | 0.0583 | 0.2278 | 0.0518 | 0.2853 | 0.0018 | 0.1031 | 0.1066 | 0.2356 |
| 1.5027 | 11.0 | 1177 | 1.6321 | 0.1525 | 0.3448 | 0.1173 | 0.0483 | 0.1088 | 0.2251 | 0.1712 | 0.3401 | 0.3617 | 0.145 | 0.2811 | 0.5185 | 0.4591 | 0.6631 | 0.0774 | 0.3494 | 0.0626 | 0.267 | 0.0169 | 0.2477 | 0.1462 | 0.2813 |
| 1.4677 | 12.0 | 1284 | 2.0406 | 0.1067 | 0.2526 | 0.0761 | 0.0463 | 0.0982 | 0.1369 | 0.1356 | 0.2686 | 0.2953 | 0.1097 | 0.2651 | 0.3816 | 0.2757 | 0.4491 | 0.0601 | 0.2924 | 0.0522 | 0.2647 | 0.0058 | 0.1877 | 0.1395 | 0.2827 |
| 1.5372 | 13.0 | 1391 | 1.7044 | 0.1409 | 0.3314 | 0.1034 | 0.031 | 0.1141 | 0.2107 | 0.1673 | 0.3155 | 0.3429 | 0.1238 | 0.2776 | 0.4824 | 0.4096 | 0.645 | 0.0739 | 0.2785 | 0.0663 | 0.3125 | 0.0267 | 0.1769 | 0.1283 | 0.3018 |
| 1.4344 | 14.0 | 1498 | 1.7464 | 0.1481 | 0.3142 | 0.1286 | 0.0403 | 0.1117 | 0.2116 | 0.1794 | 0.3172 | 0.341 | 0.1046 | 0.2626 | 0.4956 | 0.4386 | 0.6342 | 0.0826 | 0.3241 | 0.0722 | 0.2982 | 0.0146 | 0.1892 | 0.1323 | 0.2591 |
| 1.4229 | 15.0 | 1605 | 1.6013 | 0.1702 | 0.3561 | 0.1491 | 0.0498 | 0.138 | 0.2444 | 0.2054 | 0.3503 | 0.3777 | 0.1116 | 0.3163 | 0.5359 | 0.4423 | 0.6284 | 0.1069 | 0.3937 | 0.074 | 0.3201 | 0.0344 | 0.2292 | 0.1933 | 0.3169 |
| 1.4626 | 16.0 | 1712 | 1.6146 | 0.1503 | 0.3349 | 0.1185 | 0.0503 | 0.1274 | 0.2149 | 0.1799 | 0.3338 | 0.3528 | 0.1451 | 0.2873 | 0.5005 | 0.4158 | 0.6068 | 0.0656 | 0.2962 | 0.0705 | 0.2982 | 0.0132 | 0.2538 | 0.1864 | 0.3089 |
| 1.3855 | 17.0 | 1819 | 1.5998 | 0.161 | 0.3442 | 0.1209 | 0.0365 | 0.133 | 0.2319 | 0.1891 | 0.3623 | 0.3843 | 0.1518 | 0.3293 | 0.5154 | 0.4465 | 0.6369 | 0.0996 | 0.3443 | 0.0744 | 0.3129 | 0.0265 | 0.3185 | 0.158 | 0.3089 |
| 1.3531 | 18.0 | 1926 | 1.5662 | 0.1806 | 0.3906 | 0.1542 | 0.0565 | 0.1444 | 0.2585 | 0.1988 | 0.3576 | 0.3809 | 0.137 | 0.3324 | 0.5082 | 0.4618 | 0.6239 | 0.1185 | 0.3519 | 0.0802 | 0.3232 | 0.0338 | 0.2723 | 0.2084 | 0.3333 |
| 1.3227 | 19.0 | 2033 | 1.5096 | 0.1851 | 0.3799 | 0.1724 | 0.0602 | 0.1443 | 0.2829 | 0.2075 | 0.3738 | 0.3996 | 0.1532 | 0.337 | 0.5536 | 0.4967 | 0.6432 | 0.1089 | 0.3418 | 0.0842 | 0.3241 | 0.0392 | 0.3415 | 0.1964 | 0.3476 |
| 1.3247 | 20.0 | 2140 | 1.6135 | 0.1667 | 0.3672 | 0.1345 | 0.0526 | 0.1349 | 0.2451 | 0.1985 | 0.343 | 0.3669 | 0.1391 | 0.2932 | 0.5312 | 0.417 | 0.5865 | 0.0951 | 0.3481 | 0.0787 | 0.3321 | 0.0545 | 0.2585 | 0.1881 | 0.3093 |
| 1.3415 | 21.0 | 2247 | 1.5346 | 0.1817 | 0.3858 | 0.1538 | 0.0469 | 0.1346 | 0.2846 | 0.208 | 0.3571 | 0.3861 | 0.1497 | 0.3171 | 0.5396 | 0.4553 | 0.6315 | 0.1341 | 0.3494 | 0.093 | 0.3665 | 0.0299 | 0.26 | 0.1963 | 0.3231 |
| 1.2842 | 22.0 | 2354 | 1.5234 | 0.1726 | 0.3864 | 0.1334 | 0.0522 | 0.1229 | 0.2889 | 0.1963 | 0.3537 | 0.3769 | 0.1175 | 0.3062 | 0.5458 | 0.4454 | 0.6347 | 0.1191 | 0.3354 | 0.1017 | 0.3281 | 0.0394 | 0.3015 | 0.1572 | 0.2849 |
| 1.2719 | 23.0 | 2461 | 1.5006 | 0.1954 | 0.4054 | 0.1682 | 0.036 | 0.1567 | 0.2928 | 0.2095 | 0.3811 | 0.4064 | 0.1288 | 0.3401 | 0.5824 | 0.486 | 0.6928 | 0.1188 | 0.343 | 0.1206 | 0.3317 | 0.033 | 0.3185 | 0.2187 | 0.3458 |
| 1.2579 | 24.0 | 2568 | 1.5025 | 0.1841 | 0.3897 | 0.1493 | 0.0533 | 0.1558 | 0.2693 | 0.2153 | 0.3749 | 0.3993 | 0.1524 | 0.3551 | 0.5345 | 0.4639 | 0.6613 | 0.1269 | 0.3304 | 0.1017 | 0.3371 | 0.0397 | 0.3462 | 0.1882 | 0.3218 |
| 1.2554 | 25.0 | 2675 | 1.4877 | 0.1827 | 0.3852 | 0.1541 | 0.0538 | 0.1533 | 0.2853 | 0.2052 | 0.3666 | 0.3852 | 0.1488 | 0.3337 | 0.5221 | 0.4777 | 0.655 | 0.1071 | 0.3253 | 0.0929 | 0.3214 | 0.0292 | 0.2846 | 0.2066 | 0.3396 |
| 1.2252 | 26.0 | 2782 | 1.5077 | 0.1859 | 0.4021 | 0.1599 | 0.0402 | 0.1372 | 0.2948 | 0.216 | 0.3777 | 0.3912 | 0.1225 | 0.3302 | 0.5665 | 0.5031 | 0.6662 | 0.0959 | 0.3848 | 0.1001 | 0.296 | 0.0513 | 0.2985 | 0.1789 | 0.3107 |
| 1.2314 | 27.0 | 2889 | 1.4769 | 0.2077 | 0.4372 | 0.1761 | 0.063 | 0.163 | 0.3194 | 0.2307 | 0.3987 | 0.4126 | 0.152 | 0.3529 | 0.5795 | 0.5016 | 0.6667 | 0.1184 | 0.3797 | 0.1413 | 0.3455 | 0.0468 | 0.3138 | 0.2302 | 0.3573 |
| 1.1945 | 28.0 | 2996 | 1.4541 | 0.2162 | 0.445 | 0.1861 | 0.071 | 0.1627 | 0.3288 | 0.2301 | 0.4001 | 0.4205 | 0.1289 | 0.3657 | 0.5911 | 0.5064 | 0.6716 | 0.1342 | 0.3873 | 0.1525 | 0.3406 | 0.0425 | 0.3446 | 0.2453 | 0.3582 |
| 1.1846 | 29.0 | 3103 | 1.4515 | 0.2166 | 0.4623 | 0.1829 | 0.0571 | 0.168 | 0.3319 | 0.233 | 0.373 | 0.3875 | 0.142 | 0.3222 | 0.5468 | 0.4959 | 0.6631 | 0.149 | 0.3291 | 0.1491 | 0.3522 | 0.0821 | 0.2862 | 0.2071 | 0.3071 |
| 1.1851 | 30.0 | 3210 | 1.3980 | 0.2164 | 0.4486 | 0.1748 | 0.0629 | 0.1659 | 0.3566 | 0.2247 | 0.39 | 0.4034 | 0.1436 | 0.3397 | 0.572 | 0.4939 | 0.6586 | 0.125 | 0.3519 | 0.1682 | 0.3857 | 0.0753 | 0.2892 | 0.2195 | 0.3316 |
| 1.1445 | 31.0 | 3317 | 1.4215 | 0.2134 | 0.4586 | 0.1612 | 0.071 | 0.1498 | 0.3323 | 0.2341 | 0.3731 | 0.3852 | 0.1735 | 0.3155 | 0.5395 | 0.5085 | 0.6626 | 0.1335 | 0.3291 | 0.1568 | 0.3491 | 0.0514 | 0.2538 | 0.2166 | 0.3316 |
| 1.1553 | 32.0 | 3424 | 1.4463 | 0.2218 | 0.4616 | 0.1857 | 0.0786 | 0.1532 | 0.3475 | 0.2378 | 0.406 | 0.4218 | 0.1866 | 0.3423 | 0.576 | 0.4982 | 0.6653 | 0.1843 | 0.4 | 0.1508 | 0.3621 | 0.0442 | 0.3231 | 0.2314 | 0.3587 |
| 1.1677 | 33.0 | 3531 | 1.4201 | 0.2164 | 0.4494 | 0.1867 | 0.0566 | 0.1779 | 0.3404 | 0.2327 | 0.3895 | 0.4036 | 0.1341 | 0.3431 | 0.5815 | 0.4834 | 0.6685 | 0.1661 | 0.3861 | 0.1583 | 0.3362 | 0.0474 | 0.2769 | 0.2266 | 0.3502 |
| 1.1419 | 34.0 | 3638 | 1.4068 | 0.2188 | 0.4421 | 0.1864 | 0.0565 | 0.1688 | 0.3454 | 0.2314 | 0.3822 | 0.396 | 0.1336 | 0.3287 | 0.5702 | 0.5032 | 0.6599 | 0.1144 | 0.3253 | 0.157 | 0.3799 | 0.066 | 0.2723 | 0.2536 | 0.3427 |
| 1.1598 | 35.0 | 3745 | 1.4241 | 0.2107 | 0.4404 | 0.1806 | 0.0513 | 0.1618 | 0.3457 | 0.2361 | 0.3867 | 0.4023 | 0.1441 | 0.326 | 0.5865 | 0.4877 | 0.6604 | 0.1439 | 0.3557 | 0.1516 | 0.383 | 0.0728 | 0.3092 | 0.1975 | 0.3031 |
| 1.1291 | 36.0 | 3852 | 1.4203 | 0.2222 | 0.4631 | 0.1865 | 0.066 | 0.1776 | 0.3377 | 0.2356 | 0.3772 | 0.3887 | 0.1168 | 0.3203 | 0.5732 | 0.505 | 0.6644 | 0.1357 | 0.3215 | 0.1785 | 0.3723 | 0.0542 | 0.2446 | 0.2373 | 0.3404 |
| 1.096 | 37.0 | 3959 | 1.3647 | 0.2364 | 0.4779 | 0.1936 | 0.0782 | 0.1885 | 0.3579 | 0.2595 | 0.4158 | 0.4281 | 0.1397 | 0.3684 | 0.6116 | 0.5178 | 0.6649 | 0.174 | 0.4127 | 0.1687 | 0.3705 | 0.0803 | 0.3523 | 0.241 | 0.34 |
| 1.0876 | 38.0 | 4066 | 1.3592 | 0.2274 | 0.4462 | 0.1904 | 0.0677 | 0.1866 | 0.3538 | 0.2517 | 0.4023 | 0.4187 | 0.1562 | 0.3569 | 0.5983 | 0.4942 | 0.6554 | 0.1468 | 0.3772 | 0.1491 | 0.3781 | 0.0961 | 0.3246 | 0.2509 | 0.3582 |
| 1.075 | 39.0 | 4173 | 1.3624 | 0.237 | 0.482 | 0.1985 | 0.0592 | 0.1942 | 0.3643 | 0.2539 | 0.4113 | 0.4271 | 0.1483 | 0.3677 | 0.6145 | 0.5251 | 0.6689 | 0.2008 | 0.4468 | 0.1737 | 0.3759 | 0.0632 | 0.3108 | 0.2222 | 0.3333 |
| 1.0633 | 40.0 | 4280 | 1.3988 | 0.2331 | 0.4787 | 0.2002 | 0.0776 | 0.1857 | 0.3435 | 0.2543 | 0.4143 | 0.429 | 0.1688 | 0.3526 | 0.6167 | 0.5078 | 0.6689 | 0.1643 | 0.4063 | 0.163 | 0.3415 | 0.0704 | 0.36 | 0.2599 | 0.3684 |
| 1.0683 | 41.0 | 4387 | 1.3223 | 0.2351 | 0.4708 | 0.2088 | 0.0783 | 0.1803 | 0.3628 | 0.2589 | 0.4306 | 0.4437 | 0.1782 | 0.3797 | 0.6223 | 0.5129 | 0.6523 | 0.1562 | 0.4595 | 0.1643 | 0.3817 | 0.0694 | 0.3431 | 0.2727 | 0.3818 |
| 1.0511 | 42.0 | 4494 | 1.3158 | 0.2453 | 0.4814 | 0.2198 | 0.0672 | 0.1869 | 0.3895 | 0.2569 | 0.4248 | 0.4392 | 0.1584 | 0.3756 | 0.6266 | 0.5327 | 0.673 | 0.1771 | 0.4456 | 0.1861 | 0.3888 | 0.0701 | 0.3246 | 0.2603 | 0.364 |
| 1.0551 | 43.0 | 4601 | 1.3500 | 0.2295 | 0.479 | 0.2007 | 0.0772 | 0.1804 | 0.3539 | 0.2472 | 0.3923 | 0.4077 | 0.1766 | 0.3437 | 0.5737 | 0.4986 | 0.6446 | 0.1586 | 0.3696 | 0.1691 | 0.3705 | 0.0699 | 0.3015 | 0.2511 | 0.352 |
| 1.0341 | 44.0 | 4708 | 1.3675 | 0.2456 | 0.4914 | 0.2131 | 0.0711 | 0.1904 | 0.3756 | 0.255 | 0.4115 | 0.425 | 0.1642 | 0.3599 | 0.6005 | 0.5271 | 0.6644 | 0.1833 | 0.4278 | 0.1683 | 0.3536 | 0.0662 | 0.2969 | 0.2829 | 0.3822 |
| 1.0243 | 45.0 | 4815 | 1.3230 | 0.2437 | 0.4966 | 0.207 | 0.0669 | 0.1826 | 0.3743 | 0.2617 | 0.4168 | 0.4305 | 0.1695 | 0.3629 | 0.5994 | 0.5194 | 0.6586 | 0.1745 | 0.4152 | 0.1893 | 0.3795 | 0.0703 | 0.3308 | 0.265 | 0.3684 |
| 1.0241 | 46.0 | 4922 | 1.3187 | 0.2588 | 0.5071 | 0.2314 | 0.0786 | 0.1978 | 0.4126 | 0.2605 | 0.4227 | 0.4347 | 0.1652 | 0.3725 | 0.6235 | 0.5458 | 0.6644 | 0.1982 | 0.4278 | 0.1938 | 0.3625 | 0.0753 | 0.3262 | 0.2808 | 0.3924 |
| 1.0048 | 47.0 | 5029 | 1.3505 | 0.2531 | 0.5007 | 0.2184 | 0.0709 | 0.2029 | 0.3913 | 0.2573 | 0.4229 | 0.4368 | 0.1606 | 0.3813 | 0.6183 | 0.5414 | 0.6604 | 0.192 | 0.4278 | 0.1871 | 0.3768 | 0.0701 | 0.34 | 0.2748 | 0.3791 |
| 0.9981 | 48.0 | 5136 | 1.3362 | 0.2565 | 0.5238 | 0.212 | 0.0778 | 0.2031 | 0.407 | 0.2666 | 0.4311 | 0.4462 | 0.1734 | 0.3905 | 0.6217 | 0.5228 | 0.6622 | 0.1931 | 0.4316 | 0.2016 | 0.3848 | 0.094 | 0.3677 | 0.2708 | 0.3844 |
| 1.0127 | 49.0 | 5243 | 1.3352 | 0.2625 | 0.5179 | 0.2331 | 0.0792 | 0.2182 | 0.3988 | 0.2685 | 0.4368 | 0.4503 | 0.1771 | 0.397 | 0.6254 | 0.5292 | 0.6514 | 0.2139 | 0.4646 | 0.1983 | 0.3772 | 0.0746 | 0.3692 | 0.2966 | 0.3889 |
| 0.9859 | 50.0 | 5350 | 1.3692 | 0.2409 | 0.5163 | 0.1868 | 0.0749 | 0.1902 | 0.3812 | 0.2533 | 0.4147 | 0.4263 | 0.1627 | 0.3588 | 0.6132 | 0.5044 | 0.6459 | 0.206 | 0.4506 | 0.1907 | 0.3723 | 0.0675 | 0.3292 | 0.2358 | 0.3333 |
| 1.0001 | 51.0 | 5457 | 1.3070 | 0.2558 | 0.5255 | 0.2112 | 0.0744 | 0.2043 | 0.4002 | 0.2606 | 0.4254 | 0.4382 | 0.202 | 0.3623 | 0.6226 | 0.5164 | 0.6477 | 0.1989 | 0.4253 | 0.1925 | 0.4098 | 0.0936 | 0.3477 | 0.2777 | 0.3604 |
| 0.9725 | 52.0 | 5564 | 1.3240 | 0.2516 | 0.5075 | 0.2051 | 0.0853 | 0.2073 | 0.3728 | 0.2582 | 0.4193 | 0.4324 | 0.2053 | 0.3656 | 0.5998 | 0.533 | 0.6505 | 0.1893 | 0.4038 | 0.1737 | 0.3915 | 0.0938 | 0.34 | 0.2681 | 0.3764 |
| 0.9618 | 53.0 | 5671 | 1.3466 | 0.2592 | 0.5248 | 0.2268 | 0.088 | 0.2087 | 0.3949 | 0.2596 | 0.4196 | 0.4346 | 0.1842 | 0.3713 | 0.61 | 0.5231 | 0.6568 | 0.202 | 0.4038 | 0.1989 | 0.3879 | 0.1008 | 0.36 | 0.2713 | 0.3644 |
| 0.9775 | 54.0 | 5778 | 1.3195 | 0.2684 | 0.5322 | 0.2327 | 0.0851 | 0.2242 | 0.411 | 0.2726 | 0.4355 | 0.4464 | 0.1777 | 0.3968 | 0.6122 | 0.5223 | 0.6532 | 0.2329 | 0.4506 | 0.2149 | 0.404 | 0.1117 | 0.3692 | 0.2599 | 0.3551 |
| 0.9391 | 55.0 | 5885 | 1.3204 | 0.2688 | 0.5417 | 0.2249 | 0.0816 | 0.2112 | 0.4168 | 0.2738 | 0.4409 | 0.4507 | 0.1725 | 0.3875 | 0.6407 | 0.5387 | 0.664 | 0.2201 | 0.4734 | 0.2148 | 0.3987 | 0.0962 | 0.3477 | 0.2743 | 0.3698 |
| 0.9541 | 56.0 | 5992 | 1.2930 | 0.2616 | 0.5317 | 0.2214 | 0.0889 | 0.2025 | 0.4198 | 0.2653 | 0.4307 | 0.4439 | 0.1817 | 0.3764 | 0.6269 | 0.5406 | 0.6617 | 0.1994 | 0.4405 | 0.2144 | 0.4138 | 0.0954 | 0.3492 | 0.2581 | 0.3542 |
| 0.9568 | 57.0 | 6099 | 1.2989 | 0.2688 | 0.5365 | 0.2288 | 0.089 | 0.2143 | 0.4042 | 0.2656 | 0.4275 | 0.443 | 0.1895 | 0.3806 | 0.6086 | 0.543 | 0.6604 | 0.2119 | 0.4316 | 0.2265 | 0.4161 | 0.1127 | 0.3508 | 0.2499 | 0.356 |
| 0.9645 | 58.0 | 6206 | 1.3244 | 0.2699 | 0.5519 | 0.2355 | 0.0917 | 0.2067 | 0.4132 | 0.2753 | 0.4351 | 0.446 | 0.1724 | 0.3728 | 0.6313 | 0.5372 | 0.6518 | 0.2168 | 0.457 | 0.217 | 0.3964 | 0.1166 | 0.3615 | 0.2618 | 0.3631 |
| 0.9285 | 59.0 | 6313 | 1.3308 | 0.2661 | 0.5385 | 0.2281 | 0.0796 | 0.2149 | 0.4029 | 0.2671 | 0.4376 | 0.4493 | 0.1661 | 0.3847 | 0.6328 | 0.5365 | 0.6536 | 0.2171 | 0.4506 | 0.2137 | 0.3946 | 0.1008 | 0.38 | 0.2622 | 0.3676 |
| 0.9151 | 60.0 | 6420 | 1.3176 | 0.2749 | 0.5582 | 0.2233 | 0.0797 | 0.2237 | 0.4157 | 0.2752 | 0.4417 | 0.4556 | 0.1726 | 0.3899 | 0.6438 | 0.5504 | 0.6649 | 0.2197 | 0.4443 | 0.2224 | 0.4058 | 0.1294 | 0.4046 | 0.2524 | 0.3582 |
| 0.9231 | 61.0 | 6527 | 1.3251 | 0.2736 | 0.5457 | 0.2328 | 0.0858 | 0.2211 | 0.4141 | 0.28 | 0.4476 | 0.4649 | 0.1793 | 0.4134 | 0.6428 | 0.5545 | 0.6797 | 0.2151 | 0.4835 | 0.2106 | 0.3853 | 0.1338 | 0.4046 | 0.254 | 0.3711 |
| 0.9036 | 62.0 | 6634 | 1.3277 | 0.2707 | 0.5603 | 0.2283 | 0.0871 | 0.2209 | 0.4124 | 0.2692 | 0.4346 | 0.4524 | 0.1708 | 0.3979 | 0.6312 | 0.5524 | 0.6698 | 0.24 | 0.4696 | 0.1991 | 0.4134 | 0.1045 | 0.3415 | 0.2574 | 0.3676 |
| 0.9324 | 63.0 | 6741 | 1.3008 | 0.2719 | 0.5527 | 0.2376 | 0.0811 | 0.2206 | 0.4372 | 0.2749 | 0.4544 | 0.471 | 0.2098 | 0.4175 | 0.6394 | 0.5352 | 0.6653 | 0.2224 | 0.5076 | 0.2045 | 0.4103 | 0.1294 | 0.3985 | 0.2681 | 0.3733 |
| 0.9173 | 64.0 | 6848 | 1.3151 | 0.2771 | 0.5587 | 0.2383 | 0.085 | 0.2308 | 0.4134 | 0.2692 | 0.4427 | 0.4583 | 0.1829 | 0.4031 | 0.6247 | 0.5393 | 0.6599 | 0.2378 | 0.4911 | 0.2174 | 0.4054 | 0.1284 | 0.3692 | 0.2626 | 0.3658 |
| 0.8888 | 65.0 | 6955 | 1.3025 | 0.2699 | 0.5572 | 0.2266 | 0.0791 | 0.2175 | 0.4154 | 0.2734 | 0.4414 | 0.4628 | 0.1806 | 0.4118 | 0.6332 | 0.5383 | 0.6694 | 0.2183 | 0.481 | 0.2194 | 0.4067 | 0.1118 | 0.3754 | 0.2616 | 0.3813 |
| 0.9065 | 66.0 | 7062 | 1.2758 | 0.2802 | 0.5637 | 0.2433 | 0.0902 | 0.2236 | 0.4358 | 0.2789 | 0.4549 | 0.4706 | 0.1955 | 0.4238 | 0.6349 | 0.5384 | 0.6658 | 0.2309 | 0.481 | 0.2173 | 0.4036 | 0.1419 | 0.4231 | 0.2726 | 0.3796 |
| 0.897 | 67.0 | 7169 | 1.3021 | 0.2805 | 0.5637 | 0.2403 | 0.0809 | 0.2307 | 0.4264 | 0.2737 | 0.4464 | 0.4641 | 0.1876 | 0.4117 | 0.6413 | 0.5409 | 0.6626 | 0.2351 | 0.4797 | 0.2338 | 0.4201 | 0.1229 | 0.3754 | 0.2697 | 0.3827 |
| 0.891 | 68.0 | 7276 | 1.2872 | 0.2814 | 0.5506 | 0.2468 | 0.0858 | 0.2227 | 0.4234 | 0.2822 | 0.4526 | 0.4707 | 0.2078 | 0.411 | 0.6428 | 0.5462 | 0.6698 | 0.2382 | 0.4759 | 0.2301 | 0.4187 | 0.1228 | 0.4123 | 0.2695 | 0.3769 |
| 0.8814 | 69.0 | 7383 | 1.3457 | 0.2767 | 0.5416 | 0.2428 | 0.0826 | 0.2218 | 0.4196 | 0.2723 | 0.4518 | 0.4679 | 0.19 | 0.4114 | 0.6432 | 0.54 | 0.6671 | 0.2364 | 0.4987 | 0.2268 | 0.4009 | 0.0994 | 0.3908 | 0.281 | 0.3822 |
| 0.8809 | 70.0 | 7490 | 1.2750 | 0.2746 | 0.5531 | 0.2313 | 0.0862 | 0.2193 | 0.4354 | 0.2746 | 0.454 | 0.4707 | 0.2008 | 0.4142 | 0.6397 | 0.5393 | 0.6739 | 0.2295 | 0.4962 | 0.2255 | 0.4125 | 0.1085 | 0.3831 | 0.2701 | 0.388 |
| 0.8715 | 71.0 | 7597 | 1.2738 | 0.2859 | 0.5705 | 0.2399 | 0.0812 | 0.2312 | 0.4419 | 0.28 | 0.4552 | 0.4697 | 0.1821 | 0.4226 | 0.6393 | 0.5552 | 0.6829 | 0.2552 | 0.4962 | 0.2122 | 0.3848 | 0.1366 | 0.3954 | 0.2705 | 0.3893 |
| 0.8582 | 72.0 | 7704 | 1.2803 | 0.2854 | 0.5632 | 0.2472 | 0.0868 | 0.2308 | 0.4345 | 0.2814 | 0.4576 | 0.471 | 0.2112 | 0.4218 | 0.639 | 0.5483 | 0.6689 | 0.2621 | 0.5 | 0.2144 | 0.3969 | 0.125 | 0.3938 | 0.2772 | 0.3956 |
| 0.8798 | 73.0 | 7811 | 1.2991 | 0.2921 | 0.5724 | 0.2629 | 0.0801 | 0.2378 | 0.4484 | 0.291 | 0.4619 | 0.4757 | 0.1914 | 0.4178 | 0.6572 | 0.5481 | 0.6721 | 0.265 | 0.4937 | 0.2187 | 0.3969 | 0.1445 | 0.4231 | 0.2841 | 0.3929 |
| 0.8687 | 74.0 | 7918 | 1.2795 | 0.2751 | 0.5415 | 0.2449 | 0.0743 | 0.2224 | 0.433 | 0.2813 | 0.4525 | 0.4716 | 0.1991 | 0.4103 | 0.6554 | 0.5447 | 0.6788 | 0.2197 | 0.4886 | 0.2193 | 0.4009 | 0.1251 | 0.4062 | 0.2668 | 0.3836 |
| 0.8579 | 75.0 | 8025 | 1.2965 | 0.2827 | 0.5554 | 0.2457 | 0.0834 | 0.2274 | 0.439 | 0.2805 | 0.4568 | 0.4697 | 0.1736 | 0.4164 | 0.6491 | 0.5527 | 0.6739 | 0.2374 | 0.4772 | 0.2244 | 0.4022 | 0.1261 | 0.4154 | 0.2728 | 0.38 |
| 0.8444 | 76.0 | 8132 | 1.2692 | 0.286 | 0.5675 | 0.2456 | 0.0845 | 0.2298 | 0.452 | 0.2844 | 0.4569 | 0.4701 | 0.18 | 0.4201 | 0.6516 | 0.5577 | 0.6716 | 0.2472 | 0.4759 | 0.2247 | 0.4085 | 0.1367 | 0.4154 | 0.2637 | 0.3791 |
| 0.8655 | 77.0 | 8239 | 1.2678 | 0.2914 | 0.5743 | 0.2529 | 0.0802 | 0.2412 | 0.4548 | 0.2854 | 0.4584 | 0.4711 | 0.1862 | 0.42 | 0.6467 | 0.5584 | 0.6725 | 0.2484 | 0.4899 | 0.2155 | 0.4027 | 0.1618 | 0.4015 | 0.2731 | 0.3889 |
| 0.8424 | 78.0 | 8346 | 1.2859 | 0.2871 | 0.5697 | 0.2433 | 0.0843 | 0.2281 | 0.4464 | 0.2792 | 0.4522 | 0.4682 | 0.1924 | 0.4191 | 0.6409 | 0.5575 | 0.6739 | 0.2389 | 0.4797 | 0.2161 | 0.3964 | 0.1444 | 0.4 | 0.2787 | 0.3911 |
| 0.8373 | 79.0 | 8453 | 1.2665 | 0.2952 | 0.5753 | 0.2589 | 0.0917 | 0.2446 | 0.4617 | 0.2813 | 0.4646 | 0.4779 | 0.2011 | 0.4253 | 0.6561 | 0.5594 | 0.6761 | 0.2481 | 0.4962 | 0.2305 | 0.417 | 0.1652 | 0.4062 | 0.2727 | 0.3942 |
| 0.841 | 80.0 | 8560 | 1.2720 | 0.2872 | 0.5588 | 0.2596 | 0.07 | 0.2247 | 0.4592 | 0.2784 | 0.4556 | 0.4691 | 0.1936 | 0.403 | 0.6516 | 0.557 | 0.6779 | 0.242 | 0.5025 | 0.2246 | 0.3929 | 0.1446 | 0.3908 | 0.268 | 0.3813 |
| 0.8207 | 81.0 | 8667 | 1.2764 | 0.2914 | 0.5691 | 0.2633 | 0.0891 | 0.2287 | 0.4633 | 0.285 | 0.4633 | 0.4752 | 0.2079 | 0.4133 | 0.6549 | 0.5568 | 0.6712 | 0.2442 | 0.5266 | 0.2274 | 0.3924 | 0.1589 | 0.4092 | 0.2697 | 0.3764 |
| 0.8151 | 82.0 | 8774 | 1.2726 | 0.292 | 0.5664 | 0.2575 | 0.0899 | 0.2383 | 0.4532 | 0.2838 | 0.4575 | 0.4703 | 0.2029 | 0.4171 | 0.6444 | 0.5498 | 0.6734 | 0.2589 | 0.4949 | 0.2227 | 0.3969 | 0.1539 | 0.3954 | 0.2746 | 0.3911 |
| 0.8118 | 83.0 | 8881 | 1.2791 | 0.2909 | 0.5647 | 0.2645 | 0.0803 | 0.2349 | 0.4543 | 0.2866 | 0.4635 | 0.477 | 0.206 | 0.4167 | 0.659 | 0.554 | 0.673 | 0.2542 | 0.5203 | 0.2257 | 0.4045 | 0.1511 | 0.4 | 0.2695 | 0.3871 |
| 0.8101 | 84.0 | 8988 | 1.2793 | 0.2971 | 0.579 | 0.2669 | 0.0869 | 0.2459 | 0.4703 | 0.2877 | 0.4546 | 0.4646 | 0.1792 | 0.4059 | 0.6583 | 0.5589 | 0.6797 | 0.2452 | 0.4696 | 0.2401 | 0.4107 | 0.1757 | 0.3862 | 0.2656 | 0.3769 |
| 0.8002 | 85.0 | 9095 | 1.2649 | 0.2916 | 0.5718 | 0.2551 | 0.0885 | 0.2432 | 0.4412 | 0.2867 | 0.4527 | 0.4686 | 0.1907 | 0.4215 | 0.6414 | 0.5598 | 0.6824 | 0.2361 | 0.4823 | 0.2377 | 0.4134 | 0.1537 | 0.38 | 0.2708 | 0.3849 |
| 0.8174 | 86.0 | 9202 | 1.2749 | 0.2965 | 0.5785 | 0.2577 | 0.0941 | 0.238 | 0.4623 | 0.2907 | 0.457 | 0.4704 | 0.1919 | 0.4187 | 0.6478 | 0.5581 | 0.6829 | 0.2565 | 0.4886 | 0.2404 | 0.4138 | 0.157 | 0.3815 | 0.2704 | 0.3849 |
| 0.8068 | 87.0 | 9309 | 1.2789 | 0.2965 | 0.5781 | 0.2649 | 0.0934 | 0.2408 | 0.4661 | 0.2908 | 0.4632 | 0.4755 | 0.2036 | 0.4241 | 0.6541 | 0.5563 | 0.6793 | 0.252 | 0.4975 | 0.2397 | 0.4165 | 0.1639 | 0.4015 | 0.2704 | 0.3827 |
| 0.8127 | 88.0 | 9416 | 1.2787 | 0.304 | 0.5921 | 0.2676 | 0.0994 | 0.2394 | 0.4855 | 0.2914 | 0.4612 | 0.4722 | 0.2184 | 0.4099 | 0.6616 | 0.5532 | 0.6716 | 0.273 | 0.4962 | 0.2379 | 0.4058 | 0.1739 | 0.3954 | 0.2821 | 0.392 |
| 0.7997 | 89.0 | 9523 | 1.2658 | 0.3001 | 0.5766 | 0.2665 | 0.0946 | 0.2411 | 0.4831 | 0.288 | 0.4546 | 0.4652 | 0.1939 | 0.4102 | 0.6501 | 0.5544 | 0.6703 | 0.2613 | 0.4785 | 0.2433 | 0.4103 | 0.1727 | 0.3862 | 0.2685 | 0.3809 |
| 0.8024 | 90.0 | 9630 | 1.2593 | 0.3068 | 0.5908 | 0.2714 | 0.0915 | 0.2519 | 0.4812 | 0.2898 | 0.464 | 0.4767 | 0.2179 | 0.4211 | 0.6596 | 0.5597 | 0.6784 | 0.2731 | 0.5038 | 0.2445 | 0.4143 | 0.1755 | 0.3954 | 0.2813 | 0.3916 |
| 0.7874 | 91.0 | 9737 | 1.2550 | 0.3075 | 0.5939 | 0.2726 | 0.0901 | 0.256 | 0.4814 | 0.2923 | 0.4638 | 0.4769 | 0.2131 | 0.4226 | 0.6615 | 0.5592 | 0.6757 | 0.2796 | 0.5101 | 0.2407 | 0.4098 | 0.1739 | 0.3954 | 0.284 | 0.3933 |
| 0.7839 | 92.0 | 9844 | 1.2722 | 0.305 | 0.5851 | 0.2756 | 0.0861 | 0.2542 | 0.4784 | 0.2901 | 0.4597 | 0.4779 | 0.1932 | 0.4302 | 0.6527 | 0.5611 | 0.6779 | 0.2631 | 0.5051 | 0.2454 | 0.408 | 0.1728 | 0.4 | 0.2828 | 0.3987 |
| 0.7865 | 93.0 | 9951 | 1.2689 | 0.3044 | 0.5847 | 0.2748 | 0.0853 | 0.2531 | 0.4763 | 0.2897 | 0.4636 | 0.4764 | 0.1984 | 0.4191 | 0.6621 | 0.561 | 0.6784 | 0.2708 | 0.5 | 0.2446 | 0.4125 | 0.1603 | 0.3892 | 0.2856 | 0.4018 |
| 0.7676 | 94.0 | 10058 | 1.2595 | 0.3073 | 0.5925 | 0.2782 | 0.0846 | 0.2547 | 0.4834 | 0.2948 | 0.4629 | 0.4746 | 0.1864 | 0.4242 | 0.6547 | 0.5625 | 0.6757 | 0.2772 | 0.5114 | 0.2458 | 0.4085 | 0.1709 | 0.3815 | 0.2802 | 0.396 |
| 0.7873 | 95.0 | 10165 | 1.2731 | 0.3075 | 0.5954 | 0.2752 | 0.085 | 0.2526 | 0.4841 | 0.2921 | 0.4613 | 0.4736 | 0.1966 | 0.4234 | 0.6534 | 0.5582 | 0.6721 | 0.2776 | 0.5063 | 0.245 | 0.404 | 0.1738 | 0.3862 | 0.2831 | 0.3996 |
| 0.7912 | 96.0 | 10272 | 1.2655 | 0.3077 | 0.5965 | 0.2658 | 0.0871 | 0.2541 | 0.4834 | 0.2928 | 0.4656 | 0.4784 | 0.2007 | 0.4261 | 0.659 | 0.5578 | 0.6703 | 0.2803 | 0.5241 | 0.2425 | 0.4116 | 0.1728 | 0.3862 | 0.2851 | 0.4 |
| 0.7586 | 97.0 | 10379 | 1.2672 | 0.3099 | 0.6012 | 0.2733 | 0.0839 | 0.2545 | 0.4886 | 0.2937 | 0.4665 | 0.479 | 0.1899 | 0.429 | 0.6617 | 0.5596 | 0.677 | 0.2901 | 0.5278 | 0.2436 | 0.4098 | 0.1694 | 0.3831 | 0.2869 | 0.3973 |
| 0.7833 | 98.0 | 10486 | 1.2673 | 0.3092 | 0.6021 | 0.2687 | 0.0887 | 0.2519 | 0.4851 | 0.2938 | 0.4657 | 0.4792 | 0.1963 | 0.4315 | 0.6522 | 0.5617 | 0.6775 | 0.2847 | 0.5304 | 0.2454 | 0.4125 | 0.1726 | 0.3815 | 0.2818 | 0.3942 |
| 0.7676 | 99.0 | 10593 | 1.2649 | 0.3089 | 0.594 | 0.2724 | 0.0895 | 0.2514 | 0.4833 | 0.2947 | 0.4666 | 0.4791 | 0.1832 | 0.4304 | 0.6531 | 0.5616 | 0.6779 | 0.2868 | 0.5278 | 0.246 | 0.4156 | 0.1689 | 0.3785 | 0.2811 | 0.3956 |
| 0.7743 | 100.0 | 10700 | 1.2634 | 0.3099 | 0.596 | 0.2729 | 0.0887 | 0.2527 | 0.4839 | 0.2938 | 0.4679 | 0.4801 | 0.1859 | 0.4322 | 0.6546 | 0.5624 | 0.6788 | 0.287 | 0.5304 | 0.2456 | 0.4116 | 0.17 | 0.3815 | 0.2843 | 0.3982 |
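The "Map 50" and "Map 75" columns above report mean average precision at IoU thresholds of 0.5 and 0.75. As an illustrative sketch only (not part of the training code), the IoU between two boxes given as `(x_min, y_min, x_max, y_max)` can be computed like this:

```python
# Illustrative IoU computation underlying the mAP@0.5 / mAP@0.75 columns.
# Boxes are (x_min, y_min, x_max, y_max) in the same coordinate system.

def iou(box_a, box_b):
    # Intersection rectangle (clamped to zero when the boxes don't overlap).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 0.142857...
```

During mAP evaluation, predictions are matched to ground-truth boxes only when their IoU clears the given threshold, which is why the stricter Map 75 column sits below Map 50 throughout the table.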
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu118
- Datasets 2.19.2
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
flipwooyoung/detr-finetuned-balloon-v2 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44",
"label_45",
"label_46",
"label_47",
"label_48",
"label_49",
"label_50",
"label_51",
"label_52",
"label_53",
"label_54",
"label_55",
"label_56",
"label_57",
"label_58",
"label_59",
"label_60",
"label_61",
"label_62",
"label_63",
"label_64",
"label_65",
"label_66",
"label_67",
"label_68",
"label_69",
"label_70",
"label_71",
"label_72",
"label_73",
"label_74",
"label_75",
"label_76",
"label_77",
"label_78",
"label_79",
"label_80",
"label_81",
"label_82",
"label_83",
"label_84",
"label_85",
"label_86",
"label_87",
"label_88",
"label_89",
"label_90",
"label_91",
"label_92",
"label_93",
"label_94",
"label_95",
"label_96",
"label_97",
"label_98",
"label_99",
"label_100",
"label_101",
"label_102",
"label_103",
"label_104",
"label_105",
"label_106",
"label_107",
"label_108",
"label_109",
"label_110",
"label_111",
"label_112",
"label_113",
"label_114",
"label_115",
"label_116",
"label_117",
"label_118",
"label_119",
"label_120",
"label_121",
"label_122",
"label_123",
"label_124",
"label_125",
"label_126",
"label_127",
"label_128",
"label_129"
] |
itesl/yolos-tiny-fashionpedia-remapped |
YOLOS (tiny-sized) model for fashion object detection.
Fine-tuned on a remapped Fashionpedia dataset. | [
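The exact remapping is not documented in this card. As a hypothetical illustration only (the mapping below is invented for the example), translating sparse source category ids into a model's contiguous id space might look like:

```python
# Hypothetical illustration of remapping dataset label ids into a
# contiguous id space; the actual mapping used for this checkpoint
# is not documented in this card.
original_to_remapped = {0: 0, 2: 1, 5: 2}  # sparse source ids -> dense ids

def remap_annotations(labels):
    # Keep only categories present in the mapping, translating their ids.
    return [original_to_remapped[l] for l in labels if l in original_to_remapped]

print(remap_annotations([0, 5, 7, 2]))  # -> [0, 2, 1]
```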
"n/a",
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"n/a",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"n/a",
"backpack",
"umbrella",
"n/a",
"n/a",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"n/a",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"n/a",
"dining table",
"n/a",
"n/a",
"toilet",
"n/a",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"n/a",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
itesl/yolos-tiny-fashionpedia |
YOLOS (tiny-sized) model for fashion object detection.
Fine-tuned on the Fashionpedia dataset. | [
"n/a",
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"n/a",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"n/a",
"backpack",
"umbrella",
"n/a",
"n/a",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"n/a",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"n/a",
"dining table",
"n/a",
"n/a",
"toilet",
"n/a",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"n/a",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
itesl/yolos-tiny-clothing |
YOLOS (tiny-sized) model for fashion object detection.
Fine-tuned on a clothing detection dataset whose labels were translated into English (originally Spanish). | [
"n/a",
"person",
"bicycle",
"car",
"motorcycle",
"airplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"n/a",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"n/a",
"backpack",
"umbrella",
"n/a",
"n/a",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"n/a",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"couch",
"potted plant",
"bed",
"n/a",
"dining table",
"n/a",
"n/a",
"toilet",
"n/a",
"tv",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"n/a",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
amaye15/results |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
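For reference, the bullet list above can be collected into a plain config dict (key names follow `transformers.TrainingArguments` conventions; this is an illustrative sketch, not the exact training script used):

```python
# Hyperparameters from this card, as a plain dict. Key names follow
# transformers.TrainingArguments conventions; this is a sketch only.
training_config = {
    "learning_rate": 1e-5,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "linear",
    "max_steps": 2000,
}

# The effective (total) train batch size is the per-device batch size
# times the number of gradient-accumulation steps.
total_train_batch_size = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # -> 32, matching "total_train_batch_size: 32"
```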
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.4991 | 0.3137 | 100 | 3.6244 |
| 3.1728 | 0.3451 | 110 | 3.4319 |
| 2.7857 | 0.3765 | 120 | 3.2574 |
| 3.0606 | 0.4078 | 130 | 3.1484 |
| 2.6704 | 0.4392 | 140 | 3.0390 |
| 2.7332 | 0.4706 | 150 | 2.9956 |
| 2.8436 | 0.5020 | 160 | 2.9110 |
| 2.8464 | 0.5333 | 170 | 2.8551 |
| 2.1192 | 0.5647 | 180 | 2.8163 |
| 2.6557 | 0.5961 | 190 | 2.8145 |
| 2.3224 | 0.6275 | 200 | 2.7858 |
| 2.6007 | 0.6588 | 210 | 2.7064 |
| 2.6117 | 0.6902 | 220 | 2.6602 |
| 2.4549 | 0.7216 | 230 | 2.6368 |
| 2.5487 | 0.7529 | 240 | 2.6029 |
| 2.6048 | 0.7843 | 250 | 2.5573 |
| 2.0348 | 0.8157 | 260 | 2.5203 |
| 2.4741 | 0.8471 | 270 | 2.4935 |
| 2.5855 | 0.8784 | 280 | 2.4731 |
| 2.1076 | 0.9098 | 290 | 2.4283 |
| 2.3073 | 0.9412 | 300 | 2.3896 |
| 2.214 | 0.9725 | 310 | 2.3919 |
| 2.2078 | 1.0039 | 320 | 2.3343 |
| 2.2391 | 1.0353 | 330 | 2.2970 |
| 2.3607 | 1.0667 | 340 | 2.2921 |
| 2.0244 | 1.0980 | 350 | 2.2751 |
| 2.251 | 1.1294 | 360 | 2.2713 |
| 2.1133 | 1.1608 | 370 | 2.2701 |
| 2.124 | 1.1922 | 380 | 2.2618 |
| 2.1989 | 1.2235 | 390 | 2.2429 |
| 2.2315 | 1.2549 | 400 | 2.2463 |
| 2.2398 | 1.2863 | 410 | 2.2386 |
| 2.261 | 1.3176 | 420 | 2.2360 |
| 2.2144 | 1.3490 | 430 | 2.2427 |
| 2.3344 | 1.3804 | 440 | 2.2452 |
| 2.0412 | 1.4118 | 450 | 2.2092 |
| 2.0854 | 1.4431 | 460 | 2.2197 |
| 2.1636 | 1.4745 | 470 | 2.1830 |
| 1.7776 | 1.5059 | 480 | 2.1904 |
| 2.1118 | 1.5373 | 490 | 2.2194 |
| 2.1203 | 1.5686 | 500 | 2.1978 |
| 2.2468 | 1.6 | 510 | 2.1968 |
| 2.2992 | 1.6314 | 520 | 2.1963 |
| 2.2596 | 1.6627 | 530 | 2.1816 |
| 2.1836 | 1.6941 | 540 | 2.1800 |
| 2.2672 | 1.7255 | 550 | 2.1679 |
| 2.0702 | 1.7569 | 560 | 2.1607 |
| 2.5606 | 1.7882 | 570 | 2.1568 |
| 2.1392 | 1.8196 | 580 | 2.1578 |
| 1.9255 | 1.8510 | 590 | 2.1799 |
| 2.0995 | 1.8824 | 600 | 2.1995 |
| 2.1153 | 1.9137 | 610 | 2.1741 |
| 2.2068 | 1.9451 | 620 | 2.1638 |
| 1.8698 | 1.9765 | 630 | 2.1819 |
| 1.8849 | 2.0078 | 640 | 2.1807 |
| 2.0291 | 2.0392 | 650 | 2.1636 |
| 2.2092 | 2.0706 | 660 | 2.1356 |
| 2.1117 | 2.1020 | 670 | 2.1682 |
| 1.8318 | 2.1333 | 680 | 2.1719 |
| 1.9884 | 2.1647 | 690 | 2.2114 |
| 2.1933 | 2.1961 | 700 | 2.1526 |
| 2.2953 | 2.2275 | 710 | 2.1525 |
| 2.2841 | 2.2588 | 720 | 2.1417 |
| 1.9865 | 2.2902 | 730 | 2.1399 |
| 1.9193 | 2.3216 | 740 | 2.1313 |
| 1.8882 | 2.3529 | 750 | 2.1362 |
| 1.8967 | 2.3843 | 760 | 2.1454 |
| 1.9424 | 2.4157 | 770 | 2.1356 |
| 1.8531 | 2.4471 | 780 | 2.1340 |
| 1.9435 | 2.4784 | 790 | 2.1413 |
| 2.0455 | 2.5098 | 800 | 2.1558 |
| 1.9384 | 2.5412 | 810 | 2.1519 |
| 2.0826 | 2.5725 | 820 | 2.1381 |
| 2.0008 | 2.6039 | 830 | 2.1136 |
| 1.922 | 2.6353 | 840 | 2.1160 |
| 1.9567 | 2.6667 | 850 | 2.0991 |
| 2.2798 | 2.6980 | 860 | 2.0998 |
| 2.4014 | 2.7294 | 870 | 2.0922 |
| 2.3427 | 2.7608 | 880 | 2.0976 |
| 2.2701 | 2.7922 | 890 | 2.0823 |
| 2.1405 | 2.8235 | 900 | 2.1009 |
| 1.9259 | 2.8549 | 910 | 2.1075 |
| 2.0055 | 2.8863 | 920 | 2.1041 |
| 1.9902 | 2.9176 | 930 | 2.0854 |
| 1.9821 | 2.9490 | 940 | 2.1107 |
| 2.0292 | 2.9804 | 950 | 2.0901 |
| 1.9811 | 3.0118 | 960 | 2.1227 |
| 2.2674 | 3.0431 | 970 | 2.0934 |
| 2.0632 | 3.0745 | 980 | 2.0935 |
| 2.1232 | 3.1059 | 990 | 2.0843 |
| 2.0056 | 3.1373 | 1000 | 2.0891 |
| 2.0188 | 3.1686 | 1010 | 2.0811 |
| 2.0898 | 3.2 | 1020 | 2.0848 |
| 2.1809 | 3.2314 | 1030 | 2.0883 |
| 2.1636 | 3.2627 | 1040 | 2.0931 |
| 1.9941 | 3.2941 | 1050 | 2.0894 |
| 1.9761 | 3.3255 | 1060 | 2.0957 |
| 1.9908 | 3.3569 | 1070 | 2.0715 |
| 2.0806 | 3.3882 | 1080 | 2.0774 |
| 1.9419 | 3.4196 | 1090 | 2.0713 |
| 1.8643 | 3.4510 | 1100 | 2.0654 |
| 1.969 | 3.4824 | 1110 | 2.0636 |
| 2.0104 | 3.5137 | 1120 | 2.0710 |
| 1.6745 | 3.5451 | 1130 | 2.0551 |
| 2.047 | 3.5765 | 1140 | 2.0598 |
| 2.1289 | 3.6078 | 1150 | 2.0426 |
| 2.1158 | 3.6392 | 1160 | 2.0525 |
| 1.8543 | 3.6706 | 1170 | 2.0515 |
| 2.0206 | 3.7020 | 1180 | 2.0508 |
| 2.1992 | 3.7333 | 1190 | 2.0485 |
| 1.6875 | 3.7647 | 1200 | 2.0558 |
| 1.8452 | 3.7961 | 1210 | 2.0543 |
| 2.2061 | 3.8275 | 1220 | 2.0594 |
| 2.0418 | 3.8588 | 1230 | 2.0652 |
| 2.0411 | 3.8902 | 1240 | 2.0679 |
| 2.0835 | 3.9216 | 1250 | 2.0731 |
| 1.9003 | 3.9529 | 1260 | 2.0574 |
| 1.7881 | 3.9843 | 1270 | 2.0777 |
| 2.1354 | 4.0157 | 1280 | 2.0630 |
| 1.8935 | 4.0471 | 1290 | 2.0607 |
| 2.1067 | 4.0784 | 1300 | 2.0576 |
| 1.8225 | 4.1098 | 1310 | 2.0767 |
| 1.8132 | 4.1412 | 1320 | 2.0507 |
| 1.985 | 4.1725 | 1330 | 2.0669 |
| 2.112 | 4.2039 | 1340 | 2.0836 |
| 1.7993 | 4.2353 | 1350 | 2.0718 |
| 1.9784 | 4.2667 | 1360 | 2.0676 |
| 2.1628 | 4.2980 | 1370 | 2.0525 |
| 1.876 | 4.3294 | 1380 | 2.0615 |
| 2.0081 | 4.3608 | 1390 | 2.0736 |
| 1.8642 | 4.3922 | 1400 | 2.0565 |
| 1.9308 | 4.4235 | 1410 | 2.0608 |
| 2.2296 | 4.4549 | 1420 | 2.0553 |
| 2.0166 | 4.4863 | 1430 | 2.0575 |
| 2.0422 | 4.5176 | 1440 | 2.0543 |
| 1.8729 | 4.5490 | 1450 | 2.0552 |
| 2.0323 | 4.5804 | 1460 | 2.0656 |
| 1.9935 | 4.6118 | 1470 | 2.0794 |
| 1.8534 | 4.6431 | 1480 | 2.0685 |
| 1.8363 | 4.6745 | 1490 | 2.0581 |
| 1.9679 | 4.7059 | 1500 | 2.0353 |
| 1.8585 | 4.7373 | 1510 | 2.0334 |
| 1.9772 | 4.7686 | 1520 | 2.0420 |
| 1.8753 | 4.8 | 1530 | 2.0427 |
| 1.8911 | 4.8314 | 1540 | 2.0499 |
| 2.0614 | 4.8627 | 1550 | 2.0481 |
| 2.1184 | 4.8941 | 1560 | 2.0481 |
| 1.9504 | 4.9255 | 1570 | 2.0541 |
| 2.1337 | 4.9569 | 1580 | 2.0480 |
| 2.4391 | 4.9882 | 1590 | 2.0416 |
| 1.72 | 5.0196 | 1600 | 2.0412 |
| 2.0808 | 5.0510 | 1610 | 2.0458 |
| 1.8639 | 5.0824 | 1620 | 2.0438 |
| 1.9462 | 5.1137 | 1630 | 2.0428 |
| 2.0055 | 5.1451 | 1640 | 2.0366 |
| 2.0345 | 5.1765 | 1650 | 2.0644 |
| 1.9321 | 5.2078 | 1660 | 2.0454 |
| 1.8705 | 5.2392 | 1670 | 2.0394 |
| 2.0345 | 5.2706 | 1680 | 2.0475 |
| 1.9992 | 5.3020 | 1690 | 2.0567 |
| 2.2208 | 5.3333 | 1700 | 2.0558 |
| 1.8253 | 5.3647 | 1710 | 2.0413 |
| 2.0765 | 5.3961 | 1720 | 2.0319 |
| 2.2315 | 5.4275 | 1730 | 2.0360 |
| 2.2432 | 5.4588 | 1740 | 2.0436 |
| 2.0666 | 5.4902 | 1750 | 2.0451 |
| 2.0603 | 5.5216 | 1760 | 2.0296 |
| 1.6625 | 5.5529 | 1770 | 2.0513 |
| 2.0946 | 5.5843 | 1780 | 2.0306 |
| 1.9464 | 5.6157 | 1790 | 2.0315 |
| 2.0183 | 5.6471 | 1800 | 2.0276 |
| 2.0794 | 5.6784 | 1810 | 2.0512 |
| 2.0289 | 5.7098 | 1820 | 2.0369 |
| 2.1014 | 5.7412 | 1830 | 2.0520 |
| 1.9159 | 5.7725 | 1840 | 2.0491 |
| 2.2446 | 5.8039 | 1850 | 2.0508 |
| 1.9383 | 5.8353 | 1860 | 2.0327 |
| 2.0132 | 5.8667 | 1870 | 2.0161 |
| 2.2234 | 5.8980 | 1880 | 2.0406 |
| 2.2556 | 5.9294 | 1890 | 2.0365 |
| 2.2061 | 5.9608 | 1900 | 2.0314 |
| 1.7465 | 5.9922 | 1910 | 2.0543 |
| 1.9388 | 6.0235 | 1920 | 2.0525 |
| 1.9223 | 6.0549 | 1930 | 2.0325 |
| 1.9386 | 6.0863 | 1940 | 2.0282 |
| 1.9171 | 6.1176 | 1950 | 2.0462 |
| 1.9319 | 6.1490 | 1960 | 2.0369 |
| 1.7689 | 6.1804 | 1970 | 2.0364 |
| 2.0063 | 6.2118 | 1980 | 2.0388 |
| 2.1053 | 6.2431 | 1990 | 2.0346 |
| 2.1074 | 6.2745 | 2000 | 2.0368 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.19.2
- Tokenizers 0.19.1
| [
"not text",
"text"
] |
schoonhovenra/20240530 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240530
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:------:|:---------------:|
| 1.8627 | 18.35 | 4000 | 1.5604 |
| 1.4599 | 36.7 | 8000 | 1.1805 |
| 1.2256 | 55.05 | 12000 | 0.9678 |
| 1.1121 | 73.39 | 16000 | 0.8867 |
| 1.0312 | 91.74 | 20000 | 0.8539 |
| 1.016 | 110.09 | 24000 | 0.8169 |
| 0.9564 | 128.44 | 28000 | 0.8027 |
| 0.9438 | 146.79 | 32000 | 0.7773 |
| 0.9099 | 165.14 | 36000 | 0.7705 |
| 0.8781 | 183.49 | 40000 | 0.7570 |
| 0.8743 | 201.83 | 44000 | 0.7558 |
| 0.8581 | 220.18 | 48000 | 0.7424 |
| 0.8447 | 238.53 | 52000 | 0.7356 |
| 0.8207 | 256.88 | 56000 | 0.7324 |
| 0.8018 | 275.23 | 60000 | 0.7266 |
| 0.793 | 293.58 | 64000 | 0.7279 |
| 0.7987 | 311.93 | 68000 | 0.7250 |
| 0.7643 | 330.28 | 72000 | 0.7245 |
| 0.7673 | 348.62 | 76000 | 0.7297 |
| 0.7509 | 366.97 | 80000 | 0.7169 |
| 0.758 | 385.32 | 84000 | 0.7202 |
| 0.7355 | 403.67 | 88000 | 0.7180 |
| 0.738 | 422.02 | 92000 | 0.7202 |
| 0.7296 | 440.37 | 96000 | 0.7229 |
| 0.7107 | 458.72 | 100000 | 0.7164 |
| 0.6961 | 477.06 | 104000 | 0.7161 |
| 0.7096 | 495.41 | 108000 | 0.7156 |
| 0.6837 | 513.76 | 112000 | 0.7145 |
| 0.7034 | 532.11 | 116000 | 0.7147 |
| 0.6868 | 550.46 | 120000 | 0.7201 |
| 0.6814 | 568.81 | 124000 | 0.7164 |
| 0.6896 | 587.16 | 128000 | 0.7167 |
| 0.6809 | 605.5 | 132000 | 0.7149 |
| 0.6583 | 623.85 | 136000 | 0.7196 |
| 0.6696 | 642.2 | 140000 | 0.7185 |
| 0.6704 | 660.55 | 144000 | 0.7156 |
| 0.6761 | 678.9 | 148000 | 0.7235 |
| 0.6577 | 697.25 | 152000 | 0.7207 |
| 0.6649 | 715.6 | 156000 | 0.7211 |
| 0.6589 | 733.94 | 160000 | 0.7203 |
| 0.6461 | 752.29 | 164000 | 0.7190 |
| 0.6406 | 770.64 | 168000 | 0.7213 |
| 0.638 | 788.99 | 172000 | 0.7191 |
| 0.6523 | 807.34 | 176000 | 0.7232 |
| 0.6336 | 825.69 | 180000 | 0.7177 |
| 0.6382 | 844.04 | 184000 | 0.7199 |
| 0.6394 | 862.39 | 188000 | 0.7241 |
| 0.6406 | 880.73 | 192000 | 0.7239 |
| 0.6366 | 899.08 | 196000 | 0.7226 |
| 0.65 | 917.43 | 200000 | 0.7198 |
| 0.6382 | 935.78 | 204000 | 0.7198 |
| 0.6257 | 954.13 | 208000 | 0.7241 |
| 0.6242 | 972.48 | 212000 | 0.7211 |
| 0.6405 | 990.83 | 216000 | 0.7211 |
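A quick consistency check on the epoch/step columns above: every logged row implies roughly the same number of optimizer steps per epoch (e.g. step 4000 at epoch 18.35). A minimal sketch using a few (step, epoch) pairs copied from the table:

```python
# (step, epoch) pairs copied from the table above; each ratio should
# give the same number of optimizer steps per epoch.
pairs = [(4000, 18.35), (8000, 36.7), (100000, 458.72), (216000, 990.83)]

steps_per_epoch = [round(step / epoch) for step, epoch in pairs]
print(steps_per_epoch)  # -> [218, 218, 218, 218]
```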
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.12.0
- Tokenizers 0.15.1
| [
"antenna",
"brand_badge",
"filler_cap",
"head_light_left",
"head_light_right",
"license_plate",
"license_plate_holder",
"mirror_left",
"mirror_right",
"rear_light_left",
"rear_light_right",
"roof_rack",
"wheel_front_left",
"wheel_front_right",
"wheel_rear_left",
"wheel_rear_right"
] |
ChiJuiChen/Lab8_DETR_BOAT_Aug_trainlong |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Lab8_DETR_BOAT_Aug_trainlong
This model is a fine-tuned version of [ChiJuiChen/Lab8_DETR_BOAT](https://huggingface.co/ChiJuiChen/Lab8_DETR_BOAT) on the boat_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
vemolka/detr-resnet-50_dogs |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_dogs
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"door",
"building"
] |
KIRANKALLA/detr_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_cppe5
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2111
- Map: 0.321
- Map 50: 0.5918
- Map 75: 0.3184
- Map Small: 0.0907
- Map Medium: 0.2843
- Map Large: 0.5134
- Mar 1: 0.3198
- Mar 10: 0.4825
- Mar 100: 0.5039
- Mar Small: 0.2107
- Mar Medium: 0.4607
- Mar Large: 0.7011
- Map Coverall: 0.6194
- Mar 100 Coverall: 0.7227
- Map Face Shield: 0.3136
- Mar 100 Face Shield: 0.5069
- Map Gloves: 0.2044
- Mar 100 Gloves: 0.3771
- Map Goggles: 0.1251
- Mar 100 Goggles: 0.4545
- Map Mask: 0.3424
- Mar 100 Mask: 0.4585
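The "Map 50" and "Map 75" figures above are mean average precision evaluated at IoU thresholds of 0.50 and 0.75. For reference, a minimal, framework-free IoU helper for boxes in (x_min, y_min, x_max, y_max) form:

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix_min, iy_min = max(a[0], b[0]), max(a[1], b[1])
    ix_max, iy_max = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two unit squares offset by half a side overlap with IoU 1/7:
print(iou((0, 0, 1, 1), (0.5, 0.5, 1.5, 1.5)))
```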
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
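This run uses a cosine learning-rate schedule. A minimal sketch of that decay, assuming zero warmup steps (transformers' "cosine" scheduler supports warmup, so treat this as an approximation); 3210 total steps matches 30 epochs at 107 steps per epoch, as in the results table below:

```python
import math

def cosine_lr(step, total_steps, base_lr=5e-5):
    """Cosine decay from base_lr down to 0 over total_steps.
    Assumes zero warmup steps (an approximation of transformers'
    "cosine" scheduler)."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

total_steps = 30 * 107  # num_epochs * steps per epoch (3210, per the table)
print(cosine_lr(0, total_steps))            # -> 5e-05, the configured peak
print(cosine_lr(total_steps, total_steps))  # ~0 at the end of training
```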
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 107 | 2.3407 | 0.0426 | 0.0978 | 0.0326 | 0.0074 | 0.0354 | 0.0395 | 0.0973 | 0.2336 | 0.2754 | 0.0681 | 0.227 | 0.2954 | 0.1332 | 0.4977 | 0.0115 | 0.2514 | 0.0099 | 0.2297 | 0.0067 | 0.0523 | 0.0518 | 0.3462 |
| No log | 2.0 | 214 | 2.0573 | 0.0833 | 0.1634 | 0.0709 | 0.0181 | 0.0454 | 0.0882 | 0.1315 | 0.268 | 0.3119 | 0.1127 | 0.2675 | 0.3681 | 0.3125 | 0.639 | 0.0058 | 0.1472 | 0.0167 | 0.2656 | 0.0176 | 0.15 | 0.0637 | 0.3579 |
| No log | 3.0 | 321 | 1.8604 | 0.1146 | 0.2275 | 0.0986 | 0.0195 | 0.0708 | 0.1337 | 0.1584 | 0.3363 | 0.3769 | 0.1369 | 0.2901 | 0.4995 | 0.3902 | 0.6802 | 0.0154 | 0.2514 | 0.0284 | 0.2891 | 0.0073 | 0.2545 | 0.1319 | 0.4092 |
| No log | 4.0 | 428 | 1.7289 | 0.1378 | 0.2881 | 0.1121 | 0.0253 | 0.0842 | 0.1814 | 0.1651 | 0.3436 | 0.3859 | 0.1215 | 0.3017 | 0.5579 | 0.4459 | 0.6605 | 0.0237 | 0.3333 | 0.0459 | 0.2958 | 0.0152 | 0.2545 | 0.1581 | 0.3851 |
| 3.3191 | 5.0 | 535 | 1.6097 | 0.1723 | 0.3541 | 0.1494 | 0.0463 | 0.1274 | 0.2006 | 0.1869 | 0.3909 | 0.4249 | 0.1537 | 0.37 | 0.5833 | 0.5099 | 0.7023 | 0.0445 | 0.3319 | 0.064 | 0.3187 | 0.053 | 0.325 | 0.19 | 0.4467 |
| 3.3191 | 6.0 | 642 | 1.6111 | 0.1731 | 0.3782 | 0.1364 | 0.048 | 0.1325 | 0.2384 | 0.209 | 0.3909 | 0.4184 | 0.1485 | 0.3428 | 0.6139 | 0.4833 | 0.6465 | 0.0555 | 0.375 | 0.0952 | 0.3359 | 0.0366 | 0.2977 | 0.1948 | 0.4369 |
| 3.3191 | 7.0 | 749 | 1.4938 | 0.2079 | 0.4225 | 0.1832 | 0.0505 | 0.1639 | 0.2836 | 0.2305 | 0.4168 | 0.4429 | 0.1665 | 0.3838 | 0.6407 | 0.5325 | 0.711 | 0.0848 | 0.4 | 0.1284 | 0.3443 | 0.0486 | 0.3045 | 0.2453 | 0.4549 |
| 3.3191 | 8.0 | 856 | 1.4541 | 0.2245 | 0.4604 | 0.1805 | 0.06 | 0.1831 | 0.3044 | 0.2417 | 0.4187 | 0.438 | 0.1518 | 0.3741 | 0.6534 | 0.5497 | 0.711 | 0.0936 | 0.3889 | 0.1453 | 0.349 | 0.0642 | 0.3068 | 0.2697 | 0.4344 |
| 3.3191 | 9.0 | 963 | 1.4065 | 0.2315 | 0.4382 | 0.2109 | 0.0486 | 0.2036 | 0.3422 | 0.2501 | 0.435 | 0.4574 | 0.1815 | 0.4242 | 0.6164 | 0.5825 | 0.7163 | 0.1096 | 0.3889 | 0.1283 | 0.3562 | 0.0722 | 0.3795 | 0.2648 | 0.4462 |
| 1.4973 | 10.0 | 1070 | 1.3990 | 0.2593 | 0.5115 | 0.2569 | 0.0656 | 0.2132 | 0.3844 | 0.2975 | 0.4597 | 0.486 | 0.1816 | 0.4145 | 0.7054 | 0.5545 | 0.6971 | 0.1929 | 0.4514 | 0.1595 | 0.3797 | 0.0941 | 0.4364 | 0.2955 | 0.4656 |
| 1.4973 | 11.0 | 1177 | 1.3687 | 0.2472 | 0.4996 | 0.2141 | 0.0636 | 0.2021 | 0.395 | 0.2777 | 0.4438 | 0.4769 | 0.1755 | 0.4489 | 0.6849 | 0.5717 | 0.718 | 0.1489 | 0.4458 | 0.1599 | 0.3698 | 0.0891 | 0.4159 | 0.2662 | 0.4349 |
| 1.4973 | 12.0 | 1284 | 1.3334 | 0.2544 | 0.5158 | 0.2178 | 0.0721 | 0.212 | 0.4027 | 0.2841 | 0.4611 | 0.48 | 0.1934 | 0.4503 | 0.6624 | 0.5714 | 0.6977 | 0.1812 | 0.4556 | 0.1648 | 0.3745 | 0.0668 | 0.4227 | 0.2879 | 0.4497 |
| 1.4973 | 13.0 | 1391 | 1.3249 | 0.2501 | 0.4985 | 0.2084 | 0.0496 | 0.2238 | 0.4004 | 0.2832 | 0.4529 | 0.485 | 0.1911 | 0.4504 | 0.6732 | 0.5983 | 0.7203 | 0.1515 | 0.4667 | 0.156 | 0.3651 | 0.0548 | 0.4114 | 0.2899 | 0.4615 |
| 1.4973 | 14.0 | 1498 | 1.2735 | 0.2822 | 0.5366 | 0.2586 | 0.0787 | 0.2395 | 0.4652 | 0.313 | 0.4716 | 0.4955 | 0.1955 | 0.4434 | 0.7144 | 0.6008 | 0.7076 | 0.2116 | 0.4708 | 0.1843 | 0.3745 | 0.09 | 0.4614 | 0.3242 | 0.4631 |
| 1.2609 | 15.0 | 1605 | 1.2748 | 0.286 | 0.5519 | 0.2749 | 0.0813 | 0.2349 | 0.469 | 0.3004 | 0.4682 | 0.4893 | 0.181 | 0.4281 | 0.7042 | 0.5954 | 0.707 | 0.1981 | 0.5167 | 0.1918 | 0.374 | 0.1184 | 0.4159 | 0.3265 | 0.4328 |
| 1.2609 | 16.0 | 1712 | 1.2785 | 0.2918 | 0.565 | 0.2654 | 0.0996 | 0.2501 | 0.453 | 0.3002 | 0.4744 | 0.4916 | 0.2085 | 0.4362 | 0.6911 | 0.5878 | 0.6948 | 0.2662 | 0.5097 | 0.1865 | 0.3583 | 0.1005 | 0.4477 | 0.3181 | 0.4477 |
| 1.2609 | 17.0 | 1819 | 1.2680 | 0.2985 | 0.5593 | 0.2776 | 0.0911 | 0.2461 | 0.4957 | 0.31 | 0.4819 | 0.5055 | 0.2127 | 0.4349 | 0.7159 | 0.5999 | 0.7081 | 0.2304 | 0.4917 | 0.1759 | 0.3677 | 0.1424 | 0.4864 | 0.3437 | 0.4738 |
| 1.2609 | 18.0 | 1926 | 1.2360 | 0.3045 | 0.5657 | 0.2744 | 0.0839 | 0.2549 | 0.5003 | 0.3054 | 0.4863 | 0.5062 | 0.2052 | 0.454 | 0.7213 | 0.6091 | 0.7209 | 0.2405 | 0.5111 | 0.1883 | 0.3693 | 0.1392 | 0.475 | 0.3452 | 0.4549 |
| 1.1171 | 19.0 | 2033 | 1.2404 | 0.3022 | 0.5737 | 0.2743 | 0.0765 | 0.259 | 0.4799 | 0.3044 | 0.4796 | 0.5043 | 0.209 | 0.4614 | 0.6987 | 0.603 | 0.714 | 0.2509 | 0.4986 | 0.1882 | 0.3729 | 0.1231 | 0.4636 | 0.3459 | 0.4723 |
| 1.1171 | 20.0 | 2140 | 1.2419 | 0.2969 | 0.5527 | 0.2677 | 0.081 | 0.256 | 0.4772 | 0.3108 | 0.4842 | 0.5123 | 0.2301 | 0.459 | 0.7114 | 0.6033 | 0.7262 | 0.2605 | 0.4861 | 0.176 | 0.3797 | 0.1112 | 0.5159 | 0.3336 | 0.4538 |
| 1.1171 | 21.0 | 2247 | 1.2257 | 0.3178 | 0.5774 | 0.3081 | 0.0885 | 0.2836 | 0.4843 | 0.3157 | 0.4932 | 0.5152 | 0.2269 | 0.4768 | 0.7 | 0.6221 | 0.7326 | 0.3063 | 0.5125 | 0.1964 | 0.3849 | 0.1316 | 0.4909 | 0.3329 | 0.4554 |
| 1.1171 | 22.0 | 2354 | 1.2236 | 0.3217 | 0.588 | 0.3084 | 0.0973 | 0.2854 | 0.4703 | 0.3136 | 0.4828 | 0.5101 | 0.2142 | 0.4775 | 0.7127 | 0.6213 | 0.7285 | 0.3186 | 0.5319 | 0.2022 | 0.3854 | 0.1269 | 0.45 | 0.3396 | 0.4549 |
| 1.1171 | 23.0 | 2461 | 1.2148 | 0.3234 | 0.587 | 0.3118 | 0.0949 | 0.2848 | 0.5036 | 0.3235 | 0.4904 | 0.5139 | 0.2189 | 0.4787 | 0.7177 | 0.6277 | 0.7279 | 0.3078 | 0.5208 | 0.2024 | 0.3865 | 0.14 | 0.4795 | 0.3392 | 0.4549 |
| 1.0261 | 24.0 | 2568 | 1.2210 | 0.3128 | 0.5754 | 0.2984 | 0.0833 | 0.2851 | 0.4986 | 0.3248 | 0.484 | 0.5096 | 0.2128 | 0.4715 | 0.7037 | 0.6269 | 0.7297 | 0.2899 | 0.4986 | 0.1942 | 0.3807 | 0.1136 | 0.4818 | 0.3394 | 0.4574 |
| 1.0261 | 25.0 | 2675 | 1.2124 | 0.3237 | 0.5872 | 0.3141 | 0.0916 | 0.2888 | 0.5151 | 0.3203 | 0.4906 | 0.5153 | 0.2202 | 0.4732 | 0.7165 | 0.6276 | 0.7285 | 0.3158 | 0.5222 | 0.2006 | 0.3854 | 0.1314 | 0.4795 | 0.3431 | 0.461 |
| 1.0261 | 26.0 | 2782 | 1.2167 | 0.3207 | 0.5852 | 0.313 | 0.0903 | 0.2815 | 0.5109 | 0.3216 | 0.4863 | 0.5081 | 0.213 | 0.4583 | 0.7085 | 0.6185 | 0.7221 | 0.3061 | 0.5 | 0.2048 | 0.3828 | 0.1317 | 0.4773 | 0.3423 | 0.4585 |
| 1.0261 | 27.0 | 2889 | 1.2110 | 0.3192 | 0.5864 | 0.3102 | 0.0881 | 0.2833 | 0.5117 | 0.3208 | 0.485 | 0.5074 | 0.2112 | 0.464 | 0.7048 | 0.62 | 0.7215 | 0.3094 | 0.5028 | 0.2006 | 0.3807 | 0.1225 | 0.4705 | 0.3437 | 0.4615 |
| 1.0261 | 28.0 | 2996 | 1.2109 | 0.3207 | 0.5939 | 0.3087 | 0.089 | 0.2832 | 0.5126 | 0.3202 | 0.4819 | 0.5033 | 0.2093 | 0.4588 | 0.6984 | 0.6203 | 0.7233 | 0.3128 | 0.5042 | 0.2048 | 0.376 | 0.1256 | 0.4568 | 0.34 | 0.4564 |
| 0.97 | 29.0 | 3103 | 1.2112 | 0.321 | 0.5919 | 0.3185 | 0.0909 | 0.2843 | 0.5132 | 0.3205 | 0.4825 | 0.5039 | 0.211 | 0.4603 | 0.701 | 0.6191 | 0.7221 | 0.3136 | 0.5069 | 0.2034 | 0.3755 | 0.1267 | 0.4568 | 0.342 | 0.4579 |
| 0.97 | 30.0 | 3210 | 1.2111 | 0.321 | 0.5918 | 0.3184 | 0.0907 | 0.2843 | 0.5134 | 0.3198 | 0.4825 | 0.5039 | 0.2107 | 0.4607 | 0.7011 | 0.6194 | 0.7227 | 0.3136 | 0.5069 | 0.2044 | 0.3771 | 0.1251 | 0.4545 | 0.3424 | 0.4585 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
qubvel-hf/rtdetr-r50-cppe5-finetune-v3 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtdetr-r50-cppe5-finetune-v3
This model is a fine-tuned version of [PekingU/rtdetr_r50vd_coco_o365](https://huggingface.co/PekingU/rtdetr_r50vd_coco_o365) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9472
- Map: 0.3704
- Map 50: 0.5798
- Map 75: 0.3751
- Map Small: 0.1898
- Map Medium: 0.3794
- Map Large: 0.4798
- Mar 1: 0.3002
- Mar 10: 0.5262
- Mar 100: 0.6137
- Mar Small: 0.4445
- Mar Medium: 0.5564
- Mar Large: 0.8118
- Map Coverall: 0.4594
- Mar 100 Coverall: 0.6795
- Map Face Shield: 0.4864
- Mar 100 Face Shield: 0.6412
- Map Gloves: 0.3727
- Mar 100 Gloves: 0.6322
- Map Goggles: 0.1661
- Mar 100 Goggles: 0.4586
- Map Mask: 0.3674
- Mar 100 Mask: 0.6569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 107 | 113.2495 | 0.011 | 0.0228 | 0.0089 | 0.0002 | 0.0063 | 0.0248 | 0.0285 | 0.1032 | 0.1933 | 0.0311 | 0.1471 | 0.3682 | 0.053 | 0.5482 | 0.0002 | 0.1316 | 0.0001 | 0.0379 | 0.0 | 0.0169 | 0.0016 | 0.232 |
| No log | 2.0 | 214 | 18.6938 | 0.1584 | 0.2939 | 0.1442 | 0.0773 | 0.1147 | 0.2221 | 0.1645 | 0.3785 | 0.4702 | 0.303 | 0.4358 | 0.7243 | 0.4389 | 0.6694 | 0.0065 | 0.4266 | 0.0819 | 0.4272 | 0.0187 | 0.3585 | 0.2462 | 0.4693 |
| No log | 3.0 | 321 | 12.9839 | 0.226 | 0.3923 | 0.2176 | 0.1203 | 0.1804 | 0.4483 | 0.2371 | 0.427 | 0.5082 | 0.3272 | 0.4915 | 0.7484 | 0.4397 | 0.668 | 0.0681 | 0.4608 | 0.1557 | 0.4585 | 0.1696 | 0.4662 | 0.2969 | 0.4876 |
| No log | 4.0 | 428 | 12.4463 | 0.215 | 0.4188 | 0.1966 | 0.1012 | 0.1805 | 0.4172 | 0.2229 | 0.4095 | 0.4812 | 0.2833 | 0.4737 | 0.6995 | 0.3622 | 0.6716 | 0.1302 | 0.4975 | 0.1221 | 0.4205 | 0.1687 | 0.3754 | 0.2916 | 0.4409 |
| 83.7659 | 5.0 | 535 | 12.3391 | 0.2336 | 0.465 | 0.2029 | 0.1382 | 0.2226 | 0.5186 | 0.2355 | 0.4423 | 0.5212 | 0.3433 | 0.5183 | 0.7393 | 0.2065 | 0.6703 | 0.2304 | 0.5051 | 0.1915 | 0.45 | 0.2376 | 0.4785 | 0.3021 | 0.5022 |
| 83.7659 | 6.0 | 642 | 12.3727 | 0.2059 | 0.3923 | 0.1857 | 0.1211 | 0.2121 | 0.4664 | 0.2274 | 0.4292 | 0.5156 | 0.3612 | 0.504 | 0.7148 | 0.1794 | 0.6694 | 0.2126 | 0.5329 | 0.1643 | 0.4455 | 0.1688 | 0.42 | 0.3042 | 0.5102 |
| 83.7659 | 7.0 | 749 | 12.7240 | 0.2058 | 0.41 | 0.1759 | 0.1217 | 0.2158 | 0.4525 | 0.2295 | 0.4086 | 0.4943 | 0.3464 | 0.4782 | 0.7071 | 0.2036 | 0.6077 | 0.2208 | 0.5013 | 0.1332 | 0.4201 | 0.2176 | 0.4323 | 0.2539 | 0.5102 |
| 83.7659 | 8.0 | 856 | 13.0090 | 0.2015 | 0.3831 | 0.1849 | 0.1056 | 0.2061 | 0.4551 | 0.2227 | 0.4194 | 0.5332 | 0.3719 | 0.5245 | 0.7451 | 0.2041 | 0.6482 | 0.1889 | 0.5747 | 0.1627 | 0.4692 | 0.1792 | 0.4462 | 0.2727 | 0.5276 |
| 83.7659 | 9.0 | 963 | 12.9929 | 0.2136 | 0.4059 | 0.1897 | 0.1307 | 0.1946 | 0.4823 | 0.2372 | 0.4132 | 0.5178 | 0.3559 | 0.5187 | 0.7195 | 0.2503 | 0.6248 | 0.22 | 0.5177 | 0.2002 | 0.4799 | 0.1888 | 0.46 | 0.2088 | 0.5067 |
| 12.788 | 10.0 | 1070 | 13.2438 | 0.1824 | 0.3375 | 0.1626 | 0.1 | 0.1911 | 0.4538 | 0.2061 | 0.3935 | 0.5026 | 0.3561 | 0.5017 | 0.6963 | 0.176 | 0.5644 | 0.1629 | 0.5228 | 0.1649 | 0.4741 | 0.1767 | 0.4769 | 0.2312 | 0.4747 |
| 12.788 | 11.0 | 1177 | 13.0890 | 0.2074 | 0.3885 | 0.1917 | 0.0937 | 0.1761 | 0.4555 | 0.2372 | 0.4106 | 0.508 | 0.3414 | 0.4956 | 0.7394 | 0.353 | 0.6599 | 0.182 | 0.4924 | 0.1565 | 0.4924 | 0.1524 | 0.4092 | 0.1931 | 0.4862 |
| 12.788 | 12.0 | 1284 | 12.2541 | 0.2428 | 0.44 | 0.2248 | 0.1213 | 0.202 | 0.5041 | 0.2484 | 0.4125 | 0.4938 | 0.3264 | 0.4726 | 0.7146 | 0.4213 | 0.6428 | 0.2319 | 0.4861 | 0.1937 | 0.4844 | 0.1301 | 0.32 | 0.2371 | 0.536 |
| 12.788 | 13.0 | 1391 | 13.4175 | 0.1572 | 0.2948 | 0.1408 | 0.0731 | 0.1404 | 0.4109 | 0.208 | 0.3659 | 0.4667 | 0.3022 | 0.4576 | 0.6952 | 0.2418 | 0.6063 | 0.1828 | 0.4835 | 0.1402 | 0.4826 | 0.0575 | 0.2754 | 0.1637 | 0.4858 |
| 12.788 | 14.0 | 1498 | 13.0720 | 0.2018 | 0.371 | 0.1912 | 0.0984 | 0.1543 | 0.4649 | 0.2229 | 0.3936 | 0.4802 | 0.2958 | 0.4639 | 0.715 | 0.3978 | 0.6392 | 0.2001 | 0.4734 | 0.1828 | 0.4884 | 0.077 | 0.3015 | 0.1512 | 0.4987 |
| 10.7992 | 15.0 | 1605 | 13.1509 | 0.2093 | 0.3882 | 0.1946 | 0.1149 | 0.1733 | 0.4644 | 0.2398 | 0.4148 | 0.4899 | 0.3288 | 0.4689 | 0.696 | 0.3568 | 0.6099 | 0.2329 | 0.5025 | 0.1722 | 0.4902 | 0.1078 | 0.3415 | 0.1767 | 0.5053 |
| 10.7992 | 16.0 | 1712 | 13.5416 | 0.1865 | 0.3502 | 0.1691 | 0.0943 | 0.1519 | 0.4479 | 0.2219 | 0.3868 | 0.4732 | 0.3295 | 0.4422 | 0.6977 | 0.3045 | 0.5698 | 0.2009 | 0.4949 | 0.1642 | 0.4728 | 0.0954 | 0.3554 | 0.1676 | 0.4729 |
| 10.7992 | 17.0 | 1819 | 13.8027 | 0.1419 | 0.2558 | 0.134 | 0.0598 | 0.1091 | 0.3792 | 0.2014 | 0.3602 | 0.4583 | 0.2925 | 0.4245 | 0.6809 | 0.2747 | 0.6063 | 0.1398 | 0.4709 | 0.1221 | 0.4504 | 0.0503 | 0.2877 | 0.1223 | 0.476 |
| 10.7992 | 18.0 | 1926 | 13.1241 | 0.205 | 0.389 | 0.1896 | 0.1006 | 0.1856 | 0.4619 | 0.2273 | 0.3813 | 0.4531 | 0.2848 | 0.4267 | 0.6944 | 0.3898 | 0.5995 | 0.2296 | 0.4848 | 0.1864 | 0.442 | 0.0754 | 0.2892 | 0.1436 | 0.4498 |
| 9.7939 | 19.0 | 2033 | 13.4709 | 0.2089 | 0.3828 | 0.1894 | 0.1018 | 0.1797 | 0.4627 | 0.2357 | 0.4 | 0.4783 | 0.3083 | 0.4537 | 0.6973 | 0.4201 | 0.6414 | 0.2105 | 0.5101 | 0.1717 | 0.4598 | 0.115 | 0.3215 | 0.1273 | 0.4587 |
| 9.7939 | 20.0 | 2140 | 13.6381 | 0.1755 | 0.3379 | 0.1459 | 0.0909 | 0.1501 | 0.4149 | 0.2176 | 0.3734 | 0.441 | 0.2658 | 0.4223 | 0.6946 | 0.3267 | 0.5896 | 0.1912 | 0.4861 | 0.1899 | 0.4446 | 0.0553 | 0.24 | 0.1146 | 0.4449 |
| 9.7939 | 21.0 | 2247 | 13.6187 | 0.1785 | 0.3454 | 0.159 | 0.0906 | 0.158 | 0.41 | 0.2274 | 0.3754 | 0.4539 | 0.2919 | 0.4327 | 0.6825 | 0.322 | 0.582 | 0.1915 | 0.4861 | 0.1896 | 0.4558 | 0.0585 | 0.2877 | 0.1307 | 0.4578 |
| 9.7939 | 22.0 | 2354 | 13.6789 | 0.1736 | 0.3213 | 0.1547 | 0.0804 | 0.1572 | 0.3997 | 0.2114 | 0.3971 | 0.4766 | 0.3206 | 0.4618 | 0.7021 | 0.3437 | 0.5995 | 0.2031 | 0.5215 | 0.1468 | 0.4647 | 0.0694 | 0.3477 | 0.1053 | 0.4493 |
| 9.7939 | 23.0 | 2461 | 13.5973 | 0.1853 | 0.3484 | 0.163 | 0.0873 | 0.1661 | 0.4546 | 0.2201 | 0.3794 | 0.4493 | 0.2877 | 0.4317 | 0.6857 | 0.3301 | 0.5847 | 0.2031 | 0.4937 | 0.1784 | 0.4317 | 0.0745 | 0.2938 | 0.1405 | 0.4427 |
| 9.0938 | 24.0 | 2568 | 13.2147 | 0.2232 | 0.4117 | 0.2114 | 0.1183 | 0.1953 | 0.4989 | 0.2347 | 0.3911 | 0.4575 | 0.3023 | 0.4273 | 0.6881 | 0.3994 | 0.6032 | 0.204 | 0.4734 | 0.2114 | 0.4621 | 0.1332 | 0.3062 | 0.1683 | 0.4427 |
| 9.0938 | 25.0 | 2675 | 13.4741 | 0.183 | 0.3428 | 0.1701 | 0.0874 | 0.1512 | 0.4309 | 0.2035 | 0.3643 | 0.4371 | 0.2686 | 0.4134 | 0.6784 | 0.3489 | 0.5842 | 0.1923 | 0.4443 | 0.1939 | 0.4469 | 0.0618 | 0.2938 | 0.1183 | 0.4164 |
| 9.0938 | 26.0 | 2782 | 13.6609 | 0.1955 | 0.3825 | 0.1691 | 0.1002 | 0.1798 | 0.4468 | 0.2164 | 0.3665 | 0.4337 | 0.2749 | 0.4048 | 0.6729 | 0.3661 | 0.5761 | 0.2018 | 0.4392 | 0.1914 | 0.4393 | 0.098 | 0.2954 | 0.1204 | 0.4187 |
| 9.0938 | 27.0 | 2889 | 13.5787 | 0.2059 | 0.384 | 0.1917 | 0.1021 | 0.1888 | 0.4571 | 0.224 | 0.3871 | 0.4541 | 0.2992 | 0.4273 | 0.6842 | 0.3905 | 0.5806 | 0.2016 | 0.4835 | 0.1938 | 0.45 | 0.1011 | 0.3092 | 0.1423 | 0.4471 |
| 9.0938 | 28.0 | 2996 | 13.2892 | 0.2043 | 0.3784 | 0.192 | 0.099 | 0.1847 | 0.4577 | 0.2276 | 0.386 | 0.456 | 0.3049 | 0.4245 | 0.6931 | 0.3972 | 0.5991 | 0.1786 | 0.4797 | 0.2112 | 0.4567 | 0.0698 | 0.2938 | 0.1646 | 0.4507 |
| 8.5991 | 29.0 | 3103 | 13.5481 | 0.2103 | 0.3954 | 0.197 | 0.11 | 0.1973 | 0.4571 | 0.225 | 0.3865 | 0.4592 | 0.3234 | 0.4292 | 0.6822 | 0.3794 | 0.5748 | 0.202 | 0.4823 | 0.2086 | 0.4616 | 0.1024 | 0.3185 | 0.1591 | 0.4591 |
| 8.5991 | 30.0 | 3210 | 13.5758 | 0.2024 | 0.3781 | 0.186 | 0.1041 | 0.1897 | 0.4491 | 0.2214 | 0.3882 | 0.46 | 0.3184 | 0.4341 | 0.6808 | 0.3815 | 0.5779 | 0.1844 | 0.4734 | 0.2072 | 0.458 | 0.0877 | 0.3323 | 0.1511 | 0.4582 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
ldianwu/detr-finetuned-board-v1 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
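Pending the author's own snippet, a hedged sketch for trying the checkpoint is a `transformers` object-detection pipeline (the repo id `ldianwu/detr-finetuned-board-v1` comes from this card's name; note the 25 classes are currently exposed only as generic `label_0` … `label_24`):

```python
from transformers import pipeline


def build_detector(checkpoint: str = "ldianwu/detr-finetuned-board-v1"):
    """Return an object-detection pipeline for this checkpoint."""
    return pipeline("object-detection", model=checkpoint)


# Example usage (downloads the model on first call):
# detector = build_detector()
# detections = detector("board.jpg")  # list of {"score", "label", "box"} dicts
```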
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24"
] |
evanslur/detr-finetuned-trotoar-50-epoch |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3"
] |
LynnKukunda/detr_finetunned_ocular |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetunned_ocular
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the dsi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0598
- Map: 0.3166
- Map 50: 0.5255
- Map 75: 0.3725
- Map Small: 0.3115
- Map Medium: 0.6744
- Map Large: -1.0
- Mar 1: 0.1043
- Mar 10: 0.3801
- Mar 100: 0.4224
- Mar Small: 0.4186
- Mar Medium: 0.7234
- Mar Large: -1.0
- Map Falciparum Trophozoite: 0.0341
- Mar 100 Falciparum Trophozoite: 0.1663
- Map Wbc: 0.599
- Mar 100 Wbc: 0.6785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
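As a rough sketch, the hyperparameters above correspond to a `transformers.TrainingArguments` configuration along these lines (the `output_dir` is an assumption; the Adam betas/epsilon shown are the Trainer defaults the card reports):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list in this card; anything not listed there
# (e.g. output_dir) is an assumption for illustration only.
args = TrainingArguments(
    output_dir="detr_finetunned_ocular",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=30,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```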
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Falciparum Trophozoite | Mar 100 Falciparum Trophozoite | Map Wbc | Mar 100 Wbc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:--------------------------:|:------------------------------:|:-------:|:-----------:|
| No log | 1.0 | 86 | 1.1493 | 0.2788 | 0.5002 | 0.3024 | 0.274 | 0.6184 | -1.0 | 0.0918 | 0.3473 | 0.386 | 0.3823 | 0.6785 | -1.0 | 0.0196 | 0.1372 | 0.5381 | 0.6348 |
| No log | 2.0 | 172 | 1.1199 | 0.2924 | 0.5063 | 0.3371 | 0.2866 | 0.6545 | -1.0 | 0.0924 | 0.3509 | 0.3873 | 0.3805 | 0.729 | -1.0 | 0.0204 | 0.1264 | 0.5644 | 0.6483 |
| No log | 3.0 | 258 | 1.1616 | 0.2802 | 0.4941 | 0.3138 | 0.2746 | 0.6231 | -1.0 | 0.0891 | 0.3378 | 0.377 | 0.374 | 0.6598 | -1.0 | 0.0167 | 0.129 | 0.5438 | 0.6249 |
| No log | 4.0 | 344 | 1.1263 | 0.3014 | 0.517 | 0.3393 | 0.296 | 0.6609 | -1.0 | 0.0981 | 0.3588 | 0.3857 | 0.3806 | 0.7187 | -1.0 | 0.0258 | 0.1135 | 0.577 | 0.6579 |
| No log | 5.0 | 430 | 1.1219 | 0.2801 | 0.5117 | 0.297 | 0.2734 | 0.6555 | -1.0 | 0.0905 | 0.3458 | 0.3795 | 0.3737 | 0.7028 | -1.0 | 0.0218 | 0.1254 | 0.5385 | 0.6337 |
| 1.0831 | 6.0 | 516 | 1.1299 | 0.2646 | 0.485 | 0.2705 | 0.2581 | 0.6103 | -1.0 | 0.0885 | 0.3294 | 0.371 | 0.3636 | 0.7 | -1.0 | 0.0117 | 0.1288 | 0.5175 | 0.6131 |
| 1.0831 | 7.0 | 602 | 1.1003 | 0.2934 | 0.5064 | 0.3254 | 0.286 | 0.6706 | -1.0 | 0.0933 | 0.357 | 0.3962 | 0.3905 | 0.7206 | -1.0 | 0.0207 | 0.1397 | 0.5661 | 0.6528 |
| 1.0831 | 8.0 | 688 | 1.1063 | 0.2945 | 0.5028 | 0.3407 | 0.2871 | 0.6606 | -1.0 | 0.0946 | 0.3568 | 0.3975 | 0.3938 | 0.6925 | -1.0 | 0.0243 | 0.1454 | 0.5647 | 0.6495 |
| 1.0831 | 9.0 | 774 | 1.1364 | 0.2824 | 0.4979 | 0.3114 | 0.2774 | 0.622 | -1.0 | 0.0928 | 0.3445 | 0.3844 | 0.3818 | 0.671 | -1.0 | 0.017 | 0.1297 | 0.5479 | 0.6392 |
| 1.0831 | 10.0 | 860 | 1.0997 | 0.2904 | 0.501 | 0.3299 | 0.2841 | 0.6483 | -1.0 | 0.0908 | 0.3515 | 0.3917 | 0.387 | 0.7065 | -1.0 | 0.02 | 0.1329 | 0.5609 | 0.6505 |
| 1.0831 | 11.0 | 946 | 1.1198 | 0.2826 | 0.496 | 0.3186 | 0.277 | 0.6299 | -1.0 | 0.0915 | 0.3426 | 0.3822 | 0.3778 | 0.6832 | -1.0 | 0.0225 | 0.1342 | 0.5427 | 0.6303 |
| 1.0585 | 12.0 | 1032 | 1.0999 | 0.2921 | 0.5038 | 0.3196 | 0.2867 | 0.6334 | -1.0 | 0.0953 | 0.3556 | 0.3954 | 0.3916 | 0.6897 | -1.0 | 0.0244 | 0.1454 | 0.5599 | 0.6453 |
| 1.0585 | 13.0 | 1118 | 1.1097 | 0.2966 | 0.5183 | 0.3365 | 0.29 | 0.6667 | -1.0 | 0.0995 | 0.3549 | 0.3983 | 0.3923 | 0.7178 | -1.0 | 0.0297 | 0.1493 | 0.5636 | 0.6472 |
| 1.0585 | 14.0 | 1204 | 1.0932 | 0.2964 | 0.5113 | 0.335 | 0.2913 | 0.6494 | -1.0 | 0.0969 | 0.3556 | 0.396 | 0.391 | 0.7037 | -1.0 | 0.0279 | 0.1474 | 0.565 | 0.6447 |
| 1.0585 | 15.0 | 1290 | 1.0951 | 0.2958 | 0.5173 | 0.3287 | 0.2915 | 0.6321 | -1.0 | 0.0969 | 0.3596 | 0.4018 | 0.3962 | 0.7187 | -1.0 | 0.0321 | 0.1505 | 0.5595 | 0.653 |
| 1.0585 | 16.0 | 1376 | 1.1036 | 0.3048 | 0.5215 | 0.3482 | 0.2997 | 0.6588 | -1.0 | 0.102 | 0.3633 | 0.4029 | 0.3974 | 0.7234 | -1.0 | 0.0321 | 0.1481 | 0.5775 | 0.6578 |
| 1.0585 | 17.0 | 1462 | 1.0973 | 0.2997 | 0.5169 | 0.3437 | 0.2943 | 0.6445 | -1.0 | 0.1006 | 0.3589 | 0.3994 | 0.3963 | 0.6907 | -1.0 | 0.0306 | 0.1448 | 0.5688 | 0.654 |
| 0.968 | 18.0 | 1548 | 1.1322 | 0.3029 | 0.5193 | 0.3482 | 0.2973 | 0.6525 | -1.0 | 0.0983 | 0.3636 | 0.4015 | 0.3977 | 0.6991 | -1.0 | 0.032 | 0.1499 | 0.5738 | 0.6532 |
| 0.968 | 19.0 | 1634 | 1.0698 | 0.3049 | 0.5114 | 0.3471 | 0.2989 | 0.6757 | -1.0 | 0.0992 | 0.3665 | 0.4138 | 0.4084 | 0.729 | -1.0 | 0.0288 | 0.1634 | 0.581 | 0.6643 |
| 0.968 | 20.0 | 1720 | 1.0780 | 0.3093 | 0.516 | 0.3556 | 0.3036 | 0.6647 | -1.0 | 0.101 | 0.3694 | 0.4145 | 0.4096 | 0.7252 | -1.0 | 0.0299 | 0.1618 | 0.5887 | 0.6673 |
| 0.968 | 21.0 | 1806 | 1.0825 | 0.3044 | 0.522 | 0.3357 | 0.2981 | 0.6642 | -1.0 | 0.0982 | 0.3653 | 0.4071 | 0.4029 | 0.7075 | -1.0 | 0.0319 | 0.1564 | 0.5768 | 0.6578 |
| 0.968 | 22.0 | 1892 | 1.0660 | 0.3142 | 0.5195 | 0.3691 | 0.3096 | 0.6679 | -1.0 | 0.1028 | 0.3764 | 0.4195 | 0.4158 | 0.7187 | -1.0 | 0.0352 | 0.164 | 0.5933 | 0.675 |
| 0.968 | 23.0 | 1978 | 1.0604 | 0.3145 | 0.5256 | 0.3633 | 0.3093 | 0.674 | -1.0 | 0.1031 | 0.3774 | 0.4199 | 0.4152 | 0.729 | -1.0 | 0.0368 | 0.1669 | 0.5922 | 0.6729 |
| 0.9092 | 24.0 | 2064 | 1.0607 | 0.3168 | 0.5266 | 0.3768 | 0.3114 | 0.6848 | -1.0 | 0.1039 | 0.3785 | 0.4233 | 0.4186 | 0.7374 | -1.0 | 0.034 | 0.1654 | 0.5996 | 0.6812 |
| 0.9092 | 25.0 | 2150 | 1.0681 | 0.3163 | 0.5283 | 0.3656 | 0.3113 | 0.6751 | -1.0 | 0.1053 | 0.3769 | 0.4185 | 0.4148 | 0.7196 | -1.0 | 0.0352 | 0.1616 | 0.5975 | 0.6755 |
| 0.9092 | 26.0 | 2236 | 1.0641 | 0.3158 | 0.5239 | 0.3708 | 0.3106 | 0.6715 | -1.0 | 0.1045 | 0.378 | 0.4217 | 0.4181 | 0.7196 | -1.0 | 0.0339 | 0.1656 | 0.5977 | 0.6777 |
| 0.9092 | 27.0 | 2322 | 1.0644 | 0.3162 | 0.526 | 0.3721 | 0.311 | 0.6785 | -1.0 | 0.1035 | 0.3775 | 0.42 | 0.4164 | 0.7206 | -1.0 | 0.0336 | 0.1624 | 0.5988 | 0.6777 |
| 0.9092 | 28.0 | 2408 | 1.0606 | 0.3165 | 0.5241 | 0.374 | 0.3114 | 0.6784 | -1.0 | 0.1052 | 0.3794 | 0.4223 | 0.4184 | 0.7252 | -1.0 | 0.0343 | 0.1665 | 0.5988 | 0.6782 |
| 0.9092 | 29.0 | 2494 | 1.0600 | 0.3161 | 0.5249 | 0.3728 | 0.311 | 0.6744 | -1.0 | 0.1043 | 0.3795 | 0.4219 | 0.418 | 0.7234 | -1.0 | 0.0341 | 0.1661 | 0.5981 | 0.6777 |
| 0.8509 | 30.0 | 2580 | 1.0598 | 0.3166 | 0.5255 | 0.3725 | 0.3115 | 0.6744 | -1.0 | 0.1043 | 0.3801 | 0.4224 | 0.4186 | 0.7234 | -1.0 | 0.0341 | 0.1663 | 0.599 | 0.6785 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"falciparum_trophozoite",
"wbc"
] |
evanslur/detr-finetuned-trotoar-100-epoch |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3"
] |
ldianwu/detr-finetuned-board-50-v1 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24"
] |
evanslur/detr-finetuned-trotoar-100-epoch-resnet-101 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
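Until the author provides a snippet, one small, hedged starting point is inspecting the checkpoint's class names from its config (the repo id `evanslur/detr-finetuned-trotoar-100-epoch-resnet-101` comes from this card's name; the card currently exposes only generic `label_0` … `label_3` names):

```python
from transformers import AutoConfig


def class_names(
    checkpoint: str = "evanslur/detr-finetuned-trotoar-100-epoch-resnet-101",
):
    """Read the id -> label mapping from the checkpoint config, in id order."""
    config = AutoConfig.from_pretrained(checkpoint)
    return [config.id2label[i] for i in sorted(config.id2label)]
```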
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3"
] |
ldianwu/detr-finetuned-board-101-v1 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24"
] |
ldianwu/detr-finetuned-board-50-v2 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24"
] |
marthakk/detr_finetuned_oculardataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_oculardataset
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the dsi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0672
- Map: 0.3032
- Map 50: 0.4973
- Map 75: 0.3701
- Map Small: 0.2981
- Map Medium: 0.6746
- Map Large: -1.0
- Mar 1: 0.1
- Mar 10: 0.3678
- Mar 100: 0.4114
- Mar Small: 0.4054
- Mar Medium: 0.7421
- Mar Large: -1.0
- Map Falciparum Trophozoite: 0.0156
- Mar 100 Falciparum Trophozoite: 0.1511
- Map Wbc: 0.5908
- Mar 100 Wbc: 0.6716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
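The `cosine` scheduler above decays the learning rate from 5e-5 down to zero over the 30 epochs. A minimal pure-Python sketch of that decay (the absence of warmup is an assumption, not stated in the card; the step count comes from the results table below, which ends at step 2580):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 5e-5) -> float:
    """Cosine decay from base_lr at step 0 down to 0 at total_steps."""
    progress = step / total_steps
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 2580  # 30 epochs x 86 steps per epoch
print(cosine_lr(0, total))           # full rate (5e-5) at the start
print(cosine_lr(total // 2, total))  # half the rate midway (2.5e-5)
print(cosine_lr(total, total))       # decayed to 0.0 at the end
```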
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Falciparum Trophozoite | Mar 100 Falciparum Trophozoite | Map Wbc | Mar 100 Wbc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:--------------------------:|:------------------------------:|:-------:|:-----------:|
| No log | 1.0 | 86 | 1.6645 | 0.131 | 0.2562 | 0.1153 | 0.1289 | 0.3974 | -1.0 | 0.0647 | 0.2312 | 0.3164 | 0.314 | 0.6159 | -1.0 | 0.0004 | 0.0456 | 0.2616 | 0.5873 |
| No log | 2.0 | 172 | 1.4800 | 0.2028 | 0.4079 | 0.1766 | 0.1993 | 0.4876 | -1.0 | 0.0677 | 0.2725 | 0.3282 | 0.3251 | 0.628 | -1.0 | 0.0007 | 0.0648 | 0.405 | 0.5915 |
| No log | 3.0 | 258 | 1.3829 | 0.2264 | 0.4496 | 0.1936 | 0.2193 | 0.5542 | -1.0 | 0.0729 | 0.2807 | 0.3215 | 0.3168 | 0.629 | -1.0 | 0.0019 | 0.0706 | 0.451 | 0.5725 |
| No log | 4.0 | 344 | 1.3318 | 0.2089 | 0.4403 | 0.1427 | 0.2056 | 0.4726 | -1.0 | 0.0691 | 0.2751 | 0.3221 | 0.3116 | 0.6748 | -1.0 | 0.002 | 0.0941 | 0.4158 | 0.5502 |
| No log | 5.0 | 430 | 1.2739 | 0.2454 | 0.4562 | 0.2342 | 0.2354 | 0.614 | -1.0 | 0.0777 | 0.3046 | 0.3482 | 0.338 | 0.7262 | -1.0 | 0.002 | 0.0906 | 0.4888 | 0.6058 |
| 1.7665 | 6.0 | 516 | 1.2365 | 0.2599 | 0.4744 | 0.2599 | 0.2522 | 0.6258 | -1.0 | 0.0846 | 0.3217 | 0.361 | 0.354 | 0.7 | -1.0 | 0.005 | 0.1047 | 0.5149 | 0.6173 |
| 1.7665 | 7.0 | 602 | 1.2548 | 0.2488 | 0.4689 | 0.2302 | 0.2434 | 0.5622 | -1.0 | 0.0788 | 0.31 | 0.3519 | 0.3446 | 0.6888 | -1.0 | 0.0038 | 0.1012 | 0.4938 | 0.6026 |
| 1.7665 | 8.0 | 688 | 1.2031 | 0.2715 | 0.474 | 0.3074 | 0.2664 | 0.6153 | -1.0 | 0.0897 | 0.3309 | 0.3744 | 0.3723 | 0.657 | -1.0 | 0.0058 | 0.1164 | 0.5373 | 0.6325 |
| 1.7665 | 9.0 | 774 | 1.2492 | 0.2417 | 0.4715 | 0.2154 | 0.2349 | 0.5753 | -1.0 | 0.0789 | 0.3064 | 0.3503 | 0.342 | 0.686 | -1.0 | 0.0043 | 0.1129 | 0.4791 | 0.5877 |
| 1.7665 | 10.0 | 860 | 1.1861 | 0.2752 | 0.4772 | 0.2891 | 0.2683 | 0.6259 | -1.0 | 0.0872 | 0.3342 | 0.3823 | 0.379 | 0.6813 | -1.0 | 0.0061 | 0.1217 | 0.5443 | 0.6429 |
| 1.7665 | 11.0 | 946 | 1.1996 | 0.2607 | 0.4605 | 0.2779 | 0.2565 | 0.5972 | -1.0 | 0.085 | 0.326 | 0.3722 | 0.3669 | 0.6813 | -1.0 | 0.0041 | 0.1254 | 0.5173 | 0.6189 |
| 1.2663 | 12.0 | 1032 | 1.1664 | 0.2764 | 0.4753 | 0.3137 | 0.2718 | 0.6148 | -1.0 | 0.0892 | 0.333 | 0.3781 | 0.3741 | 0.685 | -1.0 | 0.0054 | 0.1188 | 0.5473 | 0.6375 |
| 1.2663 | 13.0 | 1118 | 1.1451 | 0.2804 | 0.4694 | 0.3212 | 0.2732 | 0.6595 | -1.0 | 0.092 | 0.3412 | 0.3852 | 0.3787 | 0.7187 | -1.0 | 0.0051 | 0.1282 | 0.5557 | 0.6421 |
| 1.2663 | 14.0 | 1204 | 1.1251 | 0.2889 | 0.4761 | 0.3401 | 0.2835 | 0.6619 | -1.0 | 0.0926 | 0.3496 | 0.3979 | 0.393 | 0.714 | -1.0 | 0.0091 | 0.1391 | 0.5687 | 0.6567 |
| 1.2663 | 15.0 | 1290 | 1.1493 | 0.2778 | 0.4695 | 0.3126 | 0.2706 | 0.6531 | -1.0 | 0.0911 | 0.3415 | 0.3881 | 0.3792 | 0.743 | -1.0 | 0.0054 | 0.1382 | 0.5502 | 0.6379 |
| 1.2663 | 16.0 | 1376 | 1.1125 | 0.2846 | 0.4799 | 0.3307 | 0.2804 | 0.6415 | -1.0 | 0.0926 | 0.3498 | 0.4005 | 0.3954 | 0.7159 | -1.0 | 0.0075 | 0.1452 | 0.5617 | 0.6558 |
| 1.2663 | 17.0 | 1462 | 1.1002 | 0.2909 | 0.4816 | 0.3471 | 0.2859 | 0.6545 | -1.0 | 0.0956 | 0.3554 | 0.4036 | 0.3969 | 0.7421 | -1.0 | 0.0077 | 0.145 | 0.5741 | 0.6622 |
| 1.1448 | 18.0 | 1548 | 1.1066 | 0.2853 | 0.484 | 0.3205 | 0.2796 | 0.6647 | -1.0 | 0.0918 | 0.3472 | 0.3944 | 0.3883 | 0.7196 | -1.0 | 0.0092 | 0.1415 | 0.5613 | 0.6474 |
| 1.1448 | 19.0 | 1634 | 1.0993 | 0.2933 | 0.4838 | 0.3441 | 0.2884 | 0.6683 | -1.0 | 0.0978 | 0.3581 | 0.401 | 0.3958 | 0.7252 | -1.0 | 0.0079 | 0.1374 | 0.5787 | 0.6645 |
| 1.1448 | 20.0 | 1720 | 1.0850 | 0.298 | 0.4855 | 0.3594 | 0.2923 | 0.6669 | -1.0 | 0.0963 | 0.3606 | 0.4011 | 0.3952 | 0.7374 | -1.0 | 0.0093 | 0.1348 | 0.5867 | 0.6675 |
| 1.1448 | 21.0 | 1806 | 1.0814 | 0.3006 | 0.4908 | 0.3618 | 0.2951 | 0.6868 | -1.0 | 0.0994 | 0.3628 | 0.4056 | 0.4001 | 0.7355 | -1.0 | 0.0117 | 0.1413 | 0.5896 | 0.67 |
| 1.1448 | 22.0 | 1892 | 1.0836 | 0.2975 | 0.495 | 0.3541 | 0.2924 | 0.6712 | -1.0 | 0.0989 | 0.3628 | 0.4084 | 0.4036 | 0.7196 | -1.0 | 0.0135 | 0.1534 | 0.5816 | 0.6633 |
| 1.1448 | 23.0 | 1978 | 1.0813 | 0.2996 | 0.4965 | 0.3567 | 0.2941 | 0.6792 | -1.0 | 0.0979 | 0.3625 | 0.408 | 0.402 | 0.7364 | -1.0 | 0.015 | 0.1505 | 0.5842 | 0.6655 |
| 1.0601 | 24.0 | 2064 | 1.0707 | 0.3048 | 0.4952 | 0.3624 | 0.2987 | 0.6876 | -1.0 | 0.0981 | 0.3659 | 0.4118 | 0.4054 | 0.7486 | -1.0 | 0.0144 | 0.1501 | 0.5951 | 0.6735 |
| 1.0601 | 25.0 | 2150 | 1.0736 | 0.2982 | 0.4935 | 0.3584 | 0.2931 | 0.6732 | -1.0 | 0.0992 | 0.3638 | 0.41 | 0.4053 | 0.7224 | -1.0 | 0.0126 | 0.1521 | 0.5839 | 0.6678 |
| 1.0601 | 26.0 | 2236 | 1.0717 | 0.3034 | 0.4978 | 0.3622 | 0.2986 | 0.6788 | -1.0 | 0.0995 | 0.3659 | 0.411 | 0.405 | 0.7421 | -1.0 | 0.015 | 0.1501 | 0.5918 | 0.6719 |
| 1.0601 | 27.0 | 2322 | 1.0688 | 0.3025 | 0.4978 | 0.3622 | 0.2975 | 0.6747 | -1.0 | 0.1 | 0.3674 | 0.4108 | 0.4047 | 0.7421 | -1.0 | 0.0161 | 0.1524 | 0.5888 | 0.6693 |
| 1.0601 | 28.0 | 2408 | 1.0679 | 0.3031 | 0.4968 | 0.3638 | 0.2976 | 0.6805 | -1.0 | 0.0999 | 0.3679 | 0.4106 | 0.4046 | 0.7421 | -1.0 | 0.0156 | 0.1507 | 0.5905 | 0.6705 |
| 1.0601 | 29.0 | 2494 | 1.0669 | 0.3035 | 0.4976 | 0.3717 | 0.2985 | 0.6751 | -1.0 | 0.0999 | 0.368 | 0.4115 | 0.4055 | 0.743 | -1.0 | 0.0156 | 0.1509 | 0.5915 | 0.6721 |
| 1.0103 | 30.0 | 2580 | 1.0672 | 0.3032 | 0.4973 | 0.3701 | 0.2981 | 0.6746 | -1.0 | 0.1 | 0.3678 | 0.4114 | 0.4054 | 0.7421 | -1.0 | 0.0156 | 0.1511 | 0.5908 | 0.6716 |
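The `Map 50` and `Map 75` columns above are average precision at IoU thresholds of 0.5 and 0.75: a detection counts as a true positive only if its box overlaps a ground-truth box by at least that fraction. A minimal IoU sketch for `[x1, y1, x2, y2]` boxes (the box format here is an assumption for illustration; the card does not state one):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction shifted by a quarter of the box width:
print(iou([0, 0, 100, 100], [25, 0, 125, 100]))  # 0.6 -> passes at IoU 0.5, fails at 0.75
```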
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"falciparum_trophozoite",
"wbc"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e6_dec1e5_bs4 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e6_dec1e5_bs4
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
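The `linear` scheduler listed above decays the learning rate from 1e-6 straight down to zero over the run. A minimal pure-Python sketch of that decay (zero warmup and the total step count are assumptions; the card does not report steps per epoch):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 1e-6) -> float:
    """Linear decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 100_000  # placeholder; actual total depends on dataset size x 1000 epochs
print(linear_lr(0, total))           # 1e-06 at the start
print(linear_lr(total // 2, total))  # 5e-07 midway
print(linear_lr(total, total))       # 0.0 at the end
```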
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e6_dec1e5_bs4 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e6_dec1e5_bs4
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
ldianwu/detr-finetuned-board-101-v2 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e5_dec1e4_bs8 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e5_dec1e4_bs8
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e6_dec1e5_bs4](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e6_dec1e5_bs4) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e5_dec1e4_bs8 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e5_dec1e4_bs8
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e6_dec1e5_bs4](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e6_dec1e5_bs4) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
LynnKukunda/detr_finetunned_air |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetunned_air
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the dsi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8865
- Map: 0.32
- Map 50: 0.778
- Map 75: 0.1885
- Map Small: 0.3219
- Map Medium: 0.0139
- Map Large: -1.0
- Mar 1: 0.0252
- Mar 10: 0.1994
- Mar 100: 0.487
- Mar Small: 0.4896
- Mar Medium: 0.0122
- Mar Large: -1.0
- Map Falciparum Trophozoite: 0.32
- Mar 100 Falciparum Trophozoite: 0.487
- Map Wbc: -1.0
- Mar 100 Wbc: -1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Falciparum Trophozoite | Mar 100 Falciparum Trophozoite | Map Wbc | Mar 100 Wbc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:--------------------------:|:------------------------------:|:-------:|:-----------:|
| No log | 1.0 | 209 | 1.2045 | 0.1719 | 0.4768 | 0.0896 | 0.1728 | 0.0167 | -1.0 | 0.0188 | 0.139 | 0.4221 | 0.4242 | 0.0184 | -1.0 | 0.1719 | 0.4221 | -1.0 | -1.0 |
| No log | 2.0 | 418 | 1.1177 | 0.2155 | 0.5922 | 0.1044 | 0.2165 | 0.005 | -1.0 | 0.021 | 0.1601 | 0.4249 | 0.4272 | 0.0041 | -1.0 | 0.2155 | 0.4249 | -1.0 | -1.0 |
| 1.339 | 3.0 | 627 | 1.0571 | 0.249 | 0.6576 | 0.1328 | 0.2503 | 0.0229 | -1.0 | 0.0236 | 0.1739 | 0.44 | 0.4422 | 0.0224 | -1.0 | 0.249 | 0.44 | -1.0 | -1.0 |
| 1.339 | 4.0 | 836 | 1.0675 | 0.2358 | 0.6473 | 0.1175 | 0.2369 | 0.0139 | -1.0 | 0.0216 | 0.166 | 0.4242 | 0.4265 | 0.0122 | -1.0 | 0.2358 | 0.4242 | -1.0 | -1.0 |
| 1.0423 | 5.0 | 1045 | 1.0266 | 0.2509 | 0.6759 | 0.1224 | 0.2525 | 0.0 | -1.0 | 0.0228 | 0.1711 | 0.4318 | 0.4341 | 0.0 | -1.0 | 0.2509 | 0.4318 | -1.0 | -1.0 |
| 1.0423 | 6.0 | 1254 | 1.0054 | 0.2569 | 0.6847 | 0.1269 | 0.2579 | 0.0091 | -1.0 | 0.0226 | 0.1754 | 0.4467 | 0.4491 | 0.0122 | -1.0 | 0.2569 | 0.4467 | -1.0 | -1.0 |
| 1.0423 | 7.0 | 1463 | 0.9812 | 0.2767 | 0.7163 | 0.1454 | 0.2782 | 0.0114 | -1.0 | 0.0238 | 0.1831 | 0.4539 | 0.4562 | 0.0224 | -1.0 | 0.2767 | 0.4539 | -1.0 | -1.0 |
| 0.99 | 8.0 | 1672 | 1.0019 | 0.271 | 0.7169 | 0.1358 | 0.2724 | 0.0096 | -1.0 | 0.0236 | 0.1801 | 0.4515 | 0.4538 | 0.0143 | -1.0 | 0.271 | 0.4515 | -1.0 | -1.0 |
| 0.99 | 9.0 | 1881 | 0.9623 | 0.2873 | 0.731 | 0.1597 | 0.2886 | 0.0064 | -1.0 | 0.0251 | 0.1865 | 0.4608 | 0.4633 | 0.0061 | -1.0 | 0.2873 | 0.4608 | -1.0 | -1.0 |
| 0.9521 | 10.0 | 2090 | 0.9763 | 0.273 | 0.711 | 0.1419 | 0.2742 | 0.011 | -1.0 | 0.0229 | 0.178 | 0.4482 | 0.4506 | 0.0122 | -1.0 | 0.273 | 0.4482 | -1.0 | -1.0 |
| 0.9521 | 11.0 | 2299 | 0.9551 | 0.2906 | 0.7354 | 0.1634 | 0.2925 | 0.0064 | -1.0 | 0.0248 | 0.188 | 0.4654 | 0.4679 | 0.0082 | -1.0 | 0.2906 | 0.4654 | -1.0 | -1.0 |
| 0.92 | 12.0 | 2508 | 0.9430 | 0.2956 | 0.7454 | 0.1685 | 0.297 | 0.0052 | -1.0 | 0.0248 | 0.1886 | 0.4696 | 0.4721 | 0.0061 | -1.0 | 0.2956 | 0.4696 | -1.0 | -1.0 |
| 0.92 | 13.0 | 2717 | 0.9434 | 0.2953 | 0.7445 | 0.1721 | 0.2968 | 0.0233 | -1.0 | 0.0245 | 0.1895 | 0.4673 | 0.4697 | 0.0265 | -1.0 | 0.2953 | 0.4673 | -1.0 | -1.0 |
| 0.92 | 14.0 | 2926 | 0.9228 | 0.3001 | 0.7498 | 0.1716 | 0.3015 | 0.0132 | -1.0 | 0.0244 | 0.1899 | 0.4767 | 0.4792 | 0.0143 | -1.0 | 0.3001 | 0.4767 | -1.0 | -1.0 |
| 0.8986 | 15.0 | 3135 | 0.9194 | 0.3036 | 0.7566 | 0.1757 | 0.3055 | 0.0119 | -1.0 | 0.0243 | 0.1929 | 0.4778 | 0.4803 | 0.0143 | -1.0 | 0.3036 | 0.4778 | -1.0 | -1.0 |
| 0.8986 | 16.0 | 3344 | 0.9166 | 0.3063 | 0.7558 | 0.1791 | 0.3081 | 0.0129 | -1.0 | 0.0253 | 0.1928 | 0.4762 | 0.4787 | 0.0122 | -1.0 | 0.3063 | 0.4762 | -1.0 | -1.0 |
| 0.875 | 17.0 | 3553 | 0.9218 | 0.3021 | 0.7573 | 0.1653 | 0.3041 | 0.0089 | -1.0 | 0.0251 | 0.1908 | 0.4721 | 0.4746 | 0.0082 | -1.0 | 0.3021 | 0.4721 | -1.0 | -1.0 |
| 0.875 | 18.0 | 3762 | 0.9094 | 0.3032 | 0.7548 | 0.1705 | 0.3052 | 0.0079 | -1.0 | 0.0242 | 0.1921 | 0.4769 | 0.4795 | 0.0061 | -1.0 | 0.3032 | 0.4769 | -1.0 | -1.0 |
| 0.875 | 19.0 | 3971 | 0.8965 | 0.3156 | 0.7713 | 0.1873 | 0.3171 | 0.0187 | -1.0 | 0.0247 | 0.1963 | 0.484 | 0.4865 | 0.0184 | -1.0 | 0.3156 | 0.484 | -1.0 | -1.0 |
| 0.8575 | 20.0 | 4180 | 0.8995 | 0.3101 | 0.7674 | 0.1854 | 0.3116 | 0.0069 | -1.0 | 0.0247 | 0.1964 | 0.4803 | 0.4829 | 0.0061 | -1.0 | 0.3101 | 0.4803 | -1.0 | -1.0 |
| 0.8575 | 21.0 | 4389 | 0.8992 | 0.3118 | 0.7676 | 0.1852 | 0.3131 | 0.0119 | -1.0 | 0.0252 | 0.1954 | 0.4794 | 0.4819 | 0.0102 | -1.0 | 0.3118 | 0.4794 | -1.0 | -1.0 |
| 0.834 | 22.0 | 4598 | 0.8912 | 0.3169 | 0.7744 | 0.1894 | 0.3186 | 0.0089 | -1.0 | 0.0254 | 0.1977 | 0.4876 | 0.4902 | 0.0082 | -1.0 | 0.3169 | 0.4876 | -1.0 | -1.0 |
| 0.834 | 23.0 | 4807 | 0.8922 | 0.3175 | 0.7761 | 0.1895 | 0.3195 | 0.0083 | -1.0 | 0.0255 | 0.1984 | 0.4881 | 0.4907 | 0.0102 | -1.0 | 0.3175 | 0.4881 | -1.0 | -1.0 |
| 0.8217 | 24.0 | 5016 | 0.8946 | 0.3153 | 0.7735 | 0.1809 | 0.3167 | 0.0119 | -1.0 | 0.0249 | 0.1972 | 0.4819 | 0.4844 | 0.0102 | -1.0 | 0.3153 | 0.4819 | -1.0 | -1.0 |
| 0.8217 | 25.0 | 5225 | 0.8891 | 0.3198 | 0.7801 | 0.1877 | 0.3213 | 0.0089 | -1.0 | 0.025 | 0.1981 | 0.4878 | 0.4903 | 0.0082 | -1.0 | 0.3198 | 0.4878 | -1.0 | -1.0 |
| 0.8217 | 26.0 | 5434 | 0.8867 | 0.3206 | 0.7794 | 0.1894 | 0.3223 | 0.0139 | -1.0 | 0.0254 | 0.1987 | 0.4875 | 0.4901 | 0.0122 | -1.0 | 0.3206 | 0.4875 | -1.0 | -1.0 |
| 0.8153 | 27.0 | 5643 | 0.8859 | 0.3207 | 0.7787 | 0.1897 | 0.3224 | 0.0139 | -1.0 | 0.0255 | 0.1991 | 0.4879 | 0.4905 | 0.0122 | -1.0 | 0.3207 | 0.4879 | -1.0 | -1.0 |
| 0.8153 | 28.0 | 5852 | 0.8862 | 0.3203 | 0.7785 | 0.1882 | 0.3222 | 0.0139 | -1.0 | 0.0255 | 0.1994 | 0.4867 | 0.4892 | 0.0122 | -1.0 | 0.3203 | 0.4867 | -1.0 | -1.0 |
| 0.8022 | 29.0 | 6061 | 0.8864 | 0.32 | 0.7776 | 0.1892 | 0.3219 | 0.0139 | -1.0 | 0.0253 | 0.1994 | 0.4871 | 0.4897 | 0.0122 | -1.0 | 0.32 | 0.4871 | -1.0 | -1.0 |
| 0.8022 | 30.0 | 6270 | 0.8865 | 0.32 | 0.778 | 0.1885 | 0.3219 | 0.0139 | -1.0 | 0.0252 | 0.1994 | 0.487 | 0.4896 | 0.0122 | -1.0 | 0.32 | 0.487 | -1.0 | -1.0 |
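The `Map 50` and `Map 75` columns above are COCO-style average precision at IoU thresholds of 0.5 and 0.75 respectively (`Map` averages thresholds from 0.5 to 0.95). Matching a predicted box to a ground-truth box hinges on intersection-over-union; a minimal sketch for axis-aligned boxes, assuming `(xmin, ymin, xmax, ymax)` coordinates:

```python
def box_iou(a, b):
    # a, b: (xmin, ymin, xmax, ymax)
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```

A `-1.0` in the table (e.g. `Map Large`) is the COCO convention for "no ground-truth objects in this size bucket", not a score.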
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"falciparum_trophozoite",
"wbc"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e5_dec1e4_bs8](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e5_dec1e4_bs8) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
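For contrast with the cosine variant used elsewhere in this collection, the linear schedule listed above decays the learning rate in a straight line from its initial value to zero over training. A hedged sketch (illustrative only; the Trainer's `get_linear_schedule_with_warmup` is the real code path):

```python
def linear_lr(step, total_steps, base_lr=5e-5, warmup_steps=0):
    # Linear warmup, then linear decay from base_lr to zero.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = (total_steps - step) / max(1, total_steps - warmup_steps)
    return base_lr * max(0.0, remaining)
```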
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
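The label set for this model (table, table row, table column, spanning cells, etc.) is usually post-processed by intersecting each detected row box with each detected column box to recover the cell grid — the standard Table Transformer recipe. A minimal sketch of that intersection, assuming `(xmin, ymin, xmax, ymax)` boxes (illustrative; not part of this checkpoint):

```python
def cell_from_row_col(row_box, col_box):
    # A table cell is the geometric intersection of a detected
    # row box and a detected column box.
    xmin = max(row_box[0], col_box[0])
    ymin = max(row_box[1], col_box[1])
    xmax = min(row_box[2], col_box[2])
    ymax = min(row_box[3], col_box[3])
    if xmax <= xmin or ymax <= ymin:
        return None  # row and column do not overlap
    return (xmin, ymin, xmax, ymax)
```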
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr5e5_dec1e4_bs12 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr5e5_dec1e4_bs12
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e5_dec1e4_bs8](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e5_dec1e4_bs8) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
iasjkk/MV_EC_Detr_Model |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
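No snippet is filled in above. As a hedged illustration only (the repository name suggests a DETR-style detector, but this card does not confirm it): DETR models emit boxes as normalized `(center_x, center_y, width, height)`, and post-processing such as `post_process_object_detection` converts them to absolute pixel corners. The conversion itself is just:

```python
def cxcywh_to_xyxy(box, img_w, img_h):
    # DETR emits boxes as normalized (center_x, center_y, width, height);
    # convert to absolute (xmin, ymin, xmax, ymax) pixel coordinates.
    cx, cy, w, h = box
    return (
        (cx - w / 2) * img_w,
        (cy - h / 2) * img_h,
        (cx + w / 2) * img_w,
        (cy + h / 2) * img_h,
    )
```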
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e4_dec1e3_bs16 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr1e4_dec1e3_bs16
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e4_dec1e3_bs16 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr1e4_dec1e3_bs16
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr5e5_dec1e4_bs12](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_semicplx_v1_s1_226s_adjpar6_lr5e5_dec1e4_bs12) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
marthakk/detr_finetuned_airdataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_airdataset
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the dsi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8959
- Map: 0.3195
- Map 50: 0.7784
- Map 75: 0.1925
- Map Small: 0.3211
- Map Medium: 0.0079
- Map Large: -1.0
- Mar 1: 0.0256
- Mar 10: 0.1995
- Mar 100: 0.487
- Mar Small: 0.4896
- Mar Medium: 0.0061
- Mar Large: -1.0
- Map Falciparum Trophozoite: 0.3195
- Mar 100 Falciparum Trophozoite: 0.487
- Map Wbc: -1.0
- Mar 100 Wbc: -1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Falciparum Trophozoite | Mar 100 Falciparum Trophozoite | Map Wbc | Mar 100 Wbc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:--------------------------:|:------------------------------:|:-------:|:-----------:|
| No log | 1.0 | 209 | 1.2206 | 0.1424 | 0.401 | 0.07 | 0.1429 | 0.0328 | -1.0 | 0.0168 | 0.1239 | 0.4185 | 0.4204 | 0.0612 | -1.0 | 0.1424 | 0.4185 | -1.0 | -1.0 |
| No log | 2.0 | 418 | 1.1354 | 0.2136 | 0.585 | 0.1077 | 0.2145 | 0.0224 | -1.0 | 0.0212 | 0.1608 | 0.4102 | 0.4123 | 0.0224 | -1.0 | 0.2136 | 0.4102 | -1.0 | -1.0 |
| 1.3747 | 3.0 | 627 | 1.0729 | 0.2353 | 0.6428 | 0.1092 | 0.2365 | 0.0229 | -1.0 | 0.0216 | 0.1669 | 0.4247 | 0.4268 | 0.0245 | -1.0 | 0.2353 | 0.4247 | -1.0 | -1.0 |
| 1.3747 | 4.0 | 836 | 1.0260 | 0.2548 | 0.6701 | 0.1339 | 0.2563 | 0.0178 | -1.0 | 0.0234 | 0.1792 | 0.4424 | 0.4447 | 0.0163 | -1.0 | 0.2548 | 0.4424 | -1.0 | -1.0 |
| 1.0467 | 5.0 | 1045 | 1.0116 | 0.2576 | 0.6811 | 0.1321 | 0.2589 | 0.0208 | -1.0 | 0.0229 | 0.1773 | 0.4422 | 0.4445 | 0.0184 | -1.0 | 0.2576 | 0.4422 | -1.0 | -1.0 |
| 1.0467 | 6.0 | 1254 | 1.0150 | 0.2526 | 0.6842 | 0.1191 | 0.2537 | 0.0089 | -1.0 | 0.0226 | 0.1724 | 0.4463 | 0.4486 | 0.0082 | -1.0 | 0.2526 | 0.4463 | -1.0 | -1.0 |
| 1.0467 | 7.0 | 1463 | 0.9933 | 0.2627 | 0.699 | 0.1376 | 0.2639 | 0.0211 | -1.0 | 0.0215 | 0.1773 | 0.4458 | 0.4481 | 0.0224 | -1.0 | 0.2627 | 0.4458 | -1.0 | -1.0 |
| 0.9905 | 8.0 | 1672 | 0.9642 | 0.2797 | 0.7188 | 0.1511 | 0.2809 | 0.0112 | -1.0 | 0.0241 | 0.1858 | 0.459 | 0.4614 | 0.0143 | -1.0 | 0.2797 | 0.459 | -1.0 | -1.0 |
| 0.9905 | 9.0 | 1881 | 0.9641 | 0.2786 | 0.7209 | 0.1453 | 0.2803 | 0.0103 | -1.0 | 0.0231 | 0.1861 | 0.4534 | 0.4558 | 0.0102 | -1.0 | 0.2786 | 0.4534 | -1.0 | -1.0 |
| 0.955 | 10.0 | 2090 | 0.9869 | 0.2685 | 0.7158 | 0.1366 | 0.27 | 0.0023 | -1.0 | 0.0225 | 0.1789 | 0.4442 | 0.4465 | 0.0041 | -1.0 | 0.2685 | 0.4442 | -1.0 | -1.0 |
| 0.955 | 11.0 | 2299 | 0.9612 | 0.2837 | 0.7238 | 0.1534 | 0.2856 | 0.0067 | -1.0 | 0.0242 | 0.1878 | 0.4568 | 0.4592 | 0.0082 | -1.0 | 0.2837 | 0.4568 | -1.0 | -1.0 |
| 0.9248 | 12.0 | 2508 | 0.9437 | 0.2938 | 0.7368 | 0.1635 | 0.2954 | 0.005 | -1.0 | 0.0239 | 0.1882 | 0.4701 | 0.4727 | 0.0041 | -1.0 | 0.2938 | 0.4701 | -1.0 | -1.0 |
| 0.9248 | 13.0 | 2717 | 0.9390 | 0.289 | 0.7371 | 0.16 | 0.2903 | 0.0149 | -1.0 | 0.0254 | 0.191 | 0.4685 | 0.471 | 0.0122 | -1.0 | 0.289 | 0.4685 | -1.0 | -1.0 |
| 0.9248 | 14.0 | 2926 | 0.9321 | 0.2986 | 0.7428 | 0.1744 | 0.3002 | 0.005 | -1.0 | 0.0251 | 0.1928 | 0.4743 | 0.4768 | 0.0041 | -1.0 | 0.2986 | 0.4743 | -1.0 | -1.0 |
| 0.9027 | 15.0 | 3135 | 0.9448 | 0.2911 | 0.7418 | 0.1588 | 0.2924 | 0.0139 | -1.0 | 0.0241 | 0.1877 | 0.4678 | 0.4702 | 0.0122 | -1.0 | 0.2911 | 0.4678 | -1.0 | -1.0 |
| 0.9027 | 16.0 | 3344 | 0.9259 | 0.3033 | 0.7549 | 0.174 | 0.3047 | 0.005 | -1.0 | 0.0249 | 0.1931 | 0.4736 | 0.4762 | 0.0041 | -1.0 | 0.3033 | 0.4736 | -1.0 | -1.0 |
| 0.8725 | 17.0 | 3553 | 0.9200 | 0.3039 | 0.7554 | 0.1795 | 0.3055 | 0.0069 | -1.0 | 0.0259 | 0.1949 | 0.4764 | 0.479 | 0.0061 | -1.0 | 0.3039 | 0.4764 | -1.0 | -1.0 |
| 0.8725 | 18.0 | 3762 | 0.9129 | 0.3068 | 0.7622 | 0.1786 | 0.3083 | 0.0089 | -1.0 | 0.026 | 0.1961 | 0.4817 | 0.4842 | 0.0082 | -1.0 | 0.3068 | 0.4817 | -1.0 | -1.0 |
| 0.8725 | 19.0 | 3971 | 0.9053 | 0.3129 | 0.7699 | 0.182 | 0.3146 | 0.0119 | -1.0 | 0.0253 | 0.1986 | 0.4806 | 0.4832 | 0.0102 | -1.0 | 0.3129 | 0.4806 | -1.0 | -1.0 |
| 0.8532 | 20.0 | 4180 | 0.9124 | 0.3076 | 0.7661 | 0.1794 | 0.3093 | 0.0069 | -1.0 | 0.0252 | 0.1972 | 0.4798 | 0.4823 | 0.0061 | -1.0 | 0.3076 | 0.4798 | -1.0 | -1.0 |
| 0.8532 | 21.0 | 4389 | 0.9060 | 0.3129 | 0.7694 | 0.182 | 0.3146 | 0.0139 | -1.0 | 0.0254 | 0.1988 | 0.4811 | 0.4837 | 0.0122 | -1.0 | 0.3129 | 0.4811 | -1.0 | -1.0 |
| 0.8362 | 22.0 | 4598 | 0.9007 | 0.3157 | 0.7733 | 0.1886 | 0.3173 | 0.0079 | -1.0 | 0.0255 | 0.2005 | 0.4834 | 0.4859 | 0.0061 | -1.0 | 0.3157 | 0.4834 | -1.0 | -1.0 |
| 0.8362 | 23.0 | 4807 | 0.9036 | 0.3148 | 0.7702 | 0.1859 | 0.3159 | 0.0119 | -1.0 | 0.0255 | 0.1982 | 0.4859 | 0.4884 | 0.0102 | -1.0 | 0.3148 | 0.4859 | -1.0 | -1.0 |
| 0.8211 | 24.0 | 5016 | 0.8988 | 0.3159 | 0.7733 | 0.1875 | 0.3172 | 0.005 | -1.0 | 0.0253 | 0.1988 | 0.4844 | 0.487 | 0.0041 | -1.0 | 0.3159 | 0.4844 | -1.0 | -1.0 |
| 0.8211 | 25.0 | 5225 | 0.8989 | 0.3175 | 0.7741 | 0.1888 | 0.3189 | 0.0079 | -1.0 | 0.0256 | 0.1995 | 0.486 | 0.4886 | 0.0061 | -1.0 | 0.3175 | 0.486 | -1.0 | -1.0 |
| 0.8211 | 26.0 | 5434 | 0.8980 | 0.3188 | 0.776 | 0.1918 | 0.3204 | 0.005 | -1.0 | 0.0258 | 0.1998 | 0.4867 | 0.4893 | 0.0041 | -1.0 | 0.3188 | 0.4867 | -1.0 | -1.0 |
| 0.8091 | 27.0 | 5643 | 0.8953 | 0.3204 | 0.7786 | 0.1931 | 0.3219 | 0.0079 | -1.0 | 0.026 | 0.2002 | 0.4863 | 0.4889 | 0.0061 | -1.0 | 0.3204 | 0.4863 | -1.0 | -1.0 |
| 0.8091 | 28.0 | 5852 | 0.8973 | 0.3192 | 0.7784 | 0.1911 | 0.3208 | 0.0079 | -1.0 | 0.0255 | 0.199 | 0.4867 | 0.4892 | 0.0061 | -1.0 | 0.3192 | 0.4867 | -1.0 | -1.0 |
| 0.8001 | 29.0 | 6061 | 0.8962 | 0.3196 | 0.7785 | 0.1926 | 0.3211 | 0.0079 | -1.0 | 0.0257 | 0.1994 | 0.487 | 0.4896 | 0.0061 | -1.0 | 0.3196 | 0.487 | -1.0 | -1.0 |
| 0.8001 | 30.0 | 6270 | 0.8959 | 0.3195 | 0.7784 | 0.1925 | 0.3211 | 0.0079 | -1.0 | 0.0256 | 0.1995 | 0.487 | 0.4896 | 0.0061 | -1.0 | 0.3195 | 0.487 | -1.0 | -1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"falciparum_trophozoite",
"wbc"
] |
tosa-no-onchan/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [tosa-no-onchan/detr-resnet-50_finetuned_cppe5](https://huggingface.co/tosa-no-onchan/detr-resnet-50_finetuned_cppe5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
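All of these cards train with Adam using betas=(0.9, 0.999) and epsilon=1e-08. For readers unfamiliar with the update rule, a scalar sketch of a single Adam step with those constants (illustrative only; the real optimizer operates on parameter tensors):

```python
import math

def adam_step(p, grad, m, v, t, lr=1e-5, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update for a single scalar parameter at step t (1-indexed).
    m = b1 * m + (1 - b1) * grad            # first-moment EMA
    v = b2 * v + (1 - b2) * grad * grad     # second-moment EMA
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v
```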
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e4_dec1e3_bs16 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e4_dec1e3_bs16
This model is a fine-tuned version of [nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12](https://huggingface.co/nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e5_dec1e4_bs12) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
aromo17/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
iasjkk/MV_final_6500EPOCH |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16"
] |
nsugianto/tblstructrecog_tuned_tbltransstrucrecog_noncomplex_complex_conlash_b5_1807s_lr1e6_dec1e5_bs4 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_tuned_tbltransstrucrecog_noncomplex_complex_conlash_b5_1807s_lr1e6_dec1e5_bs4
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 750
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
nsugianto/tblstruct_tbltransstrucrecog_noncomplx_complx_conlash_b5_1807s_adjpar6_lr1e6_dec1e5_bs4 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstruct_tbltransstrucrecog_noncomplx_complx_conlash_b5_1807s_adjpar6_lr1e6_dec1e5_bs4
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
- mixed_precision_training: Native AMP
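The optimizer line above ("Adam with betas=(0.9,0.999) and epsilon=1e-08") is the standard Adam update. As a hedged illustration only — not the Trainer's actual code — a single-parameter Adam step with those exact defaults looks like this:

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-6, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter.

    m and v are the running first/second moment estimates, t is the
    1-based step count. Defaults match the card's hyperparameters.
    Returns the updated (param, m, v).
    """
    m = beta1 * m + (1 - beta1) * grad           # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad    # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```

On the first step the bias correction makes the effective update magnitude approximately `lr`, regardless of the gradient's scale.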
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
iasjkk/MV_BIG_Epoch_12000 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16"
] |