| model_id (string, 9-102 chars) | model_card (string, 4-343k chars) | model_labels (list, 2-50.8k items) |
|---|---|---|
uisikdag/autotrain-detr-resnet-50-safety-vest-detection |
# Model Trained Using AutoTrain
- Problem type: Object Detection
## Validation Metrics
loss: 0.8105313181877136
map: 0.4295
map_50: 0.7538
map_75: 0.4237
map_small: 0.1306
map_medium: 0.3983
map_large: 0.4773
mar_1: 0.2729
mar_10: 0.5889
mar_100: 0.6462
mar_small: 0.2889
mar_medium: 0.5652
mar_large: 0.7258
| [
"class_0",
"class_1",
"class_2"
] |
edgarromo/detr-finetuned-cppe-5-10k-steps |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-finetuned-cppe-5-10k-steps
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the danelcsb/cppe-5-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9557
- Map: 0.3883
- Map 50: 0.6828
- Map 75: 0.3786
- Map Small: 0.1193
- Map Medium: 0.3371
- Map Large: 0.5338
- Mar 1: 0.3968
- Mar 10: 0.5689
- Mar 100: 0.58
- Mar Small: 0.3679
- Mar Medium: 0.5385
- Mar Large: 0.7028
- Map Coverall: 0.6436
- Mar 100 Coverall: 0.7944
- Map Face Shield: 0.2789
- Mar 100 Face Shield: 0.5927
- Map Gloves: 0.312
- Mar 100 Gloves: 0.4922
- Map Goggles: 0.2522
- Mar 100 Goggles: 0.4714
- Map Mask: 0.4546
- Mar 100 Mask: 0.5494
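The `Map 50` and `Map 75` figures above are COCO-style mean average precision at IoU thresholds of 0.50 and 0.75. As a minimal illustrative sketch (not part of the evaluation code), this is the intersection-over-union test that decides whether a predicted box matches a ground-truth box at a given threshold; the `[x_min, y_min, x_max, y_max]` box format is an assumption for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in [x_min, y_min, x_max, y_max] format."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# This prediction matches at the 0.50 threshold but not at 0.75,
# so it would count toward Map 50 but not Map 75:
print(iou([0, 0, 10, 10], [0, 0, 10, 6]))  # 0.6
```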
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
- mixed_precision_training: Native AMP
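With `lr_scheduler_type: linear` and no warmup listed, the learning rate decays from 5e-05 toward 0 over the run's 10200 steps (see the final row of the table below). A sketch of that schedule, assuming zero warmup steps (the function name and the no-warmup assumption are illustrative, not the Trainer's actual implementation):

```python
def linear_lr(step, base_lr=5e-05, total_steps=10200):
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0))      # 5e-05 at the start of training
print(linear_lr(5100))   # 2.5e-05 halfway through
print(linear_lr(10200))  # 0.0 at the final step
```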
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 2.5068 | 1.0 | 102 | 1.9580 | 0.028 | 0.063 | 0.0222 | 0.0098 | 0.0068 | 0.0324 | 0.0737 | 0.196 | 0.2324 | 0.0874 | 0.1698 | 0.2481 | 0.1165 | 0.6757 | 0.0 | 0.0 | 0.005 | 0.2056 | 0.0 | 0.0 | 0.0186 | 0.2809 |
| 2.0122 | 2.0 | 204 | 1.7956 | 0.0523 | 0.1085 | 0.0436 | 0.017 | 0.0244 | 0.0543 | 0.1129 | 0.2302 | 0.2701 | 0.145 | 0.2839 | 0.2585 | 0.2106 | 0.6958 | 0.0 | 0.0 | 0.0105 | 0.2676 | 0.0 | 0.0 | 0.0404 | 0.387 |
| 2.0014 | 3.0 | 306 | 1.8244 | 0.0531 | 0.107 | 0.044 | 0.0094 | 0.0233 | 0.0595 | 0.1079 | 0.2132 | 0.2413 | 0.1223 | 0.2067 | 0.2344 | 0.2287 | 0.7201 | 0.0 | 0.0 | 0.0111 | 0.2335 | 0.0 | 0.0 | 0.0257 | 0.2531 |
| 1.9516 | 4.0 | 408 | 1.8204 | 0.0668 | 0.1439 | 0.0575 | 0.0253 | 0.0714 | 0.0688 | 0.1177 | 0.2157 | 0.2485 | 0.1137 | 0.2796 | 0.2604 | 0.2519 | 0.6979 | 0.0 | 0.0 | 0.01 | 0.238 | 0.0 | 0.0 | 0.0723 | 0.3068 |
| 1.7527 | 5.0 | 510 | 1.6670 | 0.1042 | 0.2045 | 0.0884 | 0.0317 | 0.1257 | 0.1157 | 0.1478 | 0.2544 | 0.2863 | 0.133 | 0.3268 | 0.2876 | 0.3563 | 0.7201 | 0.012 | 0.0545 | 0.0219 | 0.324 | 0.0 | 0.0 | 0.131 | 0.3327 |
| 1.8596 | 6.0 | 612 | 1.6266 | 0.1193 | 0.2393 | 0.0995 | 0.0364 | 0.1388 | 0.1317 | 0.1457 | 0.2592 | 0.2891 | 0.124 | 0.3271 | 0.3035 | 0.3829 | 0.6979 | 0.0094 | 0.0782 | 0.0217 | 0.2916 | 0.0 | 0.0 | 0.1825 | 0.3778 |
| 1.6759 | 7.0 | 714 | 1.4716 | 0.1598 | 0.3152 | 0.1384 | 0.048 | 0.1632 | 0.1809 | 0.2023 | 0.3335 | 0.3585 | 0.1378 | 0.3458 | 0.4061 | 0.4726 | 0.7514 | 0.0347 | 0.18 | 0.0482 | 0.3346 | 0.0104 | 0.0905 | 0.233 | 0.4358 |
| 1.612 | 8.0 | 816 | 1.4603 | 0.1794 | 0.346 | 0.1549 | 0.0597 | 0.1659 | 0.2141 | 0.2235 | 0.3712 | 0.3969 | 0.2952 | 0.3659 | 0.4514 | 0.509 | 0.7507 | 0.0674 | 0.3382 | 0.0507 | 0.3056 | 0.0288 | 0.1444 | 0.241 | 0.4457 |
| 1.5547 | 9.0 | 918 | 1.4071 | 0.1953 | 0.3824 | 0.1667 | 0.0693 | 0.1779 | 0.2372 | 0.2377 | 0.4103 | 0.4321 | 0.3014 | 0.3892 | 0.5102 | 0.5281 | 0.7431 | 0.0766 | 0.3927 | 0.05 | 0.3095 | 0.0317 | 0.2556 | 0.2902 | 0.4599 |
| 1.4847 | 10.0 | 1020 | 1.4339 | 0.1997 | 0.4047 | 0.1715 | 0.0714 | 0.1897 | 0.2371 | 0.2308 | 0.3932 | 0.4133 | 0.1816 | 0.3825 | 0.4858 | 0.518 | 0.7792 | 0.0834 | 0.3473 | 0.0588 | 0.2849 | 0.0538 | 0.2254 | 0.2844 | 0.4296 |
| 1.4681 | 11.0 | 1122 | 1.3431 | 0.2109 | 0.4195 | 0.182 | 0.0854 | 0.2017 | 0.2622 | 0.2488 | 0.4273 | 0.451 | 0.2525 | 0.423 | 0.5428 | 0.4944 | 0.7375 | 0.0947 | 0.4073 | 0.0779 | 0.343 | 0.061 | 0.2937 | 0.3266 | 0.4735 |
| 1.4665 | 12.0 | 1224 | 1.3346 | 0.2235 | 0.4539 | 0.1869 | 0.093 | 0.2063 | 0.2762 | 0.2681 | 0.4279 | 0.443 | 0.2863 | 0.4064 | 0.5185 | 0.5328 | 0.7674 | 0.1148 | 0.4182 | 0.0888 | 0.2564 | 0.0498 | 0.3048 | 0.3313 | 0.4685 |
| 1.4067 | 13.0 | 1326 | 1.2385 | 0.2444 | 0.4487 | 0.2147 | 0.0914 | 0.2272 | 0.3054 | 0.2886 | 0.4702 | 0.488 | 0.3035 | 0.4361 | 0.5797 | 0.5738 | 0.7764 | 0.1172 | 0.5 | 0.0953 | 0.3453 | 0.0684 | 0.3222 | 0.3674 | 0.4963 |
| 1.3582 | 14.0 | 1428 | 1.3032 | 0.2213 | 0.4326 | 0.1972 | 0.1149 | 0.1948 | 0.2666 | 0.2565 | 0.438 | 0.4608 | 0.3004 | 0.4138 | 0.5332 | 0.5591 | 0.7729 | 0.0791 | 0.42 | 0.101 | 0.3793 | 0.0403 | 0.2825 | 0.3268 | 0.4494 |
| 1.3927 | 15.0 | 1530 | 1.3047 | 0.2134 | 0.4279 | 0.1705 | 0.1082 | 0.2018 | 0.2623 | 0.2699 | 0.4284 | 0.4508 | 0.3321 | 0.3983 | 0.5472 | 0.546 | 0.7632 | 0.1051 | 0.4182 | 0.0928 | 0.3464 | 0.0309 | 0.2794 | 0.2921 | 0.4469 |
| 1.3589 | 16.0 | 1632 | 1.2959 | 0.2063 | 0.4044 | 0.1811 | 0.0733 | 0.2029 | 0.2634 | 0.2588 | 0.4274 | 0.4442 | 0.2993 | 0.3845 | 0.5285 | 0.55 | 0.7597 | 0.0757 | 0.44 | 0.1116 | 0.3715 | 0.0186 | 0.219 | 0.2756 | 0.4309 |
| 1.3704 | 17.0 | 1734 | 1.3093 | 0.225 | 0.4359 | 0.2054 | 0.0672 | 0.2058 | 0.2784 | 0.2743 | 0.4419 | 0.4595 | 0.1763 | 0.3969 | 0.5652 | 0.5552 | 0.7611 | 0.127 | 0.4564 | 0.0876 | 0.3358 | 0.0502 | 0.3095 | 0.3048 | 0.4346 |
| 1.3464 | 18.0 | 1836 | 1.2737 | 0.2253 | 0.4451 | 0.1894 | 0.0943 | 0.2237 | 0.2715 | 0.2899 | 0.4611 | 0.4872 | 0.3134 | 0.4398 | 0.5967 | 0.5269 | 0.75 | 0.105 | 0.5382 | 0.1119 | 0.3598 | 0.046 | 0.3381 | 0.3367 | 0.45 |
| 1.2997 | 19.0 | 1938 | 1.3513 | 0.2108 | 0.4438 | 0.1639 | 0.0764 | 0.1924 | 0.2461 | 0.2401 | 0.4147 | 0.4407 | 0.2768 | 0.3867 | 0.529 | 0.5721 | 0.7479 | 0.0839 | 0.3673 | 0.0841 | 0.3654 | 0.049 | 0.3143 | 0.2648 | 0.4086 |
| 1.3026 | 20.0 | 2040 | 1.2170 | 0.2452 | 0.47 | 0.2203 | 0.0944 | 0.2252 | 0.3061 | 0.2962 | 0.4682 | 0.4834 | 0.2715 | 0.4083 | 0.5942 | 0.5735 | 0.7604 | 0.1328 | 0.4855 | 0.1514 | 0.4067 | 0.0475 | 0.3111 | 0.321 | 0.4531 |
| 1.2999 | 21.0 | 2142 | 1.1721 | 0.2549 | 0.476 | 0.2494 | 0.0937 | 0.2324 | 0.3261 | 0.3091 | 0.4881 | 0.5026 | 0.2858 | 0.4534 | 0.6074 | 0.5904 | 0.7729 | 0.1445 | 0.4927 | 0.1217 | 0.4123 | 0.0523 | 0.346 | 0.3654 | 0.4889 |
| 1.2456 | 22.0 | 2244 | 1.2621 | 0.2326 | 0.4787 | 0.1896 | 0.1004 | 0.2244 | 0.283 | 0.2898 | 0.4705 | 0.4871 | 0.3 | 0.4122 | 0.5997 | 0.5195 | 0.7451 | 0.1486 | 0.4745 | 0.1433 | 0.3911 | 0.0608 | 0.4032 | 0.2908 | 0.4216 |
| 1.2347 | 23.0 | 2346 | 1.1968 | 0.2498 | 0.4855 | 0.221 | 0.1597 | 0.2285 | 0.3066 | 0.3105 | 0.4779 | 0.4912 | 0.2936 | 0.4203 | 0.6006 | 0.5644 | 0.7611 | 0.1351 | 0.4782 | 0.1783 | 0.4358 | 0.0475 | 0.3302 | 0.3236 | 0.4506 |
| 1.1921 | 24.0 | 2448 | 1.1848 | 0.2547 | 0.4882 | 0.2391 | 0.0778 | 0.2204 | 0.3315 | 0.2964 | 0.4737 | 0.4873 | 0.2524 | 0.394 | 0.5864 | 0.5565 | 0.7535 | 0.1296 | 0.46 | 0.1757 | 0.4385 | 0.0592 | 0.3032 | 0.3528 | 0.4815 |
| 1.2161 | 25.0 | 2550 | 1.1540 | 0.2748 | 0.5 | 0.2762 | 0.1216 | 0.2465 | 0.3579 | 0.3206 | 0.4912 | 0.5028 | 0.2267 | 0.4431 | 0.6232 | 0.5849 | 0.8014 | 0.1621 | 0.4618 | 0.1715 | 0.4017 | 0.0842 | 0.3508 | 0.3711 | 0.4981 |
| 1.2265 | 26.0 | 2652 | 1.1872 | 0.2588 | 0.5142 | 0.2234 | 0.1049 | 0.2254 | 0.3452 | 0.305 | 0.4834 | 0.4956 | 0.2824 | 0.4573 | 0.6008 | 0.5858 | 0.7826 | 0.124 | 0.4727 | 0.1688 | 0.3849 | 0.0996 | 0.373 | 0.316 | 0.4648 |
| 1.1619 | 27.0 | 2754 | 1.1557 | 0.261 | 0.5139 | 0.2223 | 0.1001 | 0.2469 | 0.3395 | 0.3001 | 0.4991 | 0.5138 | 0.2947 | 0.4964 | 0.6136 | 0.6035 | 0.791 | 0.1446 | 0.5091 | 0.1859 | 0.4441 | 0.0738 | 0.3905 | 0.2972 | 0.4346 |
| 1.1671 | 28.0 | 2856 | 1.1103 | 0.2723 | 0.5256 | 0.2255 | 0.0904 | 0.2222 | 0.3648 | 0.3276 | 0.5035 | 0.518 | 0.2321 | 0.4173 | 0.6441 | 0.5958 | 0.7778 | 0.1333 | 0.4982 | 0.1702 | 0.438 | 0.0932 | 0.3889 | 0.3691 | 0.487 |
| 1.1463 | 29.0 | 2958 | 1.1234 | 0.2679 | 0.5065 | 0.2356 | 0.1073 | 0.2455 | 0.3687 | 0.3378 | 0.5025 | 0.5181 | 0.3251 | 0.4521 | 0.6292 | 0.5826 | 0.7688 | 0.1427 | 0.4982 | 0.1845 | 0.4508 | 0.0767 | 0.3968 | 0.3532 | 0.4759 |
| 1.1374 | 30.0 | 3060 | 1.1353 | 0.2815 | 0.5379 | 0.2442 | 0.1098 | 0.2314 | 0.3621 | 0.3236 | 0.5069 | 0.5243 | 0.2765 | 0.4815 | 0.628 | 0.5967 | 0.7694 | 0.1941 | 0.54 | 0.1818 | 0.4363 | 0.068 | 0.3905 | 0.3668 | 0.4852 |
| 1.1409 | 31.0 | 3162 | 1.1136 | 0.2944 | 0.5876 | 0.2576 | 0.1244 | 0.2457 | 0.3953 | 0.3349 | 0.5033 | 0.517 | 0.3338 | 0.4603 | 0.6293 | 0.6012 | 0.7708 | 0.1623 | 0.4982 | 0.2146 | 0.443 | 0.1384 | 0.3968 | 0.3556 | 0.4759 |
| 1.1216 | 32.0 | 3264 | 1.1179 | 0.2835 | 0.5332 | 0.2656 | 0.098 | 0.2522 | 0.3857 | 0.3304 | 0.5174 | 0.532 | 0.3335 | 0.4975 | 0.638 | 0.5944 | 0.784 | 0.15 | 0.5018 | 0.1956 | 0.4441 | 0.1165 | 0.4317 | 0.3608 | 0.4981 |
| 1.1202 | 33.0 | 3366 | 1.1690 | 0.2505 | 0.5131 | 0.2151 | 0.1067 | 0.2247 | 0.3412 | 0.2935 | 0.4917 | 0.5099 | 0.3288 | 0.4888 | 0.6058 | 0.4839 | 0.7375 | 0.124 | 0.4964 | 0.205 | 0.4374 | 0.0794 | 0.3905 | 0.3598 | 0.4877 |
| 1.1005 | 34.0 | 3468 | 1.0671 | 0.2997 | 0.5595 | 0.2821 | 0.1116 | 0.2669 | 0.4054 | 0.3435 | 0.5154 | 0.5322 | 0.3146 | 0.4567 | 0.6486 | 0.6208 | 0.7903 | 0.1752 | 0.5527 | 0.2243 | 0.4413 | 0.1169 | 0.3651 | 0.361 | 0.5117 |
| 1.095 | 35.0 | 3570 | 1.1332 | 0.2782 | 0.5383 | 0.2462 | 0.1002 | 0.2871 | 0.3769 | 0.3318 | 0.4947 | 0.5085 | 0.2792 | 0.4959 | 0.6081 | 0.5438 | 0.7722 | 0.1983 | 0.5236 | 0.1886 | 0.4112 | 0.0997 | 0.3476 | 0.3605 | 0.4877 |
| 1.0893 | 36.0 | 3672 | 1.1231 | 0.2884 | 0.5378 | 0.2562 | 0.0925 | 0.2626 | 0.3803 | 0.3295 | 0.5049 | 0.5224 | 0.2844 | 0.4926 | 0.6166 | 0.5897 | 0.7639 | 0.1526 | 0.5164 | 0.2039 | 0.4441 | 0.1188 | 0.3825 | 0.3768 | 0.5049 |
| 1.1081 | 37.0 | 3774 | 1.0871 | 0.2936 | 0.5527 | 0.2571 | 0.1084 | 0.2656 | 0.3852 | 0.3351 | 0.513 | 0.5316 | 0.2868 | 0.5013 | 0.6222 | 0.6108 | 0.7764 | 0.1648 | 0.5455 | 0.1958 | 0.4302 | 0.103 | 0.3937 | 0.3934 | 0.5123 |
| 1.0929 | 38.0 | 3876 | 1.1043 | 0.3024 | 0.5575 | 0.2804 | 0.1256 | 0.2607 | 0.409 | 0.3453 | 0.5166 | 0.5305 | 0.309 | 0.4687 | 0.64 | 0.6099 | 0.7729 | 0.1977 | 0.5236 | 0.1875 | 0.4257 | 0.1212 | 0.4111 | 0.3959 | 0.5191 |
| 1.0806 | 39.0 | 3978 | 1.1396 | 0.2842 | 0.5494 | 0.2531 | 0.1204 | 0.2438 | 0.378 | 0.3186 | 0.4869 | 0.4977 | 0.2946 | 0.4269 | 0.5969 | 0.5722 | 0.7674 | 0.1555 | 0.4982 | 0.2185 | 0.4196 | 0.1183 | 0.3365 | 0.3566 | 0.4667 |
| 1.0974 | 40.0 | 4080 | 1.0866 | 0.291 | 0.5598 | 0.2596 | 0.1355 | 0.2464 | 0.4061 | 0.3438 | 0.4977 | 0.5084 | 0.3066 | 0.4423 | 0.621 | 0.5999 | 0.791 | 0.1368 | 0.5091 | 0.2265 | 0.4447 | 0.126 | 0.3063 | 0.3659 | 0.4907 |
| 1.073 | 41.0 | 4182 | 1.0857 | 0.2995 | 0.5459 | 0.2801 | 0.1843 | 0.2416 | 0.4097 | 0.3386 | 0.5078 | 0.5186 | 0.3106 | 0.4305 | 0.629 | 0.6073 | 0.7924 | 0.1644 | 0.5 | 0.2256 | 0.4274 | 0.0944 | 0.3619 | 0.4056 | 0.5111 |
| 1.0465 | 42.0 | 4284 | 1.0739 | 0.3066 | 0.5775 | 0.2667 | 0.1102 | 0.2644 | 0.4119 | 0.3534 | 0.5116 | 0.5237 | 0.3007 | 0.4541 | 0.6397 | 0.5951 | 0.791 | 0.1646 | 0.5055 | 0.2441 | 0.4592 | 0.1208 | 0.3651 | 0.4084 | 0.4975 |
| 1.0748 | 43.0 | 4386 | 1.1166 | 0.2943 | 0.5668 | 0.2546 | 0.1363 | 0.2789 | 0.3821 | 0.3395 | 0.4978 | 0.5051 | 0.296 | 0.493 | 0.6026 | 0.6161 | 0.7792 | 0.1919 | 0.5055 | 0.2007 | 0.4067 | 0.1002 | 0.3556 | 0.3627 | 0.4784 |
| 1.0632 | 44.0 | 4488 | 1.1081 | 0.2808 | 0.5393 | 0.2424 | 0.0885 | 0.2449 | 0.4 | 0.3458 | 0.5062 | 0.5158 | 0.2818 | 0.4733 | 0.6337 | 0.5962 | 0.7875 | 0.1076 | 0.5055 | 0.2123 | 0.4369 | 0.1076 | 0.3444 | 0.3805 | 0.5049 |
| 1.047 | 45.0 | 4590 | 1.0471 | 0.3169 | 0.5899 | 0.2866 | 0.1514 | 0.2864 | 0.4332 | 0.3533 | 0.5187 | 0.5286 | 0.3206 | 0.4866 | 0.6547 | 0.6178 | 0.7979 | 0.1907 | 0.5055 | 0.2525 | 0.4374 | 0.1322 | 0.3952 | 0.3911 | 0.5068 |
| 1.0293 | 46.0 | 4692 | 1.0620 | 0.3132 | 0.5894 | 0.2843 | 0.146 | 0.272 | 0.4305 | 0.3423 | 0.5257 | 0.538 | 0.3437 | 0.4895 | 0.6481 | 0.6203 | 0.7799 | 0.1997 | 0.5455 | 0.2634 | 0.462 | 0.1141 | 0.4016 | 0.3683 | 0.5012 |
| 1.0105 | 47.0 | 4794 | 1.0400 | 0.3331 | 0.6222 | 0.3026 | 0.1343 | 0.2826 | 0.4504 | 0.3694 | 0.5295 | 0.5401 | 0.3587 | 0.4644 | 0.6548 | 0.6184 | 0.7854 | 0.2126 | 0.54 | 0.2666 | 0.4581 | 0.1769 | 0.4079 | 0.3911 | 0.5093 |
| 0.991 | 48.0 | 4896 | 1.0645 | 0.3199 | 0.6 | 0.2857 | 0.1505 | 0.2765 | 0.4383 | 0.3624 | 0.5282 | 0.541 | 0.3301 | 0.5116 | 0.6508 | 0.5956 | 0.7792 | 0.2264 | 0.5509 | 0.2666 | 0.4721 | 0.165 | 0.4238 | 0.3459 | 0.479 |
| 0.976 | 49.0 | 4998 | 1.0483 | 0.3306 | 0.6056 | 0.2997 | 0.1136 | 0.2684 | 0.4568 | 0.3617 | 0.543 | 0.5565 | 0.3358 | 0.5038 | 0.676 | 0.6141 | 0.7924 | 0.1803 | 0.5491 | 0.2743 | 0.4709 | 0.1892 | 0.4603 | 0.395 | 0.5099 |
| 0.9888 | 50.0 | 5100 | 1.0753 | 0.321 | 0.6015 | 0.2876 | 0.143 | 0.2657 | 0.4313 | 0.3474 | 0.5291 | 0.541 | 0.3402 | 0.438 | 0.6515 | 0.5819 | 0.7604 | 0.1978 | 0.5255 | 0.261 | 0.4542 | 0.1709 | 0.4476 | 0.3934 | 0.5173 |
| 1.0303 | 51.0 | 5202 | 1.0248 | 0.3282 | 0.5991 | 0.3065 | 0.1455 | 0.2856 | 0.4454 | 0.3718 | 0.5362 | 0.5502 | 0.3088 | 0.4787 | 0.6828 | 0.6119 | 0.7826 | 0.2229 | 0.5527 | 0.2601 | 0.4665 | 0.1506 | 0.4349 | 0.3953 | 0.5142 |
| 0.9842 | 52.0 | 5304 | 1.0505 | 0.3214 | 0.5961 | 0.3128 | 0.2077 | 0.2931 | 0.4294 | 0.3723 | 0.5303 | 0.5421 | 0.3205 | 0.4851 | 0.6747 | 0.6116 | 0.7792 | 0.2311 | 0.56 | 0.263 | 0.4637 | 0.1045 | 0.4127 | 0.3969 | 0.4951 |
| 0.9696 | 53.0 | 5406 | 1.0265 | 0.3293 | 0.6019 | 0.3018 | 0.106 | 0.2979 | 0.4461 | 0.3822 | 0.5369 | 0.5509 | 0.2992 | 0.4985 | 0.6821 | 0.6457 | 0.7889 | 0.1915 | 0.5218 | 0.2584 | 0.4598 | 0.1462 | 0.4587 | 0.4048 | 0.5253 |
| 0.9543 | 54.0 | 5508 | 1.0281 | 0.347 | 0.6122 | 0.3339 | 0.1872 | 0.3124 | 0.4716 | 0.3799 | 0.5466 | 0.5557 | 0.3377 | 0.5343 | 0.6737 | 0.6382 | 0.7743 | 0.2366 | 0.5873 | 0.2535 | 0.4497 | 0.1863 | 0.4381 | 0.4203 | 0.529 |
| 0.9422 | 55.0 | 5610 | 1.0071 | 0.3306 | 0.5881 | 0.3249 | 0.1449 | 0.284 | 0.4468 | 0.3578 | 0.5375 | 0.5489 | 0.3157 | 0.4889 | 0.6696 | 0.6414 | 0.7819 | 0.1648 | 0.4945 | 0.2706 | 0.4749 | 0.1572 | 0.4714 | 0.4188 | 0.5216 |
| 0.9466 | 56.0 | 5712 | 1.0298 | 0.3511 | 0.6424 | 0.3371 | 0.132 | 0.3184 | 0.4571 | 0.366 | 0.5341 | 0.5456 | 0.2969 | 0.4929 | 0.6627 | 0.6421 | 0.7826 | 0.2288 | 0.5109 | 0.2988 | 0.4955 | 0.1643 | 0.4206 | 0.4217 | 0.5185 |
| 0.9531 | 57.0 | 5814 | 1.0100 | 0.3488 | 0.6207 | 0.3287 | 0.1347 | 0.3035 | 0.4695 | 0.3805 | 0.5389 | 0.5484 | 0.3205 | 0.4835 | 0.6672 | 0.6501 | 0.7812 | 0.206 | 0.52 | 0.3229 | 0.4978 | 0.1529 | 0.4175 | 0.412 | 0.5253 |
| 0.9513 | 58.0 | 5916 | 1.0160 | 0.3365 | 0.6032 | 0.3187 | 0.1036 | 0.2929 | 0.4571 | 0.3723 | 0.547 | 0.5577 | 0.3462 | 0.5054 | 0.6651 | 0.6416 | 0.7757 | 0.2133 | 0.5345 | 0.2843 | 0.4777 | 0.1382 | 0.4698 | 0.405 | 0.5309 |
| 0.9247 | 59.0 | 6018 | 1.0064 | 0.3383 | 0.6145 | 0.3065 | 0.1476 | 0.2866 | 0.454 | 0.3707 | 0.541 | 0.5512 | 0.3218 | 0.4915 | 0.6703 | 0.6455 | 0.7868 | 0.229 | 0.5673 | 0.2756 | 0.4698 | 0.132 | 0.4286 | 0.4093 | 0.5037 |
| 0.9127 | 60.0 | 6120 | 1.0243 | 0.3527 | 0.6155 | 0.3353 | 0.1245 | 0.2904 | 0.4712 | 0.3676 | 0.5367 | 0.5512 | 0.3445 | 0.4531 | 0.6745 | 0.644 | 0.7972 | 0.2242 | 0.5345 | 0.2887 | 0.4872 | 0.1663 | 0.4079 | 0.44 | 0.529 |
| 0.924 | 61.0 | 6222 | 1.0201 | 0.3467 | 0.6187 | 0.3291 | 0.1259 | 0.3009 | 0.4658 | 0.3669 | 0.542 | 0.5563 | 0.3672 | 0.491 | 0.6729 | 0.6197 | 0.7778 | 0.2337 | 0.5345 | 0.2796 | 0.4899 | 0.1768 | 0.454 | 0.4237 | 0.5253 |
| 0.9289 | 62.0 | 6324 | 1.0056 | 0.3575 | 0.6328 | 0.3288 | 0.1522 | 0.3019 | 0.489 | 0.3743 | 0.5457 | 0.5581 | 0.3284 | 0.5325 | 0.6811 | 0.6302 | 0.7979 | 0.2679 | 0.5491 | 0.2943 | 0.4777 | 0.1619 | 0.4397 | 0.433 | 0.5259 |
| 0.9038 | 63.0 | 6426 | 1.0303 | 0.3533 | 0.6337 | 0.3302 | 0.1265 | 0.3128 | 0.4751 | 0.3771 | 0.5478 | 0.5629 | 0.3565 | 0.4998 | 0.683 | 0.6373 | 0.7917 | 0.2518 | 0.5673 | 0.3044 | 0.4922 | 0.1579 | 0.4429 | 0.415 | 0.5204 |
| 0.8977 | 64.0 | 6528 | 1.0282 | 0.354 | 0.639 | 0.3309 | 0.1772 | 0.2979 | 0.4844 | 0.3783 | 0.5482 | 0.5601 | 0.3423 | 0.5408 | 0.6769 | 0.6306 | 0.7854 | 0.2532 | 0.5655 | 0.2898 | 0.4872 | 0.1809 | 0.4349 | 0.4156 | 0.5278 |
| 0.8874 | 65.0 | 6630 | 1.0007 | 0.362 | 0.6424 | 0.3444 | 0.1194 | 0.3101 | 0.4973 | 0.3896 | 0.5495 | 0.5616 | 0.3523 | 0.4885 | 0.6787 | 0.6247 | 0.7833 | 0.2444 | 0.5327 | 0.2965 | 0.4849 | 0.2013 | 0.4603 | 0.4432 | 0.5469 |
| 0.8984 | 66.0 | 6732 | 1.0091 | 0.3664 | 0.6534 | 0.333 | 0.1443 | 0.3091 | 0.5039 | 0.3763 | 0.5434 | 0.5532 | 0.3357 | 0.4781 | 0.6748 | 0.6249 | 0.7812 | 0.2678 | 0.5527 | 0.3095 | 0.4849 | 0.2083 | 0.4222 | 0.4217 | 0.5247 |
| 0.8936 | 67.0 | 6834 | 1.0070 | 0.3602 | 0.6401 | 0.3458 | 0.1338 | 0.2956 | 0.4938 | 0.377 | 0.5432 | 0.5578 | 0.3382 | 0.4857 | 0.6854 | 0.626 | 0.7826 | 0.2655 | 0.5709 | 0.2838 | 0.4687 | 0.1844 | 0.4302 | 0.4414 | 0.5364 |
| 0.8855 | 68.0 | 6936 | 0.9896 | 0.3699 | 0.6389 | 0.3531 | 0.1421 | 0.331 | 0.4972 | 0.3846 | 0.5506 | 0.5585 | 0.2986 | 0.5207 | 0.6792 | 0.6444 | 0.7826 | 0.2899 | 0.5473 | 0.2925 | 0.4777 | 0.1962 | 0.4571 | 0.4267 | 0.5278 |
| 0.8741 | 69.0 | 7038 | 1.0024 | 0.3603 | 0.6323 | 0.3335 | 0.1377 | 0.3261 | 0.4786 | 0.3917 | 0.5522 | 0.563 | 0.3197 | 0.5005 | 0.6814 | 0.6281 | 0.7819 | 0.2722 | 0.5582 | 0.2651 | 0.4721 | 0.2004 | 0.4698 | 0.4354 | 0.5327 |
| 0.8719 | 70.0 | 7140 | 0.9893 | 0.3697 | 0.6651 | 0.348 | 0.1304 | 0.3254 | 0.495 | 0.3944 | 0.5581 | 0.5679 | 0.3414 | 0.5242 | 0.6777 | 0.6364 | 0.7819 | 0.2604 | 0.5891 | 0.2951 | 0.4849 | 0.2136 | 0.4381 | 0.443 | 0.5457 |
| 0.8492 | 71.0 | 7242 | 1.0143 | 0.3706 | 0.6529 | 0.3508 | 0.1309 | 0.3338 | 0.4981 | 0.3865 | 0.5528 | 0.5679 | 0.3542 | 0.523 | 0.6888 | 0.6563 | 0.784 | 0.2454 | 0.5618 | 0.288 | 0.4804 | 0.2199 | 0.4667 | 0.4436 | 0.5463 |
| 0.874 | 72.0 | 7344 | 0.9918 | 0.3635 | 0.6614 | 0.346 | 0.1364 | 0.3137 | 0.4868 | 0.3873 | 0.5526 | 0.5642 | 0.3492 | 0.5059 | 0.6787 | 0.6283 | 0.7875 | 0.2586 | 0.5727 | 0.3168 | 0.4894 | 0.1953 | 0.4302 | 0.4187 | 0.5414 |
| 0.8537 | 73.0 | 7446 | 1.0099 | 0.3648 | 0.6577 | 0.3484 | 0.1369 | 0.3132 | 0.4894 | 0.3839 | 0.553 | 0.567 | 0.3366 | 0.5157 | 0.6864 | 0.6386 | 0.7826 | 0.2608 | 0.5727 | 0.295 | 0.476 | 0.2088 | 0.473 | 0.421 | 0.5309 |
| 0.8628 | 74.0 | 7548 | 0.9912 | 0.3652 | 0.6553 | 0.3489 | 0.1292 | 0.317 | 0.4924 | 0.3826 | 0.5525 | 0.5652 | 0.3246 | 0.484 | 0.6899 | 0.636 | 0.7826 | 0.2547 | 0.5673 | 0.3105 | 0.4749 | 0.2057 | 0.4667 | 0.419 | 0.5346 |
| 0.8435 | 75.0 | 7650 | 1.0066 | 0.3628 | 0.661 | 0.3537 | 0.1267 | 0.3396 | 0.4915 | 0.3857 | 0.5514 | 0.5687 | 0.3389 | 0.5295 | 0.6942 | 0.6159 | 0.7722 | 0.2487 | 0.5927 | 0.3045 | 0.476 | 0.2281 | 0.481 | 0.4166 | 0.5216 |
| 0.8523 | 76.0 | 7752 | 0.9969 | 0.3678 | 0.6697 | 0.3632 | 0.1314 | 0.3259 | 0.4933 | 0.3774 | 0.5485 | 0.5652 | 0.3371 | 0.508 | 0.679 | 0.6266 | 0.7819 | 0.274 | 0.6036 | 0.3101 | 0.4883 | 0.2031 | 0.4254 | 0.425 | 0.5265 |
| 0.8321 | 77.0 | 7854 | 0.9937 | 0.3696 | 0.6599 | 0.3612 | 0.1106 | 0.3178 | 0.4972 | 0.3847 | 0.5484 | 0.5602 | 0.3454 | 0.5154 | 0.6671 | 0.6321 | 0.7819 | 0.2673 | 0.5836 | 0.3088 | 0.4827 | 0.2074 | 0.4317 | 0.4321 | 0.521 |
| 0.8258 | 78.0 | 7956 | 0.9844 | 0.372 | 0.6629 | 0.3677 | 0.1203 | 0.2982 | 0.4978 | 0.3911 | 0.5545 | 0.5682 | 0.3331 | 0.5148 | 0.691 | 0.6356 | 0.7903 | 0.2807 | 0.5982 | 0.3059 | 0.4676 | 0.1962 | 0.4556 | 0.4417 | 0.5296 |
| 0.8197 | 79.0 | 8058 | 0.9836 | 0.368 | 0.6454 | 0.3624 | 0.1287 | 0.3025 | 0.483 | 0.3724 | 0.5564 | 0.5695 | 0.374 | 0.5109 | 0.679 | 0.6286 | 0.7917 | 0.2743 | 0.5818 | 0.3096 | 0.4888 | 0.1798 | 0.4444 | 0.4477 | 0.5407 |
| 0.8275 | 80.0 | 8160 | 0.9880 | 0.3736 | 0.6596 | 0.3682 | 0.1202 | 0.3169 | 0.5054 | 0.3846 | 0.5521 | 0.5686 | 0.3481 | 0.5117 | 0.6832 | 0.6328 | 0.7917 | 0.2677 | 0.6 | 0.3204 | 0.4844 | 0.21 | 0.4476 | 0.4371 | 0.5191 |
| 0.8069 | 81.0 | 8262 | 0.9647 | 0.3675 | 0.6532 | 0.3514 | 0.1037 | 0.3088 | 0.497 | 0.3846 | 0.5552 | 0.5682 | 0.3302 | 0.5314 | 0.6806 | 0.6353 | 0.7889 | 0.245 | 0.58 | 0.3242 | 0.4866 | 0.1947 | 0.4476 | 0.4381 | 0.5377 |
| 0.8197 | 82.0 | 8364 | 0.9878 | 0.3676 | 0.643 | 0.3578 | 0.1095 | 0.3083 | 0.5017 | 0.3885 | 0.5572 | 0.5705 | 0.3481 | 0.532 | 0.684 | 0.6323 | 0.7792 | 0.2451 | 0.5909 | 0.3103 | 0.4877 | 0.2097 | 0.4587 | 0.4406 | 0.5358 |
| 0.8334 | 83.0 | 8466 | 0.9788 | 0.3707 | 0.6533 | 0.3705 | 0.1078 | 0.3213 | 0.5099 | 0.3864 | 0.5551 | 0.5679 | 0.3407 | 0.5266 | 0.6804 | 0.641 | 0.7861 | 0.2614 | 0.5836 | 0.3015 | 0.4888 | 0.2104 | 0.4413 | 0.4394 | 0.5395 |
| 0.8289 | 84.0 | 8568 | 0.9885 | 0.3734 | 0.6652 | 0.3639 | 0.1058 | 0.3204 | 0.5154 | 0.3872 | 0.5556 | 0.5694 | 0.3177 | 0.5334 | 0.6848 | 0.6405 | 0.7861 | 0.2605 | 0.5764 | 0.3046 | 0.4838 | 0.2081 | 0.4476 | 0.453 | 0.5531 |
| 0.8053 | 85.0 | 8670 | 0.9810 | 0.3755 | 0.658 | 0.3742 | 0.1175 | 0.3345 | 0.5092 | 0.3919 | 0.559 | 0.5759 | 0.3565 | 0.5274 | 0.6873 | 0.6351 | 0.7819 | 0.2796 | 0.6055 | 0.3104 | 0.4939 | 0.2044 | 0.4556 | 0.4478 | 0.5426 |
| 0.8035 | 86.0 | 8772 | 0.9798 | 0.3744 | 0.6654 | 0.3603 | 0.1094 | 0.3334 | 0.514 | 0.3876 | 0.5582 | 0.579 | 0.3611 | 0.5293 | 0.6945 | 0.6344 | 0.7875 | 0.2713 | 0.6091 | 0.3032 | 0.4916 | 0.2087 | 0.4587 | 0.4543 | 0.5481 |
| 0.7939 | 87.0 | 8874 | 0.9846 | 0.3736 | 0.6551 | 0.3568 | 0.1068 | 0.3352 | 0.5084 | 0.3896 | 0.5636 | 0.5782 | 0.3494 | 0.5635 | 0.6899 | 0.6297 | 0.7937 | 0.2678 | 0.5964 | 0.2965 | 0.4849 | 0.2197 | 0.4746 | 0.4542 | 0.5414 |
| 0.8006 | 88.0 | 8976 | 0.9720 | 0.3788 | 0.669 | 0.3698 | 0.1053 | 0.3169 | 0.5167 | 0.3902 | 0.5664 | 0.5805 | 0.3739 | 0.5166 | 0.689 | 0.6322 | 0.7986 | 0.2813 | 0.6036 | 0.3 | 0.4899 | 0.2262 | 0.4651 | 0.4545 | 0.5451 |
| 0.7917 | 89.0 | 9078 | 0.9779 | 0.3807 | 0.6676 | 0.3682 | 0.1162 | 0.3355 | 0.5185 | 0.3952 | 0.5612 | 0.5725 | 0.3483 | 0.5361 | 0.683 | 0.6412 | 0.791 | 0.2766 | 0.58 | 0.3009 | 0.4927 | 0.2255 | 0.4476 | 0.4592 | 0.5512 |
| 0.7827 | 90.0 | 9180 | 0.9686 | 0.3829 | 0.6731 | 0.3804 | 0.1092 | 0.3431 | 0.5228 | 0.3963 | 0.5632 | 0.5758 | 0.3337 | 0.5297 | 0.6945 | 0.6399 | 0.7924 | 0.294 | 0.5855 | 0.3017 | 0.4933 | 0.2279 | 0.4571 | 0.4512 | 0.5506 |
| 0.7949 | 91.0 | 9282 | 0.9680 | 0.3856 | 0.6755 | 0.3858 | 0.1182 | 0.3354 | 0.5257 | 0.3984 | 0.5648 | 0.5781 | 0.3553 | 0.5147 | 0.6978 | 0.6373 | 0.7861 | 0.2978 | 0.6018 | 0.3076 | 0.4911 | 0.2311 | 0.4635 | 0.4542 | 0.5481 |
| 0.7757 | 92.0 | 9384 | 0.9670 | 0.3828 | 0.6697 | 0.3923 | 0.1173 | 0.33 | 0.523 | 0.3943 | 0.5654 | 0.5779 | 0.3357 | 0.5157 | 0.6958 | 0.6448 | 0.7896 | 0.2825 | 0.5964 | 0.3008 | 0.4849 | 0.231 | 0.473 | 0.4551 | 0.5457 |
| 0.7868 | 93.0 | 9486 | 0.9613 | 0.3822 | 0.6686 | 0.3802 | 0.1122 | 0.3308 | 0.5263 | 0.3958 | 0.5645 | 0.5802 | 0.3406 | 0.5108 | 0.702 | 0.643 | 0.7972 | 0.2767 | 0.5909 | 0.3004 | 0.4905 | 0.2393 | 0.4762 | 0.4518 | 0.5463 |
| 0.7753 | 94.0 | 9588 | 0.9541 | 0.3827 | 0.6779 | 0.3753 | 0.1163 | 0.3333 | 0.5277 | 0.3949 | 0.5635 | 0.5753 | 0.3495 | 0.4991 | 0.6969 | 0.6426 | 0.791 | 0.2727 | 0.5855 | 0.3073 | 0.4888 | 0.2429 | 0.4635 | 0.448 | 0.5475 |
| 0.7633 | 95.0 | 9690 | 0.9495 | 0.3853 | 0.6736 | 0.3662 | 0.1124 | 0.3367 | 0.527 | 0.3972 | 0.5672 | 0.5808 | 0.3759 | 0.5119 | 0.7011 | 0.6456 | 0.7924 | 0.272 | 0.6018 | 0.3069 | 0.4899 | 0.2474 | 0.4683 | 0.4544 | 0.5519 |
| 0.7649 | 96.0 | 9792 | 0.9528 | 0.3867 | 0.6783 | 0.3753 | 0.1214 | 0.3328 | 0.5277 | 0.3956 | 0.5666 | 0.5815 | 0.3645 | 0.5205 | 0.7049 | 0.6516 | 0.7993 | 0.2656 | 0.5964 | 0.3118 | 0.4966 | 0.243 | 0.4603 | 0.4614 | 0.5549 |
| 0.7626 | 97.0 | 9894 | 0.9535 | 0.3878 | 0.685 | 0.3731 | 0.1215 | 0.3361 | 0.5332 | 0.3989 | 0.5678 | 0.5795 | 0.3647 | 0.5246 | 0.6998 | 0.6435 | 0.7903 | 0.2752 | 0.5891 | 0.3148 | 0.4916 | 0.2499 | 0.4746 | 0.4556 | 0.5519 |
| 0.7667 | 98.0 | 9996 | 0.9552 | 0.3872 | 0.682 | 0.3704 | 0.121 | 0.3366 | 0.5295 | 0.3952 | 0.5667 | 0.5789 | 0.3512 | 0.528 | 0.7004 | 0.644 | 0.7944 | 0.2739 | 0.5909 | 0.3099 | 0.4894 | 0.2487 | 0.4667 | 0.4593 | 0.5531 |
| 0.7695 | 99.0 | 10098 | 0.9507 | 0.3886 | 0.6838 | 0.3786 | 0.1207 | 0.3341 | 0.5331 | 0.3974 | 0.5676 | 0.5805 | 0.3699 | 0.5364 | 0.7038 | 0.6446 | 0.7951 | 0.2783 | 0.5927 | 0.3102 | 0.4899 | 0.2532 | 0.4746 | 0.4567 | 0.55 |
| 0.7535 | 100.0 | 10200 | 0.9557 | 0.3883 | 0.6828 | 0.3786 | 0.1193 | 0.3371 | 0.5338 | 0.3968 | 0.5689 | 0.58 | 0.3679 | 0.5385 | 0.7028 | 0.6436 | 0.7944 | 0.2789 | 0.5927 | 0.312 | 0.4922 | 0.2522 | 0.4714 | 0.4546 | 0.5494 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+xpu
- Datasets 3.6.0
- Tokenizers 0.21.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
omni-devel/PySols-OCR-DETR |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"[pad]",
"[unk]",
"[cls]",
"[sep]",
"[mask]",
" ",
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"u",
"v",
"w",
"x",
"y",
"z",
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"u",
"v",
"w",
"x",
"y",
"z",
"ё",
"а",
"б",
"в",
"г",
"д",
"е",
"ж",
"з",
"и",
"й",
"к",
"л",
"м",
"н",
"о",
"п",
"р",
"с",
"т",
"у",
"ф",
"х",
"ц",
"ч",
"ш",
"щ",
"ъ",
"ы",
"ь",
"э",
"ю",
"я",
"а",
"б",
"в",
"г",
"д",
"е",
"ж",
"з",
"и",
"й",
"к",
"л",
"м",
"н",
"о",
"п",
"р",
"с",
"т",
"у",
"ф",
"х",
"ц",
"ч",
"ш",
"щ",
"ъ",
"ы",
"ь",
"э",
"ю",
"я",
"ё"
] |
toukapy/detr_finetuned_kitti_mots-bright |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_kitti_mots-bright
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6703
- Map: 0.4803
- Map 50: 0.7799
- Map 75: 0.494
- Map Small: 0.2239
- Map Medium: 0.4794
- Map Large: 0.7442
- Mar 1: 0.1586
- Mar 10: 0.5224
- Mar 100: 0.6196
- Mar Small: 0.4422
- Mar Medium: 0.6281
- Mar Large: 0.8123
- Map Car: 0.6182
- Mar 100 Car: 0.7081
- Map Pedestrian: 0.3423
- Mar 100 Pedestrian: 0.5311
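For reference, the `Map 50` and `Map 75` figures above are COCO-style average precision at fixed IoU thresholds of 0.50 and 0.75. A minimal IoU sketch follows; the `(x0, y0, x1, y1)` box format is an assumption for illustration, not taken from this card:

```python
def box_iou(a, b):
    # a, b: axis-aligned boxes as (x0, y0, x1, y1)
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# This detection passes the 0.50 threshold (counts toward Map 50)
# but fails the stricter 0.75 threshold (does not count toward Map 75):
iou = box_iou((0, 0, 10, 10), (2, 0, 12, 10))
print(round(iou, 3))  # 0.667
```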
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 60
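With `lr_scheduler_type: cosine`, the learning rate follows a half-cosine decay from the base rate toward zero over training. The sketch below assumes the warmup-free form of the `transformers` cosine schedule (`num_warmup_steps=0`), since the card lists no warmup:

```python
import math

def cosine_lr(step, total_steps, base_lr=1e-5):
    # Half-cosine decay: base_lr at step 0, approaching 0 at the final step.
    progress = step / total_steps
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 44580  # 743 steps/epoch * 60 epochs, per the training-results table
print(cosine_lr(0, total))           # 1e-05 at the start
print(cosine_lr(total // 2, total))  # roughly 5e-06 at the midpoint
```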
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Car | Mar 100 Car | Map Pedestrian | Mar 100 Pedestrian |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-------:|:-----------:|:--------------:|:------------------:|
| 0.9436 | 1.0 | 743 | 0.8850 | 0.3518 | 0.6491 | 0.3381 | 0.1025 | 0.3437 | 0.6351 | 0.1336 | 0.4063 | 0.5187 | 0.3121 | 0.5288 | 0.7428 | 0.4742 | 0.5947 | 0.2293 | 0.4428 |
| 0.8864 | 2.0 | 1486 | 0.8514 | 0.3714 | 0.6685 | 0.3602 | 0.1196 | 0.3636 | 0.6536 | 0.1384 | 0.4226 | 0.5403 | 0.3347 | 0.5536 | 0.7549 | 0.5002 | 0.6203 | 0.2426 | 0.4603 |
| 0.8577 | 3.0 | 2229 | 0.8346 | 0.3815 | 0.6809 | 0.3741 | 0.1251 | 0.3727 | 0.6703 | 0.1428 | 0.4299 | 0.5424 | 0.3401 | 0.5521 | 0.7616 | 0.5075 | 0.6206 | 0.2555 | 0.4642 |
| 0.8454 | 4.0 | 2972 | 0.8225 | 0.3872 | 0.6848 | 0.3845 | 0.1313 | 0.3804 | 0.671 | 0.142 | 0.4376 | 0.5496 | 0.3441 | 0.5631 | 0.763 | 0.5173 | 0.6291 | 0.2571 | 0.47 |
| 0.8258 | 5.0 | 3715 | 0.8161 | 0.3929 | 0.6901 | 0.3893 | 0.1383 | 0.3859 | 0.6733 | 0.1434 | 0.4402 | 0.5486 | 0.3469 | 0.561 | 0.7605 | 0.5246 | 0.6332 | 0.2613 | 0.464 |
| 0.8186 | 6.0 | 4458 | 0.8088 | 0.3963 | 0.6967 | 0.3967 | 0.1404 | 0.3882 | 0.6784 | 0.1441 | 0.4437 | 0.5516 | 0.3549 | 0.5603 | 0.7673 | 0.5232 | 0.6351 | 0.2695 | 0.4681 |
| 0.8058 | 7.0 | 5201 | 0.7959 | 0.4054 | 0.7096 | 0.4078 | 0.1448 | 0.4036 | 0.6818 | 0.1444 | 0.4565 | 0.5626 | 0.3653 | 0.5749 | 0.7706 | 0.5338 | 0.6433 | 0.2771 | 0.4819 |
| 0.7957 | 8.0 | 5944 | 0.7860 | 0.4115 | 0.7072 | 0.4119 | 0.1455 | 0.4088 | 0.6917 | 0.1469 | 0.4561 | 0.5666 | 0.3603 | 0.5817 | 0.7772 | 0.543 | 0.6507 | 0.28 | 0.4825 |
| 0.7842 | 9.0 | 6687 | 0.7766 | 0.4155 | 0.7136 | 0.4165 | 0.1508 | 0.4105 | 0.6974 | 0.1479 | 0.459 | 0.5711 | 0.3734 | 0.5819 | 0.7826 | 0.547 | 0.6543 | 0.2841 | 0.4879 |
| 0.7814 | 10.0 | 7430 | 0.7803 | 0.4128 | 0.7122 | 0.4133 | 0.1481 | 0.41 | 0.6995 | 0.1467 | 0.4578 | 0.5646 | 0.3611 | 0.577 | 0.7794 | 0.5464 | 0.6516 | 0.2793 | 0.4776 |
| 0.771 | 11.0 | 8173 | 0.7707 | 0.4145 | 0.7134 | 0.4199 | 0.1524 | 0.4116 | 0.696 | 0.1459 | 0.4624 | 0.5691 | 0.3691 | 0.5823 | 0.7773 | 0.5474 | 0.6548 | 0.2816 | 0.4834 |
| 0.7674 | 12.0 | 8916 | 0.7643 | 0.4228 | 0.7207 | 0.4258 | 0.1581 | 0.4223 | 0.6966 | 0.1477 | 0.4668 | 0.5762 | 0.3753 | 0.5912 | 0.7799 | 0.5566 | 0.6607 | 0.289 | 0.4916 |
| 0.7584 | 13.0 | 9659 | 0.7559 | 0.4246 | 0.7265 | 0.4316 | 0.1629 | 0.4213 | 0.7056 | 0.1472 | 0.4698 | 0.5772 | 0.3817 | 0.5872 | 0.7875 | 0.5574 | 0.6611 | 0.2919 | 0.4933 |
| 0.7516 | 14.0 | 10402 | 0.7555 | 0.4269 | 0.7277 | 0.4334 | 0.1615 | 0.4246 | 0.7072 | 0.1481 | 0.4716 | 0.5787 | 0.3765 | 0.5922 | 0.7879 | 0.5619 | 0.6641 | 0.2919 | 0.4933 |
| 0.7456 | 15.0 | 11145 | 0.7448 | 0.4328 | 0.7333 | 0.4432 | 0.1679 | 0.4316 | 0.7122 | 0.1507 | 0.4785 | 0.5819 | 0.3883 | 0.5916 | 0.7915 | 0.5686 | 0.6714 | 0.297 | 0.4923 |
| 0.7404 | 16.0 | 11888 | 0.7475 | 0.4336 | 0.7359 | 0.4363 | 0.1668 | 0.4304 | 0.7181 | 0.1501 | 0.4798 | 0.579 | 0.3815 | 0.5885 | 0.7936 | 0.5668 | 0.6671 | 0.3005 | 0.4909 |
| 0.7404 | 17.0 | 12631 | 0.7393 | 0.4361 | 0.7386 | 0.4473 | 0.1752 | 0.4337 | 0.7128 | 0.1509 | 0.4821 | 0.583 | 0.3997 | 0.5905 | 0.787 | 0.5745 | 0.6745 | 0.2978 | 0.4915 |
| 0.7279 | 18.0 | 13374 | 0.7331 | 0.4384 | 0.7406 | 0.4417 | 0.1771 | 0.4376 | 0.7105 | 0.1508 | 0.4833 | 0.5879 | 0.3965 | 0.5984 | 0.7926 | 0.578 | 0.6775 | 0.2988 | 0.4982 |
| 0.722 | 19.0 | 14117 | 0.7329 | 0.4407 | 0.742 | 0.4443 | 0.1834 | 0.4393 | 0.7138 | 0.1505 | 0.4866 | 0.5898 | 0.4042 | 0.5995 | 0.7887 | 0.5748 | 0.6751 | 0.3067 | 0.5044 |
| 0.7184 | 20.0 | 14860 | 0.7240 | 0.4484 | 0.748 | 0.4557 | 0.1872 | 0.4476 | 0.7186 | 0.153 | 0.4915 | 0.5943 | 0.4045 | 0.6062 | 0.7926 | 0.5846 | 0.6819 | 0.3121 | 0.5066 |
| 0.7177 | 21.0 | 15603 | 0.7266 | 0.4447 | 0.75 | 0.4517 | 0.1856 | 0.4419 | 0.7154 | 0.1515 | 0.4883 | 0.5893 | 0.4061 | 0.5988 | 0.7866 | 0.58 | 0.6777 | 0.3095 | 0.5009 |
| 0.7077 | 22.0 | 16346 | 0.7172 | 0.4496 | 0.752 | 0.4618 | 0.1861 | 0.4486 | 0.7199 | 0.1524 | 0.4921 | 0.5935 | 0.4065 | 0.6031 | 0.7946 | 0.5856 | 0.6812 | 0.3137 | 0.5057 |
| 0.7073 | 23.0 | 17089 | 0.7199 | 0.4471 | 0.7489 | 0.4598 | 0.1882 | 0.4443 | 0.7203 | 0.1518 | 0.4898 | 0.5944 | 0.4094 | 0.6039 | 0.7936 | 0.5819 | 0.6807 | 0.3123 | 0.5081 |
| 0.7043 | 24.0 | 17832 | 0.7139 | 0.4525 | 0.7506 | 0.4618 | 0.1893 | 0.4508 | 0.7258 | 0.1542 | 0.4964 | 0.5994 | 0.4122 | 0.6084 | 0.8026 | 0.589 | 0.6827 | 0.316 | 0.516 |
| 0.6988 | 25.0 | 18575 | 0.7132 | 0.4527 | 0.7543 | 0.4627 | 0.19 | 0.4498 | 0.7296 | 0.1538 | 0.4957 | 0.5967 | 0.4039 | 0.6064 | 0.805 | 0.591 | 0.6854 | 0.3144 | 0.5081 |
| 0.6957 | 26.0 | 19318 | 0.7092 | 0.4545 | 0.7561 | 0.4626 | 0.1934 | 0.4516 | 0.7304 | 0.1539 | 0.4973 | 0.5984 | 0.4111 | 0.6069 | 0.8027 | 0.5887 | 0.6838 | 0.3203 | 0.513 |
| 0.6864 | 27.0 | 20061 | 0.7065 | 0.4559 | 0.7552 | 0.4667 | 0.1973 | 0.4536 | 0.7279 | 0.1542 | 0.4987 | 0.5998 | 0.4103 | 0.6117 | 0.7982 | 0.5941 | 0.6895 | 0.3178 | 0.5101 |
| 0.684 | 28.0 | 20804 | 0.7045 | 0.458 | 0.7582 | 0.4746 | 0.1966 | 0.4572 | 0.7311 | 0.1545 | 0.4997 | 0.6022 | 0.415 | 0.6116 | 0.8053 | 0.594 | 0.6893 | 0.322 | 0.5152 |
| 0.681 | 29.0 | 21547 | 0.7040 | 0.4574 | 0.7603 | 0.4715 | 0.1971 | 0.4563 | 0.7296 | 0.1536 | 0.4988 | 0.5987 | 0.4136 | 0.6073 | 0.8004 | 0.591 | 0.6872 | 0.3239 | 0.5102 |
| 0.6769 | 30.0 | 22290 | 0.7023 | 0.4585 | 0.7613 | 0.4703 | 0.2004 | 0.4565 | 0.7335 | 0.1539 | 0.5012 | 0.6019 | 0.4214 | 0.6084 | 0.8038 | 0.5922 | 0.6902 | 0.3247 | 0.5136 |
| 0.6774 | 31.0 | 23033 | 0.6974 | 0.4607 | 0.7646 | 0.4775 | 0.2032 | 0.4594 | 0.7304 | 0.1543 | 0.502 | 0.6048 | 0.4317 | 0.6094 | 0.8032 | 0.5963 | 0.6924 | 0.3251 | 0.5173 |
| 0.6678 | 32.0 | 23776 | 0.6914 | 0.4654 | 0.7623 | 0.4756 | 0.2076 | 0.4642 | 0.7337 | 0.1559 | 0.5067 | 0.6088 | 0.4287 | 0.6175 | 0.8047 | 0.6021 | 0.6976 | 0.3287 | 0.5201 |
| 0.6733 | 33.0 | 24519 | 0.6896 | 0.4664 | 0.767 | 0.4805 | 0.212 | 0.4653 | 0.7326 | 0.1552 | 0.5086 | 0.6078 | 0.4246 | 0.6166 | 0.8067 | 0.6038 | 0.6979 | 0.329 | 0.5177 |
| 0.6656 | 34.0 | 25262 | 0.6878 | 0.4687 | 0.769 | 0.4857 | 0.2133 | 0.4682 | 0.7353 | 0.1558 | 0.5112 | 0.6078 | 0.4241 | 0.6173 | 0.8055 | 0.6048 | 0.6975 | 0.3326 | 0.5181 |
| 0.6599 | 35.0 | 26005 | 0.6848 | 0.4716 | 0.7718 | 0.492 | 0.2121 | 0.4717 | 0.7364 | 0.156 | 0.5135 | 0.6121 | 0.4292 | 0.6218 | 0.8082 | 0.6081 | 0.7002 | 0.3351 | 0.524 |
| 0.6646 | 36.0 | 26748 | 0.6857 | 0.4709 | 0.7721 | 0.487 | 0.2129 | 0.4711 | 0.7369 | 0.1565 | 0.5137 | 0.6109 | 0.4316 | 0.6184 | 0.8092 | 0.6073 | 0.7001 | 0.3344 | 0.5217 |
| 0.6568 | 37.0 | 27491 | 0.6867 | 0.4707 | 0.7729 | 0.4843 | 0.2147 | 0.4694 | 0.7393 | 0.1564 | 0.5117 | 0.6102 | 0.4252 | 0.6195 | 0.8094 | 0.6065 | 0.6985 | 0.3349 | 0.5219 |
| 0.6493 | 38.0 | 28234 | 0.6830 | 0.4713 | 0.771 | 0.4835 | 0.2121 | 0.4734 | 0.7357 | 0.1573 | 0.5131 | 0.6118 | 0.4277 | 0.622 | 0.8083 | 0.6081 | 0.7002 | 0.3345 | 0.5234 |
| 0.6567 | 39.0 | 28977 | 0.6813 | 0.4724 | 0.771 | 0.4841 | 0.2117 | 0.4729 | 0.7396 | 0.1573 | 0.515 | 0.6135 | 0.4351 | 0.6213 | 0.8097 | 0.6098 | 0.701 | 0.3351 | 0.526 |
| 0.6532 | 40.0 | 29720 | 0.6797 | 0.4743 | 0.7751 | 0.4848 | 0.2137 | 0.4761 | 0.7369 | 0.1573 | 0.516 | 0.6149 | 0.4354 | 0.6243 | 0.8077 | 0.6101 | 0.7019 | 0.3384 | 0.5279 |
| 0.6475 | 41.0 | 30463 | 0.6769 | 0.4755 | 0.773 | 0.4903 | 0.219 | 0.4742 | 0.7397 | 0.1572 | 0.5193 | 0.6169 | 0.4418 | 0.6248 | 0.8088 | 0.6125 | 0.7044 | 0.3384 | 0.5295 |
| 0.6432 | 42.0 | 31206 | 0.6779 | 0.4762 | 0.7757 | 0.4926 | 0.2171 | 0.4777 | 0.739 | 0.158 | 0.5184 | 0.6168 | 0.4384 | 0.6262 | 0.8079 | 0.6122 | 0.703 | 0.3403 | 0.5305 |
| 0.6482 | 43.0 | 31949 | 0.6762 | 0.4759 | 0.7756 | 0.4897 | 0.218 | 0.4755 | 0.74 | 0.1579 | 0.5169 | 0.6141 | 0.4329 | 0.624 | 0.8071 | 0.6132 | 0.7042 | 0.3385 | 0.524 |
| 0.6427 | 44.0 | 32692 | 0.6744 | 0.4771 | 0.776 | 0.49 | 0.2167 | 0.4766 | 0.7445 | 0.1591 | 0.5195 | 0.6159 | 0.4333 | 0.6258 | 0.8112 | 0.616 | 0.7064 | 0.3382 | 0.5254 |
| 0.6409 | 45.0 | 33435 | 0.6758 | 0.4767 | 0.777 | 0.4882 | 0.2189 | 0.4762 | 0.7426 | 0.1581 | 0.5181 | 0.6155 | 0.437 | 0.6239 | 0.8099 | 0.6141 | 0.7046 | 0.3393 | 0.5264 |
| 0.6361 | 46.0 | 34178 | 0.6748 | 0.4758 | 0.7762 | 0.4888 | 0.2178 | 0.4744 | 0.7448 | 0.1577 | 0.5177 | 0.6139 | 0.4299 | 0.6234 | 0.8116 | 0.6135 | 0.704 | 0.338 | 0.5238 |
| 0.6383 | 47.0 | 34921 | 0.6757 | 0.475 | 0.7788 | 0.4883 | 0.217 | 0.4751 | 0.7424 | 0.158 | 0.5184 | 0.6139 | 0.4278 | 0.6244 | 0.8116 | 0.6115 | 0.7031 | 0.3384 | 0.5247 |
| 0.6421 | 48.0 | 35664 | 0.6717 | 0.4793 | 0.7796 | 0.4909 | 0.2217 | 0.4788 | 0.7447 | 0.1589 | 0.5208 | 0.6186 | 0.4413 | 0.627 | 0.8114 | 0.6161 | 0.7071 | 0.3426 | 0.5301 |
| 0.6357 | 49.0 | 36407 | 0.6712 | 0.4789 | 0.7787 | 0.4916 | 0.2215 | 0.4789 | 0.7425 | 0.1592 | 0.5219 | 0.6188 | 0.4403 | 0.6279 | 0.8114 | 0.6161 | 0.7069 | 0.3418 | 0.5308 |
| 0.6322 | 50.0 | 37150 | 0.6715 | 0.4792 | 0.7795 | 0.4922 | 0.2219 | 0.4792 | 0.7436 | 0.1587 | 0.5223 | 0.6188 | 0.4368 | 0.629 | 0.8124 | 0.6174 | 0.7074 | 0.3409 | 0.5302 |
| 0.6324 | 51.0 | 37893 | 0.6729 | 0.478 | 0.7787 | 0.4906 | 0.2206 | 0.4772 | 0.7447 | 0.1585 | 0.5202 | 0.6171 | 0.4379 | 0.6254 | 0.8126 | 0.6153 | 0.7048 | 0.3407 | 0.5293 |
| 0.6402 | 52.0 | 38636 | 0.6707 | 0.4806 | 0.7792 | 0.4978 | 0.2222 | 0.4795 | 0.747 | 0.1592 | 0.5221 | 0.6196 | 0.4419 | 0.6278 | 0.8135 | 0.6174 | 0.7076 | 0.3438 | 0.5317 |
| 0.6328 | 53.0 | 39379 | 0.6716 | 0.4796 | 0.7794 | 0.4964 | 0.2231 | 0.4789 | 0.7445 | 0.1587 | 0.5212 | 0.6184 | 0.4405 | 0.6269 | 0.812 | 0.6173 | 0.707 | 0.342 | 0.5299 |
| 0.6349 | 54.0 | 40122 | 0.6715 | 0.4795 | 0.7796 | 0.4941 | 0.223 | 0.4782 | 0.7453 | 0.1587 | 0.5216 | 0.6186 | 0.4399 | 0.6268 | 0.8135 | 0.6165 | 0.7066 | 0.3425 | 0.5305 |
| 0.6293 | 55.0 | 40865 | 0.6705 | 0.4798 | 0.779 | 0.4921 | 0.2232 | 0.479 | 0.7445 | 0.159 | 0.5222 | 0.6192 | 0.4408 | 0.628 | 0.8123 | 0.6177 | 0.7073 | 0.3419 | 0.5311 |
| 0.6324 | 56.0 | 41608 | 0.6705 | 0.4804 | 0.78 | 0.4939 | 0.2238 | 0.48 | 0.7446 | 0.1588 | 0.5222 | 0.6198 | 0.4418 | 0.6285 | 0.8127 | 0.618 | 0.7079 | 0.3428 | 0.5318 |
| 0.6293 | 57.0 | 42351 | 0.6702 | 0.4803 | 0.7796 | 0.4947 | 0.2235 | 0.4792 | 0.7452 | 0.159 | 0.5228 | 0.6197 | 0.4415 | 0.6283 | 0.813 | 0.6178 | 0.708 | 0.3428 | 0.5314 |
| 0.6353 | 58.0 | 43094 | 0.6701 | 0.4804 | 0.7798 | 0.4943 | 0.224 | 0.4795 | 0.7444 | 0.1588 | 0.5223 | 0.6198 | 0.4422 | 0.6284 | 0.8128 | 0.6183 | 0.7082 | 0.3424 | 0.5315 |
| 0.6323 | 59.0 | 43837 | 0.6703 | 0.4803 | 0.78 | 0.4935 | 0.2238 | 0.4794 | 0.7443 | 0.1586 | 0.5223 | 0.6196 | 0.4419 | 0.6282 | 0.8124 | 0.6183 | 0.7082 | 0.3423 | 0.5309 |
| 0.6384 | 60.0 | 44580 | 0.6703 | 0.4803 | 0.7799 | 0.494 | 0.2239 | 0.4794 | 0.7442 | 0.1586 | 0.5224 | 0.6196 | 0.4422 | 0.6281 | 0.8123 | 0.6182 | 0.7081 | 0.3423 | 0.5311 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
| [
"car",
"pedestrian"
] |
toukapy/detr_domain_shift |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_domain_shift
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4246
- Map: 0.8353
- Map 50: 0.9529
- Map 75: 0.9029
- Map Small: 0.0159
- Map Medium: 0.6742
- Map Large: 0.8707
- Mar 1: 0.7106
- Mar 10: 0.8977
- Mar 100: 0.9152
- Mar Small: 0.2984
- Mar Medium: 0.8276
- Mar Large: 0.9365
- Map Garbage bag: 0.8185
- Mar 100 Garbage bag: 0.9059
- Map Paper bag: 0.8446
- Mar 100 Paper bag: 0.9239
- Map Plastic bag: 0.8429
- Mar 100 Plastic bag: 0.9159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 50
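The AdamW settings above (betas=(0.9,0.999), epsilon=1e-08) drive the usual bias-corrected moment updates. A single-parameter sketch of one AdamW step; the weight decay of 0 is an assumption, since the card does not state it:

```python
import math

def adamw_step(w, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.0):
    # Exponential moving averages of the gradient and its square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction for the zero-initialized moments (t is 1-based).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay, then the Adam update.
    w = w - lr * weight_decay * w
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = adamw_step(w=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```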
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Garbage bag | Mar 100 Garbage bag | Map Paper bag | Mar 100 Paper bag | Map Plastic bag | Mar 100 Plastic bag |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:---------------:|:-------------------:|:-------------:|:-----------------:|:---------------:|:-------------------:|
| 0.9775 | 1.0 | 1557 | 0.8739 | 0.2419 | 0.3276 | 0.2719 | 0.0003 | 0.1808 | 0.2566 | 0.4246 | 0.773 | 0.8186 | 0.0271 | 0.6849 | 0.8501 | 0.1999 | 0.808 | 0.3045 | 0.8292 | 0.2213 | 0.8187 |
| 0.919 | 2.0 | 3114 | 0.8527 | 0.4465 | 0.6037 | 0.5018 | 0.0043 | 0.2877 | 0.4787 | 0.5098 | 0.7627 | 0.8137 | 0.0123 | 0.6656 | 0.8474 | 0.5496 | 0.8075 | 0.4518 | 0.8235 | 0.3382 | 0.8101 |
| 0.85 | 3.0 | 4671 | 0.7649 | 0.5508 | 0.7268 | 0.6183 | 0.0013 | 0.3684 | 0.5898 | 0.5619 | 0.7944 | 0.8335 | 0.0652 | 0.7147 | 0.8613 | 0.5635 | 0.8206 | 0.5877 | 0.8471 | 0.5012 | 0.8327 |
| 0.844 | 4.0 | 6228 | 0.7562 | 0.605 | 0.7924 | 0.6857 | 0.0023 | 0.4402 | 0.6405 | 0.5732 | 0.7901 | 0.8296 | 0.0579 | 0.7174 | 0.856 | 0.6113 | 0.8213 | 0.6125 | 0.8364 | 0.5912 | 0.8313 |
| 0.8332 | 5.0 | 7785 | 0.7811 | 0.5982 | 0.8004 | 0.6799 | 0.007 | 0.4141 | 0.6365 | 0.5683 | 0.7801 | 0.8214 | 0.0194 | 0.6914 | 0.852 | 0.5785 | 0.8091 | 0.6326 | 0.8361 | 0.5834 | 0.8188 |
| 0.9303 | 6.0 | 9342 | 0.9325 | 0.5179 | 0.7395 | 0.5906 | 0.0001 | 0.3665 | 0.5506 | 0.5192 | 0.7288 | 0.7756 | 0.0097 | 0.6358 | 0.8074 | 0.52 | 0.7664 | 0.5744 | 0.7825 | 0.4593 | 0.7778 |
| 0.8773 | 7.0 | 10899 | 0.8869 | 0.5561 | 0.7748 | 0.626 | 0.0 | 0.374 | 0.5948 | 0.547 | 0.748 | 0.7895 | 0.0 | 0.6483 | 0.8223 | 0.5435 | 0.7752 | 0.5909 | 0.8132 | 0.5339 | 0.7801 |
| 1.011 | 8.0 | 12456 | 0.8729 | 0.5604 | 0.7945 | 0.6437 | 0.0013 | 0.3991 | 0.5954 | 0.538 | 0.7475 | 0.7971 | 0.0218 | 0.6489 | 0.831 | 0.5251 | 0.7729 | 0.6152 | 0.8222 | 0.541 | 0.7962 |
| 1.0019 | 9.0 | 14013 | 1.2187 | 0.313 | 0.5258 | 0.3396 | 0.0 | 0.2026 | 0.337 | 0.3859 | 0.6118 | 0.6892 | 0.0 | 0.5141 | 0.7282 | 0.4395 | 0.7076 | 0.2496 | 0.6916 | 0.25 | 0.6685 |
| 0.9286 | 10.0 | 15570 | 0.8059 | 0.5848 | 0.7852 | 0.6584 | 0.0 | 0.3874 | 0.6257 | 0.5595 | 0.7725 | 0.8196 | 0.0179 | 0.6689 | 0.8539 | 0.5762 | 0.8015 | 0.6239 | 0.8429 | 0.5544 | 0.8143 |
| 0.8782 | 11.0 | 17127 | 0.8458 | 0.5587 | 0.7551 | 0.627 | 0.01 | 0.377 | 0.596 | 0.5551 | 0.7678 | 0.812 | 0.0516 | 0.6635 | 0.8453 | 0.5726 | 0.7966 | 0.6164 | 0.8355 | 0.4872 | 0.804 |
| 0.8493 | 12.0 | 18684 | 0.7872 | 0.5899 | 0.7787 | 0.6622 | 0.0012 | 0.3803 | 0.6329 | 0.57 | 0.7832 | 0.8287 | 0.019 | 0.6719 | 0.8643 | 0.5813 | 0.8215 | 0.6336 | 0.8412 | 0.5548 | 0.8235 |
| 0.793 | 13.0 | 20241 | 0.7784 | 0.6112 | 0.8128 | 0.6924 | 0.0023 | 0.4275 | 0.6502 | 0.5785 | 0.7783 | 0.8249 | 0.0434 | 0.6872 | 0.8566 | 0.6157 | 0.8253 | 0.6326 | 0.8344 | 0.5852 | 0.8151 |
| 0.7712 | 14.0 | 21798 | 0.7881 | 0.6089 | 0.8162 | 0.688 | 0.0001 | 0.4211 | 0.6497 | 0.5772 | 0.7785 | 0.8184 | 0.039 | 0.6874 | 0.8491 | 0.5984 | 0.803 | 0.6429 | 0.84 | 0.5853 | 0.8123 |
| 0.7892 | 15.0 | 23355 | 0.7204 | 0.6548 | 0.8515 | 0.742 | 0.0007 | 0.486 | 0.6916 | 0.6018 | 0.7991 | 0.8376 | 0.0359 | 0.7139 | 0.8668 | 0.653 | 0.8377 | 0.6716 | 0.8447 | 0.6399 | 0.8302 |
| 0.7735 | 16.0 | 24912 | 0.7768 | 0.604 | 0.7908 | 0.6807 | 0.0152 | 0.4142 | 0.6445 | 0.5803 | 0.7794 | 0.8258 | 0.043 | 0.7167 | 0.8522 | 0.5854 | 0.8359 | 0.6421 | 0.8307 | 0.5844 | 0.8106 |
| 0.8052 | 17.0 | 26469 | 0.7466 | 0.6389 | 0.8417 | 0.7325 | 0.0001 | 0.4658 | 0.6767 | 0.5883 | 0.7847 | 0.8231 | 0.0245 | 0.7291 | 0.8469 | 0.6493 | 0.8262 | 0.6572 | 0.8329 | 0.6101 | 0.8101 |
| 0.7366 | 18.0 | 28026 | 0.7490 | 0.6372 | 0.8435 | 0.7198 | 0.0069 | 0.4492 | 0.6783 | 0.5883 | 0.7881 | 0.8287 | 0.0271 | 0.7084 | 0.8572 | 0.6427 | 0.8307 | 0.6486 | 0.8376 | 0.6202 | 0.8178 |
| 0.7316 | 19.0 | 29583 | 0.6964 | 0.6697 | 0.869 | 0.7578 | 0.0028 | 0.5095 | 0.704 | 0.6068 | 0.8029 | 0.839 | 0.0705 | 0.7283 | 0.8651 | 0.6529 | 0.8277 | 0.6915 | 0.8538 | 0.6646 | 0.8353 |
| 0.7243 | 20.0 | 31140 | 0.7165 | 0.6616 | 0.8605 | 0.753 | 0.0033 | 0.4782 | 0.7003 | 0.6046 | 0.7972 | 0.8299 | 0.0581 | 0.7148 | 0.8574 | 0.6366 | 0.8126 | 0.6791 | 0.8452 | 0.6692 | 0.832 |
| 0.7189 | 21.0 | 32697 | 0.6921 | 0.6748 | 0.8694 | 0.7621 | 0.001 | 0.5019 | 0.7121 | 0.6105 | 0.8042 | 0.8382 | 0.0788 | 0.7253 | 0.8649 | 0.6422 | 0.8241 | 0.7087 | 0.8581 | 0.6735 | 0.8324 |
| 0.6802 | 22.0 | 34254 | 0.6381 | 0.7091 | 0.886 | 0.7875 | 0.0007 | 0.5366 | 0.7465 | 0.6318 | 0.8291 | 0.8618 | 0.0716 | 0.752 | 0.8883 | 0.6953 | 0.8572 | 0.7253 | 0.8718 | 0.7067 | 0.8565 |
| 0.6676 | 23.0 | 35811 | 0.6252 | 0.7186 | 0.8865 | 0.7994 | 0.0034 | 0.5423 | 0.7573 | 0.6391 | 0.8373 | 0.8649 | 0.0665 | 0.7597 | 0.8908 | 0.7021 | 0.8567 | 0.7358 | 0.8752 | 0.7181 | 0.8628 |
| 0.6624 | 24.0 | 37368 | 0.6432 | 0.7117 | 0.8986 | 0.8041 | 0.0019 | 0.5384 | 0.7488 | 0.63 | 0.8205 | 0.8535 | 0.0974 | 0.7492 | 0.8786 | 0.6871 | 0.8394 | 0.7287 | 0.8639 | 0.7194 | 0.8572 |
| 0.6356 | 25.0 | 38925 | 0.6101 | 0.7284 | 0.905 | 0.8178 | 0.004 | 0.5444 | 0.7676 | 0.6448 | 0.8322 | 0.863 | 0.1108 | 0.7616 | 0.8874 | 0.7125 | 0.8556 | 0.7417 | 0.8709 | 0.7311 | 0.8626 |
| 0.6319 | 26.0 | 40482 | 0.6330 | 0.7191 | 0.9005 | 0.8113 | 0.0085 | 0.557 | 0.754 | 0.6392 | 0.8262 | 0.8513 | 0.106 | 0.7566 | 0.8745 | 0.702 | 0.849 | 0.7371 | 0.8558 | 0.7182 | 0.8489 |
| 0.6069 | 27.0 | 42039 | 0.5855 | 0.7461 | 0.9072 | 0.824 | 0.0082 | 0.5615 | 0.7852 | 0.6512 | 0.845 | 0.8742 | 0.2484 | 0.7625 | 0.9003 | 0.7159 | 0.8596 | 0.7615 | 0.884 | 0.7609 | 0.8791 |
| 0.5898 | 28.0 | 43596 | 0.5582 | 0.7581 | 0.9158 | 0.8405 | 0.0091 | 0.5844 | 0.7957 | 0.6649 | 0.8552 | 0.878 | 0.2474 | 0.785 | 0.9008 | 0.7384 | 0.8677 | 0.7742 | 0.8882 | 0.7616 | 0.8782 |
| 0.5777 | 29.0 | 45153 | 0.5412 | 0.7706 | 0.9226 | 0.8492 | 0.0029 | 0.6117 | 0.8054 | 0.6718 | 0.8591 | 0.8878 | 0.2147 | 0.7882 | 0.9118 | 0.7542 | 0.8826 | 0.7799 | 0.8953 | 0.7777 | 0.8854 |
| 0.5461 | 30.0 | 46710 | 0.5424 | 0.7714 | 0.9259 | 0.8563 | 0.0024 | 0.6033 | 0.8081 | 0.6708 | 0.8571 | 0.8829 | 0.206 | 0.7823 | 0.9069 | 0.7602 | 0.8784 | 0.7814 | 0.8906 | 0.7726 | 0.8798 |
| 0.5392 | 31.0 | 48267 | 0.5274 | 0.7773 | 0.9219 | 0.8591 | 0.0026 | 0.5986 | 0.8159 | 0.6752 | 0.8648 | 0.8888 | 0.2832 | 0.795 | 0.9111 | 0.7549 | 0.8835 | 0.7902 | 0.894 | 0.7867 | 0.8889 |
| 0.5367 | 32.0 | 49824 | 0.5181 | 0.7863 | 0.9312 | 0.8654 | 0.0038 | 0.6234 | 0.8216 | 0.678 | 0.8663 | 0.8882 | 0.2332 | 0.7956 | 0.9109 | 0.7689 | 0.8805 | 0.7985 | 0.8962 | 0.7915 | 0.8879 |
| 0.5187 | 33.0 | 51381 | 0.5079 | 0.7853 | 0.934 | 0.8672 | 0.0187 | 0.6245 | 0.8206 | 0.6814 | 0.8681 | 0.8917 | 0.2412 | 0.7984 | 0.9143 | 0.7692 | 0.89 | 0.7937 | 0.8966 | 0.7931 | 0.8884 |
| 0.5102 | 34.0 | 52938 | 0.4861 | 0.8049 | 0.9381 | 0.8761 | 0.0318 | 0.6481 | 0.8388 | 0.6912 | 0.8811 | 0.9047 | 0.2659 | 0.818 | 0.9262 | 0.7893 | 0.8991 | 0.8191 | 0.914 | 0.8062 | 0.901 |
| 0.4868 | 35.0 | 54495 | 0.4753 | 0.8046 | 0.9392 | 0.8827 | 0.0084 | 0.6389 | 0.8413 | 0.6933 | 0.8796 | 0.9015 | 0.2495 | 0.8056 | 0.9249 | 0.7884 | 0.8981 | 0.811 | 0.9036 | 0.8143 | 0.903 |
| 0.4821 | 36.0 | 56052 | 0.4714 | 0.8096 | 0.9427 | 0.8861 | 0.0225 | 0.6485 | 0.8448 | 0.6955 | 0.8828 | 0.9042 | 0.278 | 0.8159 | 0.9258 | 0.7923 | 0.8981 | 0.8221 | 0.9121 | 0.8143 | 0.9024 |
| 0.4714 | 37.0 | 57609 | 0.4447 | 0.8232 | 0.9433 | 0.8911 | 0.0219 | 0.6619 | 0.8599 | 0.7032 | 0.8935 | 0.9126 | 0.269 | 0.8235 | 0.9344 | 0.8072 | 0.9051 | 0.8295 | 0.9173 | 0.8329 | 0.9154 |
| 0.4653 | 38.0 | 59166 | 0.4554 | 0.819 | 0.946 | 0.8912 | 0.0178 | 0.6567 | 0.8548 | 0.7021 | 0.8873 | 0.9059 | 0.3004 | 0.8138 | 0.9281 | 0.8025 | 0.899 | 0.8281 | 0.9122 | 0.8264 | 0.9065 |
| 0.4494 | 39.0 | 60723 | 0.4310 | 0.8308 | 0.9451 | 0.8939 | 0.0243 | 0.6685 | 0.8663 | 0.7091 | 0.8985 | 0.918 | 0.3242 | 0.8358 | 0.9381 | 0.8127 | 0.9114 | 0.8401 | 0.9237 | 0.8397 | 0.919 |
| 0.4389 | 40.0 | 62280 | 0.4289 | 0.8336 | 0.9489 | 0.8967 | 0.0177 | 0.6757 | 0.8684 | 0.7102 | 0.8976 | 0.9165 | 0.2958 | 0.8329 | 0.9369 | 0.8167 | 0.9102 | 0.8425 | 0.9219 | 0.8416 | 0.9174 |
| 0.4376 | 41.0 | 63837 | 0.4245 | 0.8365 | 0.9498 | 0.9011 | 0.0179 | 0.6795 | 0.8709 | 0.7122 | 0.8985 | 0.9182 | 0.3335 | 0.8382 | 0.9377 | 0.8167 | 0.9067 | 0.8477 | 0.9272 | 0.845 | 0.9205 |
| 0.4252 | 42.0 | 65394 | 0.4244 | 0.8368 | 0.9511 | 0.8998 | 0.0175 | 0.6802 | 0.8713 | 0.7106 | 0.8988 | 0.9166 | 0.3604 | 0.8331 | 0.9369 | 0.8183 | 0.9089 | 0.8467 | 0.9229 | 0.8456 | 0.9179 |
| 0.4215 | 43.0 | 66951 | 0.4281 | 0.8342 | 0.9517 | 0.8987 | 0.0297 | 0.6731 | 0.8696 | 0.7088 | 0.8985 | 0.9159 | 0.3176 | 0.83 | 0.9369 | 0.8177 | 0.9092 | 0.8431 | 0.9217 | 0.8418 | 0.9169 |
| 0.4279 | 44.0 | 68508 | 0.4235 | 0.8375 | 0.9527 | 0.9012 | 0.0191 | 0.6777 | 0.873 | 0.7116 | 0.8984 | 0.9164 | 0.32 | 0.8288 | 0.9376 | 0.8191 | 0.9094 | 0.8479 | 0.9226 | 0.8453 | 0.9172 |
| 0.4133 | 45.0 | 70065 | 0.4220 | 0.837 | 0.9525 | 0.9014 | 0.0168 | 0.6764 | 0.8719 | 0.7117 | 0.9 | 0.9175 | 0.3059 | 0.8353 | 0.9377 | 0.8168 | 0.9081 | 0.8488 | 0.9244 | 0.8455 | 0.9199 |
| 0.4085 | 46.0 | 71622 | 0.4231 | 0.837 | 0.9538 | 0.9026 | 0.0158 | 0.679 | 0.8721 | 0.7118 | 0.8986 | 0.9157 | 0.2905 | 0.8276 | 0.9371 | 0.8196 | 0.908 | 0.8463 | 0.9227 | 0.8451 | 0.9165 |
| 0.4138 | 47.0 | 73179 | 0.4269 | 0.8335 | 0.9529 | 0.9019 | 0.014 | 0.6735 | 0.8689 | 0.7097 | 0.8965 | 0.9141 | 0.2954 | 0.8246 | 0.9358 | 0.8145 | 0.9048 | 0.8436 | 0.9226 | 0.8424 | 0.9149 |
| 0.4147 | 48.0 | 74736 | 0.4246 | 0.8353 | 0.9529 | 0.9023 | 0.0158 | 0.6736 | 0.8709 | 0.7108 | 0.8979 | 0.9155 | 0.2982 | 0.8266 | 0.937 | 0.8182 | 0.9062 | 0.8447 | 0.9242 | 0.843 | 0.916 |
| 0.4145 | 49.0 | 76293 | 0.4237 | 0.8362 | 0.9531 | 0.9027 | 0.0159 | 0.6751 | 0.8715 | 0.7108 | 0.8984 | 0.916 | 0.2984 | 0.8286 | 0.9373 | 0.8192 | 0.9067 | 0.8454 | 0.9248 | 0.8439 | 0.9167 |
| 0.406 | 50.0 | 77850 | 0.4246 | 0.8353 | 0.9529 | 0.9029 | 0.0159 | 0.6742 | 0.8707 | 0.7106 | 0.8977 | 0.9152 | 0.2984 | 0.8276 | 0.9365 | 0.8185 | 0.9059 | 0.8446 | 0.9239 | 0.8429 | 0.9159 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu118
- Datasets 3.3.2
- Tokenizers 0.21.0
| [
"garbage bag",
"paper bag",
"plastic bag"
] |
toukapy/detr_finetuned_kitti_mots-noaug-good-1 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_kitti_mots-noaug-good-1
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3675
- Map: 0.2538
- Map 50: 0.5322
- Map 75: 0.2143
- Map Small: 0.0306
- Map Medium: 0.2748
- Map Large: 0.5189
- Mar 1: 0.1217
- Mar 10: 0.3156
- Mar 100: 0.392
- Mar Small: 0.1285
- Mar Medium: 0.4301
- Mar Large: 0.6443
- Map Pedestrian: 0.1681
- Mar 100 Pedestrian: 0.3351
- Map Ignore: -1.0
- Mar 100 Ignore: -1.0
- Map Car: 0.3395
- Mar 100 Car: 0.4489
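The `-1.0` values for the `ignore` class are the COCO-evaluator sentinel for a class with no ground-truth instances in the evaluation split, not real scores. A small sketch of filtering such sentinels before averaging; the dict layout is illustrative:

```python
def valid_class_metrics(per_class):
    # Drop the -1.0 sentinel used when a class has no ground truth.
    return {name: value for name, value in per_class.items() if value >= 0.0}

per_class_map = {"pedestrian": 0.1681, "ignore": -1.0, "car": 0.3395}
valid = valid_class_metrics(per_class_map)

# Averaging only the valid classes reproduces the reported overall Map.
overall = sum(valid.values()) / len(valid)
print(round(overall, 4))  # 0.2538
```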
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
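From the training-results table, each epoch covers 625 optimizer steps; with `train_batch_size: 8` that implies roughly 5,000 training images, and 30 epochs give the 18,750 total steps seen in the final row. A quick sanity check:

```python
steps_per_epoch = 625   # the Step column advances by 625 per epoch
batch_size = 8
epochs = 30

# Approximate training-set size (the last batch of an epoch may be partial).
train_images = steps_per_epoch * batch_size
total_steps = steps_per_epoch * epochs

print(train_images)  # 5000
print(total_steps)   # 18750
```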
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Pedestrian | Mar 100 Pedestrian | Map Ignore | Mar 100 Ignore | Map Car | Mar 100 Car |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:--------------:|:------------------:|:----------:|:--------------:|:-------:|:-----------:|
| 1.177 | 1.0 | 625 | 1.4322 | 0.1364 | 0.3647 | 0.0647 | 0.0114 | 0.1339 | 0.3615 | 0.0821 | 0.2247 | 0.3215 | 0.0989 | 0.3222 | 0.607 | 0.0607 | 0.2792 | -1.0 | -1.0 | 0.212 | 0.3638 |
| 1.0939 | 2.0 | 1250 | 1.4303 | 0.1575 | 0.3775 | 0.1093 | 0.0207 | 0.1468 | 0.3678 | 0.0871 | 0.2277 | 0.3194 | 0.1119 | 0.34 | 0.5426 | 0.0689 | 0.2749 | -1.0 | -1.0 | 0.2462 | 0.3639 |
| 1.0674 | 3.0 | 1875 | 1.3514 | 0.1811 | 0.4065 | 0.1345 | 0.017 | 0.1875 | 0.4097 | 0.0991 | 0.2603 | 0.3511 | 0.1063 | 0.3717 | 0.62 | 0.0758 | 0.2779 | -1.0 | -1.0 | 0.2863 | 0.4244 |
| 0.9822 | 4.0 | 2500 | 1.3675 | 0.1708 | 0.4027 | 0.1232 | 0.0166 | 0.1821 | 0.3745 | 0.0962 | 0.2523 | 0.346 | 0.1167 | 0.3623 | 0.6005 | 0.0775 | 0.2844 | -1.0 | -1.0 | 0.2642 | 0.4077 |
| 0.9495 | 5.0 | 3125 | 1.4739 | 0.1647 | 0.397 | 0.0956 | 0.0217 | 0.1599 | 0.4014 | 0.0921 | 0.2457 | 0.3271 | 0.0961 | 0.3422 | 0.5966 | 0.0769 | 0.2502 | -1.0 | -1.0 | 0.2525 | 0.4041 |
| 0.9387 | 6.0 | 3750 | 1.3182 | 0.194 | 0.4233 | 0.1596 | 0.0171 | 0.213 | 0.3984 | 0.1086 | 0.2704 | 0.3659 | 0.1105 | 0.4008 | 0.6124 | 0.0902 | 0.3083 | -1.0 | -1.0 | 0.2979 | 0.4235 |
| 0.9332 | 7.0 | 4375 | 1.3399 | 0.2021 | 0.4429 | 0.1508 | 0.0154 | 0.2055 | 0.4387 | 0.1071 | 0.2667 | 0.3501 | 0.111 | 0.3784 | 0.5978 | 0.1076 | 0.2919 | -1.0 | -1.0 | 0.2966 | 0.4083 |
| 0.8834 | 8.0 | 5000 | 1.3403 | 0.21 | 0.4456 | 0.1741 | 0.0216 | 0.2185 | 0.4446 | 0.1094 | 0.2836 | 0.3747 | 0.117 | 0.4075 | 0.6311 | 0.1102 | 0.3106 | -1.0 | -1.0 | 0.3099 | 0.4388 |
| 0.8489 | 9.0 | 5625 | 1.3444 | 0.2083 | 0.4493 | 0.1755 | 0.0208 | 0.2189 | 0.4433 | 0.1068 | 0.2774 | 0.3699 | 0.1262 | 0.3958 | 0.6256 | 0.1028 | 0.2967 | -1.0 | -1.0 | 0.3139 | 0.4432 |
| 0.8368 | 10.0 | 6250 | 1.3675 | 0.2077 | 0.4556 | 0.165 | 0.0195 | 0.2149 | 0.4571 | 0.1063 | 0.2752 | 0.365 | 0.1145 | 0.3974 | 0.6167 | 0.1121 | 0.3125 | -1.0 | -1.0 | 0.3032 | 0.4175 |
| 0.8254 | 11.0 | 6875 | 1.3734 | 0.2043 | 0.4469 | 0.166 | 0.0163 | 0.2085 | 0.4471 | 0.1063 | 0.2736 | 0.3587 | 0.1119 | 0.3831 | 0.6211 | 0.12 | 0.3111 | -1.0 | -1.0 | 0.2887 | 0.4063 |
| 0.7777 | 12.0 | 7500 | 1.3421 | 0.2221 | 0.4765 | 0.1863 | 0.0251 | 0.2372 | 0.4422 | 0.112 | 0.2868 | 0.3661 | 0.114 | 0.4123 | 0.5855 | 0.1198 | 0.3029 | -1.0 | -1.0 | 0.3243 | 0.4294 |
| 0.7543 | 13.0 | 8125 | 1.3643 | 0.2068 | 0.4643 | 0.1552 | 0.0208 | 0.2189 | 0.4503 | 0.1048 | 0.277 | 0.364 | 0.1266 | 0.3929 | 0.6074 | 0.1117 | 0.2962 | -1.0 | -1.0 | 0.3019 | 0.4319 |
| 0.7411 | 14.0 | 8750 | 1.3261 | 0.2298 | 0.4905 | 0.1943 | 0.0264 | 0.2453 | 0.4729 | 0.1139 | 0.296 | 0.3803 | 0.1241 | 0.4168 | 0.6243 | 0.1263 | 0.3121 | -1.0 | -1.0 | 0.3332 | 0.4484 |
| 0.7089 | 15.0 | 9375 | 1.3016 | 0.2356 | 0.5073 | 0.1925 | 0.0311 | 0.2456 | 0.493 | 0.1143 | 0.3011 | 0.3897 | 0.134 | 0.4233 | 0.6404 | 0.1475 | 0.3352 | -1.0 | -1.0 | 0.3238 | 0.4441 |
| 0.6699 | 16.0 | 10000 | 1.3116 | 0.2401 | 0.4956 | 0.209 | 0.0259 | 0.2596 | 0.4876 | 0.1183 | 0.2998 | 0.3889 | 0.1338 | 0.4264 | 0.6333 | 0.1394 | 0.3276 | -1.0 | -1.0 | 0.3408 | 0.4502 |
| 0.6488 | 17.0 | 10625 | 1.3128 | 0.2485 | 0.5142 | 0.2173 | 0.0327 | 0.2681 | 0.5097 | 0.1204 | 0.3139 | 0.4007 | 0.1249 | 0.4425 | 0.6549 | 0.1575 | 0.3454 | -1.0 | -1.0 | 0.3395 | 0.4559 |
| 0.6409 | 18.0 | 11250 | 1.3760 | 0.2218 | 0.4971 | 0.1709 | 0.0263 | 0.2363 | 0.4789 | 0.1099 | 0.2864 | 0.3592 | 0.1296 | 0.389 | 0.5936 | 0.1266 | 0.288 | -1.0 | -1.0 | 0.317 | 0.4305 |
| 0.6173 | 19.0 | 11875 | 1.3561 | 0.2387 | 0.518 | 0.1908 | 0.0276 | 0.2562 | 0.5002 | 0.1158 | 0.3049 | 0.383 | 0.1177 | 0.4209 | 0.6384 | 0.1521 | 0.3233 | -1.0 | -1.0 | 0.3253 | 0.4427 |
| 0.5787 | 20.0 | 12500 | 1.3105 | 0.2557 | 0.5258 | 0.2245 | 0.0292 | 0.2747 | 0.5163 | 0.1207 | 0.3192 | 0.4005 | 0.1284 | 0.44 | 0.6581 | 0.1664 | 0.3476 | -1.0 | -1.0 | 0.345 | 0.4534 |
| 0.5574 | 21.0 | 13125 | 1.3450 | 0.2512 | 0.5275 | 0.2095 | 0.0285 | 0.2725 | 0.5134 | 0.121 | 0.3122 | 0.3899 | 0.1277 | 0.4264 | 0.6434 | 0.1636 | 0.328 | -1.0 | -1.0 | 0.3389 | 0.4518 |
| 0.5452 | 22.0 | 13750 | 1.3460 | 0.2546 | 0.527 | 0.2155 | 0.0313 | 0.273 | 0.5167 | 0.1231 | 0.3163 | 0.3955 | 0.1306 | 0.4367 | 0.6411 | 0.1621 | 0.335 | -1.0 | -1.0 | 0.3471 | 0.456 |
| 0.5285 | 23.0 | 14375 | 1.3530 | 0.2474 | 0.5259 | 0.1986 | 0.0303 | 0.2657 | 0.5171 | 0.119 | 0.3111 | 0.39 | 0.1297 | 0.4273 | 0.6407 | 0.166 | 0.3381 | -1.0 | -1.0 | 0.3289 | 0.442 |
| 0.5034 | 24.0 | 15000 | 1.3436 | 0.2531 | 0.5296 | 0.2141 | 0.0334 | 0.2758 | 0.5099 | 0.1214 | 0.3166 | 0.395 | 0.1308 | 0.4372 | 0.6379 | 0.1635 | 0.3396 | -1.0 | -1.0 | 0.3428 | 0.4504 |
| 0.4929 | 25.0 | 15625 | 1.3706 | 0.251 | 0.5315 | 0.2048 | 0.029 | 0.2698 | 0.5149 | 0.1206 | 0.3141 | 0.3897 | 0.1255 | 0.4274 | 0.6432 | 0.1679 | 0.3355 | -1.0 | -1.0 | 0.3341 | 0.4439 |
| 0.479 | 26.0 | 16250 | 1.3653 | 0.2509 | 0.5301 | 0.2096 | 0.0316 | 0.272 | 0.5133 | 0.1205 | 0.3142 | 0.3897 | 0.1268 | 0.4294 | 0.6379 | 0.1653 | 0.3321 | -1.0 | -1.0 | 0.3365 | 0.4473 |
| 0.4751 | 27.0 | 16875 | 1.3693 | 0.2527 | 0.5319 | 0.2119 | 0.0323 | 0.274 | 0.5166 | 0.1216 | 0.315 | 0.3908 | 0.1289 | 0.4289 | 0.6412 | 0.1657 | 0.3324 | -1.0 | -1.0 | 0.3396 | 0.4491 |
| 0.4595 | 28.0 | 17500 | 1.3686 | 0.2547 | 0.5322 | 0.2184 | 0.0315 | 0.2762 | 0.5183 | 0.1221 | 0.316 | 0.3923 | 0.1285 | 0.4312 | 0.6429 | 0.168 | 0.3338 | -1.0 | -1.0 | 0.3413 | 0.4507 |
| 0.4589 | 29.0 | 18125 | 1.3683 | 0.2541 | 0.5323 | 0.2133 | 0.0305 | 0.2745 | 0.519 | 0.1218 | 0.3155 | 0.3918 | 0.1283 | 0.4297 | 0.6447 | 0.1683 | 0.3353 | -1.0 | -1.0 | 0.3399 | 0.4483 |
| 0.4555 | 30.0 | 18750 | 1.3675 | 0.2538 | 0.5322 | 0.2143 | 0.0306 | 0.2748 | 0.5189 | 0.1217 | 0.3156 | 0.392 | 0.1285 | 0.4301 | 0.6443 | 0.1681 | 0.3351 | -1.0 | -1.0 | 0.3395 | 0.4489 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
| [
"pedestrian",
"ignore",
"car"
] |
zhuchi76/detr-resnet-50-finetuned-1-epoch-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-1-epoch-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
zhuchi76/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
lillythomas/rtdetr-v2-r50-cppe5-finetune-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtdetr-v2-r50-cppe5-finetune-2
This model is a fine-tuned version of [PekingU/rtdetr_v2_r50vd](https://huggingface.co/PekingU/rtdetr_v2_r50vd) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 15.7596
- Map: 0.243
- Map 50: 0.4016
- Map 75: 0.2288
- Map Small: 0.1622
- Map Medium: 0.1679
- Map Large: 0.4047
- Mar 1: 0.2498
- Mar 10: 0.5329
- Mar 100: 0.6054
- Mar Small: 0.4114
- Mar Medium: 0.5602
- Mar Large: 0.781
- Map Coverall: 0.4109
- Mar 100 Coverall: 0.7821
- Map Face Shield: 0.1194
- Mar 100 Face Shield: 0.6471
- Map Gloves: 0.2443
- Mar 100 Gloves: 0.4508
- Map Goggles: 0.065
- Mar 100 Goggles: 0.4586
- Map Mask: 0.3754
- Mar 100 Mask: 0.6882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 2
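For reference, a linear schedule with warmup ramps the learning rate from 0 to the peak (5e-05 here) over the warmup window, then decays it linearly toward 0. Since the results table below reports 107 steps per epoch, this 2-epoch run (214 steps) ends before the 300-step warmup completes, so the learning rate is still ramping when training stops. A minimal pure-Python sketch of the schedule (not the Trainer's exact implementation; step counts are taken from the table):

```python
def linear_schedule_with_warmup(step, peak_lr, warmup_steps, total_steps):
    """Learning rate at a given optimizer step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    # Linear decay from peak_lr down to 0 over the remaining steps.
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

# 2 epochs x 107 steps = 214 total steps, with a 300-step warmup:
# the whole run stays inside the warmup phase.
lrs = [linear_schedule_with_warmup(s, 5e-5, 300, 214) for s in range(214)]
```

With these numbers the final learning rate is only 5e-5 × 213/300 ≈ 3.55e-5, which may partly explain why the metrics are still improving at epoch 2.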
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 107 | 35.1046 | 0.0395 | 0.0911 | 0.0256 | 0.0015 | 0.0104 | 0.0475 | 0.0733 | 0.1851 | 0.2494 | 0.0421 | 0.1906 | 0.365 | 0.1668 | 0.5491 | 0.0207 | 0.2278 | 0.0068 | 0.1759 | 0.0007 | 0.0877 | 0.0024 | 0.2062 |
| No log | 2.0 | 214 | 18.0639 | 0.1779 | 0.3283 | 0.1719 | 0.0679 | 0.1 | 0.2844 | 0.2053 | 0.4086 | 0.4734 | 0.2576 | 0.3816 | 0.7174 | 0.4958 | 0.7032 | 0.0485 | 0.4873 | 0.1107 | 0.3246 | 0.0257 | 0.3754 | 0.2086 | 0.4764 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
Nihel13/detr_finetuned |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned
This model is a fine-tuned version of [apkonsta/table-transformer-detection-ifrs](https://huggingface.co/apkonsta/table-transformer-detection-ifrs) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4914.5493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4887.0844 | 0.2381 | 100 | 4915.8008 |
| 4806.7622 | 0.4762 | 200 | 4915.0762 |
| 4883.3641 | 0.7143 | 300 | 4914.7651 |
| 4869.7525 | 0.9524 | 400 | 4914.6387 |
| 4809.9978 | 1.1905 | 500 | 4914.6133 |
| 4820.81 | 1.4286 | 600 | 4914.6948 |
| 4779.5872 | 1.6667 | 700 | 4914.6743 |
| 5094.1991 | 1.9048 | 800 | 4914.5752 |
| 4851.7441 | 2.1429 | 900 | 4914.6494 |
| 4928.8484 | 2.3810 | 1000 | 4914.5767 |
| 4852.6178 | 2.6190 | 1100 | 4914.5840 |
| 4855.8131 | 2.8571 | 1200 | 4914.5991 |
| 4948.5747 | 3.0952 | 1300 | 4914.5967 |
| 4887.945 | 3.3333 | 1400 | 4914.5645 |
| 4900.1669 | 3.5714 | 1500 | 4914.5747 |
| 4937.1328 | 3.8095 | 1600 | 4914.5571 |
| 4792.3219 | 4.0476 | 1700 | 4914.6787 |
| 4842.8072 | 4.2857 | 1800 | 4914.5640 |
| 4914.0503 | 4.5238 | 1900 | 4914.6113 |
| 4892.0153 | 4.7619 | 2000 | 4914.5693 |
| 4882.0288 | 5.0 | 2100 | 4914.5630 |
| 4903.9891 | 5.2381 | 2200 | 4914.5679 |
| 4870.5566 | 5.4762 | 2300 | 4914.5688 |
| 4919.3287 | 5.7143 | 2400 | 4914.5508 |
| 4927.9272 | 5.9524 | 2500 | 4914.5488 |
| 4981.8925 | 6.1905 | 2600 | 4914.5537 |
| 4864.6322 | 6.4286 | 2700 | 4914.5835 |
| 4794.4006 | 6.6667 | 2800 | 4914.5820 |
| 4878.885 | 6.9048 | 2900 | 4914.5488 |
| 4967.0887 | 7.1429 | 3000 | 4914.5518 |
| 4937.0766 | 7.3810 | 3100 | 4914.5464 |
| 4829.3891 | 7.6190 | 3200 | 4914.5493 |
| 4812.0778 | 7.8571 | 3300 | 4914.5459 |
| 4823.5034 | 8.0952 | 3400 | 4914.5444 |
| 4919.2544 | 8.3333 | 3500 | 4914.5474 |
| 4838.375 | 8.5714 | 3600 | 4914.5581 |
| 4832.6153 | 8.8095 | 3700 | 4914.5513 |
| 4787.5813 | 9.0476 | 3800 | 4914.5464 |
| 4862.2234 | 9.2857 | 3900 | 4914.5464 |
| 4878.2669 | 9.5238 | 4000 | 4914.5474 |
| 4933.3856 | 9.7619 | 4100 | 4914.5488 |
| 4945.8159 | 10.0 | 4200 | 4914.5493 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Tokenizers 0.21.0
| [
"table",
"table rotated"
] |
anusha2002/results |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 12.8436 | 1.0 | 4 | 11.5652 |
| 12.8436 | 2.0 | 8 | 8.6499 |
| 12.8436 | 3.0 | 12 | 6.6430 |
| 12.8436 | 4.0 | 16 | 5.4425 |
| 12.8436 | 5.0 | 20 | 4.4521 |
| 12.8436 | 6.0 | 24 | 3.9127 |
| 12.8436 | 7.0 | 28 | 3.3273 |
| 12.8436 | 8.0 | 32 | 3.0743 |
| 12.8436 | 9.0 | 36 | 2.8087 |
| 12.8436 | 10.0 | 40 | 2.5618 |
| 12.8436 | 11.0 | 44 | 2.4515 |
| 12.8436 | 12.0 | 48 | 2.3580 |
| 12.8436 | 13.0 | 52 | 2.2749 |
| 12.8436 | 14.0 | 56 | 2.1216 |
| 12.8436 | 15.0 | 60 | 2.0890 |
| 12.8436 | 16.0 | 64 | 2.0283 |
| 12.8436 | 17.0 | 68 | 2.0358 |
| 12.8436 | 18.0 | 72 | 1.9374 |
| 12.8436 | 19.0 | 76 | 1.9090 |
| 12.8436 | 20.0 | 80 | 1.8779 |
| 12.8436 | 21.0 | 84 | 1.8474 |
| 12.8436 | 22.0 | 88 | 1.8371 |
| 12.8436 | 23.0 | 92 | 1.8247 |
| 12.8436 | 24.0 | 96 | 1.8031 |
| 12.8436 | 25.0 | 100 | 1.7836 |
| 12.8436 | 26.0 | 104 | 1.7500 |
| 12.8436 | 27.0 | 108 | 1.7332 |
| 12.8436 | 28.0 | 112 | 1.7477 |
| 12.8436 | 29.0 | 116 | 1.7634 |
| 12.8436 | 30.0 | 120 | 1.7669 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cpu
- Datasets 3.3.2
- Tokenizers 0.21.0
| [
"well-drained floodplain",
"poorly-drained floodplain",
"prodelta",
"peat layer",
"fluvial sand",
"swamp"
] |
KaiquanMah/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
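These settings correspond roughly to the following `TrainingArguments` configuration. This is a hedged reconstruction, not the exact script used: `output_dir` is a placeholder, and any argument not listed above is left at its default.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="detr-resnet-50_finetuned_cppe5",  # placeholder name
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```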
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
KingRam/rtdetr-v2-r50-kitti-finetune-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtdetr-v2-r50-kitti-finetune-2
This model is a fine-tuned version of [PekingU/rtdetr_v2_r50vd](https://huggingface.co/PekingU/rtdetr_v2_r50vd) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5547
- Map: 0.485
- Map 50: 0.7268
- Map 75: 0.5322
- Map Small: 0.3428
- Map Medium: 0.4976
- Map Large: 0.6003
- Mar 1: 0.3725
- Mar 10: 0.5938
- Mar 100: 0.6304
- Mar Small: 0.4564
- Mar Medium: 0.6461
- Mar Large: 0.7557
- Map Car: 0.6901
- Mar 100 Car: 0.7866
- Map Pedestrian: 0.4012
- Mar 100 Pedestrian: 0.5245
- Map Cyclist: 0.426
- Mar 100 Cyclist: 0.5849
- Map Van: 0.6925
- Mar 100 Van: 0.7705
- Map Truck: 0.6798
- Mar 100 Truck: 0.811
- Map Misc: 0.4375
- Mar 100 Misc: 0.6007
- Map Tram: 0.6611
- Mar 100 Tram: 0.7587
- Map Person Sitting: 0.3329
- Mar 100 Person Sitting: 0.5486
- Map Dontcare: 0.044
- Mar 100 Dontcare: 0.2877
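The Map/Mar figures above are COCO-style average precision and recall, where "Map 50" and "Map 75" are AP at IoU thresholds of 0.50 and 0.75. At the core of all of these metrics is the intersection-over-union (IoU) between a predicted and a ground-truth box. A minimal sketch of that building block, for boxes in `(x1, y1, x2, y2)` pixel format (illustrative only, not the exact evaluation code used for this card):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction overlapping half of a 10x10 ground-truth box:
iou = box_iou((0, 0, 10, 10), (5, 0, 15, 10))  # intersection 50, union 150 -> 1/3
```

A detection counts as a true positive at the "Map 50" threshold if its IoU with a matched ground-truth box is at least 0.5; the stricter "Map 75" column requires 0.75.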
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 1000
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Car | Mar 100 Car | Map Pedestrian | Mar 100 Pedestrian | Map Cyclist | Mar 100 Cyclist | Map Van | Mar 100 Van | Map Truck | Mar 100 Truck | Map Misc | Mar 100 Misc | Map Tram | Mar 100 Tram | Map Person Sitting | Mar 100 Person Sitting | Map Dontcare | Mar 100 Dontcare |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-------:|:-----------:|:--------------:|:------------------:|:-----------:|:---------------:|:-------:|:-----------:|:---------:|:-------------:|:--------:|:------------:|:--------:|:------------:|:------------------:|:----------------------:|:------------:|:----------------:|
| No log | 1.0 | 164 | 26.6753 | 0.0437 | 0.0725 | 0.0462 | 0.0207 | 0.0431 | 0.0811 | 0.0636 | 0.1473 | 0.1846 | 0.1324 | 0.1902 | 0.2671 | 0.3193 | 0.5141 | 0.0321 | 0.2986 | 0.0 | 0.0 | 0.0379 | 0.4384 | 0.0 | 0.061 | 0.0001 | 0.0176 | 0.0034 | 0.2534 | 0.0 | 0.0 | 0.0007 | 0.0779 |
| No log | 2.0 | 328 | 15.4174 | 0.1446 | 0.2304 | 0.1561 | 0.1265 | 0.1433 | 0.1852 | 0.1752 | 0.3192 | 0.3606 | 0.2893 | 0.3773 | 0.4656 | 0.5433 | 0.6832 | 0.2426 | 0.3687 | 0.0288 | 0.3011 | 0.2055 | 0.6112 | 0.2315 | 0.5961 | 0.0026 | 0.1622 | 0.0404 | 0.3784 | 0.0 | 0.0143 | 0.0064 | 0.1302 |
| No log | 3.0 | 492 | 12.2696 | 0.2023 | 0.3323 | 0.2077 | 0.1618 | 0.2117 | 0.2549 | 0.2258 | 0.4109 | 0.4626 | 0.3835 | 0.4601 | 0.5739 | 0.5918 | 0.7299 | 0.2987 | 0.4422 | 0.1587 | 0.4697 | 0.2849 | 0.6799 | 0.3097 | 0.7279 | 0.0071 | 0.2818 | 0.1533 | 0.567 | 0.0 | 0.0429 | 0.0161 | 0.2216 |
| 37.4405 | 4.0 | 656 | 11.0586 | 0.2433 | 0.4049 | 0.2493 | 0.2134 | 0.2492 | 0.3038 | 0.2443 | 0.4698 | 0.5424 | 0.4256 | 0.5363 | 0.6646 | 0.6274 | 0.7513 | 0.3277 | 0.4868 | 0.2426 | 0.5126 | 0.2803 | 0.6968 | 0.3803 | 0.7532 | 0.014 | 0.3682 | 0.2826 | 0.6432 | 0.0108 | 0.3857 | 0.0242 | 0.2839 |
| 37.4405 | 5.0 | 820 | 10.3133 | 0.2806 | 0.4547 | 0.2961 | 0.2328 | 0.2837 | 0.3547 | 0.2663 | 0.49 | 0.5555 | 0.4185 | 0.559 | 0.6984 | 0.6358 | 0.7531 | 0.3339 | 0.4863 | 0.2796 | 0.5238 | 0.378 | 0.6872 | 0.4073 | 0.7429 | 0.0215 | 0.3993 | 0.3853 | 0.6352 | 0.052 | 0.4714 | 0.0322 | 0.2999 |
| 37.4405 | 6.0 | 984 | 9.7662 | 0.3054 | 0.5185 | 0.3195 | 0.2689 | 0.3215 | 0.3844 | 0.2758 | 0.5107 | 0.5703 | 0.4374 | 0.5696 | 0.7159 | 0.6561 | 0.7597 | 0.3405 | 0.512 | 0.3027 | 0.5322 | 0.4172 | 0.6963 | 0.4722 | 0.7552 | 0.0877 | 0.4493 | 0.3186 | 0.6364 | 0.1171 | 0.4786 | 0.0362 | 0.3127 |
| 16.8251 | 7.0 | 1148 | 9.4750 | 0.3509 | 0.5729 | 0.378 | 0.2607 | 0.3567 | 0.4598 | 0.2975 | 0.5198 | 0.5849 | 0.4307 | 0.5936 | 0.7209 | 0.6682 | 0.7642 | 0.3432 | 0.5166 | 0.3392 | 0.5502 | 0.4744 | 0.7 | 0.5145 | 0.7597 | 0.1268 | 0.4709 | 0.4179 | 0.6591 | 0.234 | 0.5357 | 0.0402 | 0.3079 |
| 16.8251 | 8.0 | 1312 | 9.3075 | 0.3621 | 0.5918 | 0.3863 | 0.2941 | 0.3639 | 0.4724 | 0.3053 | 0.5325 | 0.5902 | 0.4475 | 0.5981 | 0.725 | 0.661 | 0.7617 | 0.3501 | 0.5143 | 0.3634 | 0.5789 | 0.4905 | 0.697 | 0.4751 | 0.761 | 0.1863 | 0.4818 | 0.4888 | 0.6966 | 0.2019 | 0.5143 | 0.0414 | 0.3065 |
| 16.8251 | 9.0 | 1476 | 9.0699 | 0.3982 | 0.6327 | 0.4195 | 0.2943 | 0.3976 | 0.5432 | 0.3114 | 0.5443 | 0.5945 | 0.4646 | 0.6006 | 0.7245 | 0.6756 | 0.7699 | 0.3777 | 0.5289 | 0.3615 | 0.5621 | 0.5296 | 0.7048 | 0.5267 | 0.7805 | 0.3019 | 0.5331 | 0.5024 | 0.6875 | 0.2603 | 0.4643 | 0.0481 | 0.3191 |
| 14.4838 | 10.0 | 1640 | 9.1070 | 0.3937 | 0.6283 | 0.4267 | 0.299 | 0.4035 | 0.5337 | 0.3202 | 0.5435 | 0.5916 | 0.4465 | 0.6109 | 0.7201 | 0.6589 | 0.7591 | 0.3683 | 0.5246 | 0.4087 | 0.5816 | 0.5225 | 0.6957 | 0.5485 | 0.7799 | 0.2963 | 0.5182 | 0.5203 | 0.7023 | 0.1765 | 0.4643 | 0.043 | 0.2987 |
| 14.4838 | 11.0 | 1804 | 8.7969 | 0.4133 | 0.6423 | 0.4569 | 0.3227 | 0.4218 | 0.531 | 0.3265 | 0.5557 | 0.6101 | 0.4809 | 0.6198 | 0.7311 | 0.6711 | 0.7751 | 0.3739 | 0.5227 | 0.4002 | 0.5989 | 0.5847 | 0.7192 | 0.5839 | 0.8032 | 0.3116 | 0.5459 | 0.551 | 0.7216 | 0.1918 | 0.4929 | 0.0517 | 0.3117 |
| 14.4838 | 12.0 | 1968 | 8.8831 | 0.4177 | 0.6518 | 0.463 | 0.3252 | 0.4204 | 0.5484 | 0.3257 | 0.5516 | 0.6058 | 0.4717 | 0.6139 | 0.7435 | 0.6465 | 0.7714 | 0.3627 | 0.5198 | 0.4031 | 0.5835 | 0.5675 | 0.7057 | 0.5809 | 0.7877 | 0.3579 | 0.5392 | 0.5302 | 0.7068 | 0.2677 | 0.5429 | 0.0431 | 0.2951 |
| 13.3813 | 13.0 | 2132 | 8.9170 | 0.4163 | 0.6465 | 0.4435 | 0.3343 | 0.429 | 0.5332 | 0.3306 | 0.5567 | 0.6078 | 0.4753 | 0.6236 | 0.7246 | 0.6017 | 0.7739 | 0.3715 | 0.5155 | 0.4009 | 0.5927 | 0.5681 | 0.7119 | 0.601 | 0.7942 | 0.3706 | 0.5608 | 0.5597 | 0.6932 | 0.2296 | 0.5143 | 0.0437 | 0.3136 |
| 13.3813 | 14.0 | 2296 | 8.9692 | 0.4215 | 0.6542 | 0.4692 | 0.3065 | 0.4345 | 0.5607 | 0.3332 | 0.55 | 0.6101 | 0.4498 | 0.6348 | 0.7302 | 0.5946 | 0.7705 | 0.3571 | 0.4854 | 0.4099 | 0.5923 | 0.5491 | 0.6895 | 0.6044 | 0.7825 | 0.3688 | 0.5547 | 0.5361 | 0.7023 | 0.3414 | 0.6143 | 0.032 | 0.2991 |
| 13.3813 | 15.0 | 2460 | 9.0191 | 0.4286 | 0.653 | 0.4774 | 0.3172 | 0.444 | 0.5587 | 0.3312 | 0.5625 | 0.6119 | 0.4572 | 0.6351 | 0.7368 | 0.5858 | 0.7684 | 0.3725 | 0.5138 | 0.4133 | 0.59 | 0.5664 | 0.7096 | 0.6089 | 0.8006 | 0.3946 | 0.5581 | 0.5826 | 0.7318 | 0.2959 | 0.5429 | 0.0371 | 0.2918 |
| 12.7559 | 16.0 | 2624 | 8.8599 | 0.4395 | 0.6774 | 0.4793 | 0.3384 | 0.4537 | 0.5631 | 0.3395 | 0.5638 | 0.6122 | 0.4709 | 0.6303 | 0.7336 | 0.6044 | 0.7735 | 0.3824 | 0.5192 | 0.4174 | 0.5862 | 0.5832 | 0.7087 | 0.6247 | 0.7961 | 0.4008 | 0.5628 | 0.5748 | 0.7341 | 0.3311 | 0.5429 | 0.0368 | 0.2866 |
| 12.7559 | 17.0 | 2788 | 8.8447 | 0.4403 | 0.6727 | 0.4992 | 0.3302 | 0.4428 | 0.5722 | 0.3375 | 0.56 | 0.6098 | 0.4632 | 0.6231 | 0.7357 | 0.6188 | 0.7731 | 0.3795 | 0.5147 | 0.4218 | 0.5816 | 0.5656 | 0.6986 | 0.632 | 0.7955 | 0.4008 | 0.5797 | 0.5675 | 0.708 | 0.3437 | 0.5286 | 0.0328 | 0.3085 |
| 12.7559 | 18.0 | 2952 | 8.8752 | 0.4472 | 0.6858 | 0.4983 | 0.3472 | 0.4535 | 0.5831 | 0.3408 | 0.5686 | 0.6111 | 0.4693 | 0.6283 | 0.7359 | 0.6328 | 0.7808 | 0.3748 | 0.5083 | 0.4284 | 0.5897 | 0.5811 | 0.7002 | 0.664 | 0.7883 | 0.4284 | 0.5757 | 0.5806 | 0.7239 | 0.3017 | 0.5357 | 0.0332 | 0.2973 |
| 12.3803 | 19.0 | 3116 | 8.7937 | 0.4535 | 0.6863 | 0.4951 | 0.3494 | 0.4585 | 0.5913 | 0.3462 | 0.5769 | 0.6261 | 0.4778 | 0.6486 | 0.7365 | 0.6397 | 0.7817 | 0.374 | 0.5058 | 0.4084 | 0.5789 | 0.6043 | 0.724 | 0.6619 | 0.8045 | 0.4323 | 0.6041 | 0.6065 | 0.7511 | 0.3214 | 0.5786 | 0.0336 | 0.3061 |
| 12.3803 | 20.0 | 3280 | 8.7272 | 0.463 | 0.7008 | 0.5113 | 0.3514 | 0.4674 | 0.5999 | 0.3472 | 0.5839 | 0.6248 | 0.4795 | 0.6422 | 0.7463 | 0.6451 | 0.7783 | 0.3761 | 0.518 | 0.447 | 0.6034 | 0.591 | 0.7055 | 0.6614 | 0.8123 | 0.4576 | 0.6115 | 0.6201 | 0.7375 | 0.328 | 0.5571 | 0.0409 | 0.2991 |
| 12.3803 | 21.0 | 3444 | 8.8055 | 0.448 | 0.6791 | 0.4896 | 0.3402 | 0.4657 | 0.5788 | 0.3444 | 0.5658 | 0.6074 | 0.4563 | 0.6359 | 0.7249 | 0.6441 | 0.7784 | 0.3818 | 0.5058 | 0.4062 | 0.5828 | 0.5866 | 0.6941 | 0.6459 | 0.7968 | 0.4307 | 0.5885 | 0.6095 | 0.733 | 0.2954 | 0.5071 | 0.0313 | 0.2802 |
| 12.0491 | 22.0 | 3608 | 8.7340 | 0.4626 | 0.6937 | 0.5258 | 0.3423 | 0.4707 | 0.6071 | 0.3483 | 0.5741 | 0.6135 | 0.4486 | 0.6403 | 0.7432 | 0.6535 | 0.7761 | 0.3713 | 0.4972 | 0.4414 | 0.5958 | 0.6041 | 0.6986 | 0.6706 | 0.8006 | 0.435 | 0.5986 | 0.6053 | 0.725 | 0.3414 | 0.55 | 0.0412 | 0.2794 |
| 12.0491 | 23.0 | 3772 | 8.7322 | 0.4673 | 0.7001 | 0.5319 | 0.3524 | 0.4728 | 0.6071 | 0.3475 | 0.5734 | 0.6146 | 0.4571 | 0.6367 | 0.7371 | 0.6625 | 0.7765 | 0.3702 | 0.494 | 0.4474 | 0.6011 | 0.6078 | 0.7185 | 0.6793 | 0.8006 | 0.4671 | 0.6108 | 0.5972 | 0.7227 | 0.3354 | 0.5286 | 0.0388 | 0.2785 |
| 12.0491 | 24.0 | 3936 | 8.7272 | 0.4689 | 0.7089 | 0.5159 | 0.3576 | 0.4778 | 0.6078 | 0.3516 | 0.5775 | 0.6175 | 0.4617 | 0.6409 | 0.7449 | 0.6714 | 0.7774 | 0.3769 | 0.4998 | 0.4521 | 0.6011 | 0.6039 | 0.7014 | 0.6728 | 0.8039 | 0.4508 | 0.5953 | 0.5955 | 0.7364 | 0.3659 | 0.5571 | 0.0313 | 0.2849 |
| 11.7826 | 25.0 | 4100 | 8.6300 | 0.4744 | 0.7087 | 0.5409 | 0.3488 | 0.4835 | 0.6099 | 0.3531 | 0.5863 | 0.6296 | 0.4622 | 0.6505 | 0.7535 | 0.6793 | 0.7805 | 0.373 | 0.4971 | 0.451 | 0.6115 | 0.6188 | 0.7098 | 0.6701 | 0.8097 | 0.4555 | 0.6088 | 0.6109 | 0.7375 | 0.3699 | 0.6286 | 0.0411 | 0.2831 |
| 11.7826 | 26.0 | 4264 | 8.6652 | 0.4636 | 0.6971 | 0.5039 | 0.3464 | 0.4771 | 0.6009 | 0.348 | 0.5679 | 0.6122 | 0.4574 | 0.6393 | 0.7303 | 0.6717 | 0.7762 | 0.3722 | 0.4914 | 0.4559 | 0.5931 | 0.6064 | 0.6991 | 0.6744 | 0.8091 | 0.4531 | 0.6074 | 0.6012 | 0.7375 | 0.3 | 0.5143 | 0.0371 | 0.2816 |
| 11.7826 | 27.0 | 4428 | 8.5516 | 0.4778 | 0.7179 | 0.5298 | 0.3636 | 0.4869 | 0.6157 | 0.3575 | 0.5829 | 0.6228 | 0.4695 | 0.6455 | 0.7471 | 0.6797 | 0.7842 | 0.3777 | 0.4911 | 0.4596 | 0.6119 | 0.6184 | 0.7151 | 0.6746 | 0.8091 | 0.4631 | 0.6068 | 0.6189 | 0.7364 | 0.3633 | 0.5643 | 0.045 | 0.2864 |
| 11.5542 | 28.0 | 4592 | 8.6945 | 0.4666 | 0.6992 | 0.5195 | 0.3515 | 0.4755 | 0.6043 | 0.3528 | 0.5726 | 0.6089 | 0.4526 | 0.6278 | 0.7461 | 0.6662 | 0.7751 | 0.3616 | 0.4722 | 0.437 | 0.5908 | 0.6069 | 0.7091 | 0.6784 | 0.8091 | 0.4569 | 0.5926 | 0.6035 | 0.733 | 0.3532 | 0.5286 | 0.0355 | 0.27 |
| 11.5542 | 29.0 | 4756 | 8.6367 | 0.4685 | 0.699 | 0.5258 | 0.3498 | 0.481 | 0.6079 | 0.3535 | 0.578 | 0.6138 | 0.4516 | 0.6392 | 0.7482 | 0.6763 | 0.7762 | 0.3706 | 0.4894 | 0.4522 | 0.5912 | 0.6024 | 0.6954 | 0.6819 | 0.8188 | 0.463 | 0.6068 | 0.6152 | 0.7455 | 0.3216 | 0.5357 | 0.0338 | 0.2655 |
| 11.5542 | 30.0 | 4920 | 8.5968 | 0.4753 | 0.7076 | 0.5274 | 0.3606 | 0.4846 | 0.6102 | 0.3549 | 0.5774 | 0.6142 | 0.4705 | 0.6374 | 0.7341 | 0.6791 | 0.7811 | 0.3721 | 0.4892 | 0.4541 | 0.6038 | 0.6135 | 0.7107 | 0.6809 | 0.8065 | 0.4895 | 0.625 | 0.6366 | 0.75 | 0.3151 | 0.4857 | 0.0368 | 0.2758 |
| 11.3582 | 31.0 | 5084 | 8.5456 | 0.482 | 0.7143 | 0.5492 | 0.3686 | 0.4913 | 0.6095 | 0.3594 | 0.5863 | 0.6279 | 0.4868 | 0.6507 | 0.7361 | 0.686 | 0.784 | 0.375 | 0.5032 | 0.4567 | 0.6123 | 0.6278 | 0.7228 | 0.6849 | 0.8156 | 0.4847 | 0.6297 | 0.6394 | 0.7489 | 0.3426 | 0.55 | 0.0408 | 0.2847 |
| 11.3582 | 32.0 | 5248 | 8.4824 | 0.4887 | 0.7193 | 0.5543 | 0.3679 | 0.4987 | 0.6223 | 0.361 | 0.5938 | 0.6366 | 0.4799 | 0.6597 | 0.7547 | 0.6914 | 0.7845 | 0.3822 | 0.5117 | 0.4624 | 0.6149 | 0.6299 | 0.7253 | 0.6885 | 0.8169 | 0.4856 | 0.6257 | 0.6521 | 0.7636 | 0.3661 | 0.6071 | 0.0398 | 0.2797 |
| 11.3582 | 33.0 | 5412 | 8.5231 | 0.4861 | 0.715 | 0.5624 | 0.3663 | 0.494 | 0.6221 | 0.3606 | 0.5865 | 0.6276 | 0.4749 | 0.65 | 0.7509 | 0.6874 | 0.7856 | 0.3761 | 0.5029 | 0.4603 | 0.6084 | 0.6258 | 0.7162 | 0.6876 | 0.8169 | 0.4906 | 0.6236 | 0.6402 | 0.7432 | 0.3695 | 0.5786 | 0.0371 | 0.2731 |
| 11.2454 | 34.0 | 5576 | 8.5219 | 0.4866 | 0.7135 | 0.5551 | 0.3664 | 0.4931 | 0.6279 | 0.3597 | 0.5936 | 0.6342 | 0.4814 | 0.6551 | 0.7563 | 0.688 | 0.7841 | 0.3779 | 0.5054 | 0.4573 | 0.6184 | 0.6265 | 0.724 | 0.6867 | 0.8156 | 0.4793 | 0.6203 | 0.6328 | 0.7614 | 0.3964 | 0.6 | 0.0348 | 0.2789 |
| 11.2454 | 35.0 | 5740 | 8.5075 | 0.4864 | 0.7204 | 0.5455 | 0.3666 | 0.4945 | 0.6205 | 0.3602 | 0.5926 | 0.6357 | 0.4821 | 0.6573 | 0.7574 | 0.6899 | 0.7865 | 0.3743 | 0.5023 | 0.4587 | 0.6169 | 0.632 | 0.721 | 0.6872 | 0.824 | 0.4905 | 0.6331 | 0.6498 | 0.7784 | 0.3577 | 0.5786 | 0.038 | 0.2802 |
| 11.2454 | 36.0 | 5904 | 8.5390 | 0.4862 | 0.7149 | 0.5533 | 0.3655 | 0.4939 | 0.6294 | 0.3594 | 0.5924 | 0.6321 | 0.4775 | 0.653 | 0.7583 | 0.6867 | 0.7845 | 0.3757 | 0.4992 | 0.4606 | 0.6142 | 0.6265 | 0.7189 | 0.6892 | 0.8214 | 0.4882 | 0.627 | 0.6369 | 0.7545 | 0.3754 | 0.5929 | 0.0366 | 0.2764 |
| 11.157 | 37.0 | 6068 | 8.5368 | 0.4863 | 0.715 | 0.5546 | 0.3651 | 0.4936 | 0.6307 | 0.3599 | 0.5883 | 0.6284 | 0.4785 | 0.6489 | 0.7595 | 0.6867 | 0.7848 | 0.3755 | 0.5 | 0.4632 | 0.6138 | 0.6276 | 0.7185 | 0.6899 | 0.8227 | 0.4912 | 0.6277 | 0.6382 | 0.7534 | 0.3693 | 0.5571 | 0.0351 | 0.2779 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
| [
"car",
"pedestrian",
"cyclist",
"van",
"truck",
"misc",
"tram",
"person_sitting",
"dontcare"
] |
KingRam/rtdetr-v2-r50-kitti2-finetune-2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rtdetr-v2-r50-kitti2-finetune-2
This model is a fine-tuned version of [PekingU/rtdetr_v2_r50vd](https://huggingface.co/PekingU/rtdetr_v2_r50vd) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8052
- Map: 0.3423
- Map 50: 0.5623
- Map 75: 0.3606
- Map Small: 0.2161
- Map Medium: 0.3631
- Map Large: 0.4172
- Mar 1: 0.2926
- Mar 10: 0.5137
- Mar 100: 0.5886
- Mar Small: 0.4156
- Mar Medium: 0.5976
- Mar Large: 0.6841
- Map Car: 0.6052
- Mar 100 Car: 0.7464
- Map Pedestrian: 0.3483
- Mar 100 Pedestrian: 0.5172
- Map Cyclist: 0.2062
- Mar 100 Cyclist: 0.4523
- Map Van: 0.5231
- Mar 100 Van: 0.7377
- Map Truck: 0.6026
- Mar 100 Truck: 0.7417
- Map Misc: 0.1678
- Mar 100 Misc: 0.516
- Map Tram: 0.3851
- Mar 100 Tram: 0.6984
- Map Person Sitting: 0.1978
- Mar 100 Person Sitting: 0.5314
- Map Dontcare: 0.0442
- Mar 100 Dontcare: 0.3566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Car | Mar 100 Car | Map Pedestrian | Mar 100 Pedestrian | Map Cyclist | Mar 100 Cyclist | Map Van | Mar 100 Van | Map Truck | Mar 100 Truck | Map Misc | Mar 100 Misc | Map Tram | Mar 100 Tram | Map Person Sitting | Mar 100 Person Sitting | Map Dontcare | Mar 100 Dontcare |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-------:|:-----------:|:--------------:|:------------------:|:-----------:|:---------------:|:-------:|:-----------:|:---------:|:-------------:|:--------:|:------------:|:--------:|:------------:|:------------------:|:----------------------:|:------------:|:----------------:|
| 34.3989 | 1.0 | 655 | 10.4608 | 0.2538 | 0.4309 | 0.2527 | 0.1757 | 0.2643 | 0.3546 | 0.2566 | 0.4571 | 0.5322 | 0.3842 | 0.5368 | 0.6569 | 0.6141 | 0.743 | 0.2816 | 0.4418 | 0.1456 | 0.4579 | 0.4092 | 0.6612 | 0.4751 | 0.7195 | 0.0566 | 0.4101 | 0.133 | 0.6307 | 0.1334 | 0.4357 | 0.0356 | 0.2902 |
| 15.6516 | 2.0 | 1310 | 10.8110 | 0.3342 | 0.5511 | 0.3468 | 0.2589 | 0.3513 | 0.4378 | 0.2863 | 0.5125 | 0.5775 | 0.4382 | 0.5818 | 0.7005 | 0.6159 | 0.7491 | 0.3197 | 0.4894 | 0.249 | 0.4594 | 0.4919 | 0.6995 | 0.6118 | 0.7552 | 0.1736 | 0.502 | 0.3949 | 0.7102 | 0.1077 | 0.4786 | 0.0433 | 0.354 |
| 14.1316 | 3.0 | 1965 | 11.9174 | 0.3267 | 0.5128 | 0.3591 | 0.2156 | 0.3255 | 0.4735 | 0.2828 | 0.5065 | 0.5748 | 0.4402 | 0.5811 | 0.7018 | 0.5899 | 0.7322 | 0.3121 | 0.4754 | 0.2163 | 0.4766 | 0.4165 | 0.6865 | 0.5878 | 0.7526 | 0.1879 | 0.5507 | 0.429 | 0.6807 | 0.1537 | 0.45 | 0.0474 | 0.3687 |
| 12.9222 | 4.0 | 2620 | 12.5913 | 0.3124 | 0.4931 | 0.3376 | 0.2457 | 0.3275 | 0.46 | 0.279 | 0.4928 | 0.5516 | 0.4054 | 0.5606 | 0.688 | 0.5478 | 0.7176 | 0.2902 | 0.4381 | 0.13 | 0.413 | 0.4401 | 0.6785 | 0.5964 | 0.7468 | 0.2421 | 0.5601 | 0.4089 | 0.7057 | 0.108 | 0.3429 | 0.0482 | 0.3613 |
| 12.6036 | 5.0 | 3275 | 13.0327 | 0.3117 | 0.4742 | 0.3476 | 0.2483 | 0.3282 | 0.4539 | 0.2901 | 0.4865 | 0.5418 | 0.4431 | 0.5536 | 0.6582 | 0.5549 | 0.7381 | 0.292 | 0.4435 | 0.1428 | 0.3874 | 0.4093 | 0.6968 | 0.5729 | 0.7494 | 0.2757 | 0.6088 | 0.4991 | 0.6989 | 0.0056 | 0.1714 | 0.0528 | 0.3825 |
| 12.2062 | 6.0 | 3930 | 13.2957 | 0.3151 | 0.4824 | 0.3456 | 0.2292 | 0.3166 | 0.465 | 0.2908 | 0.4839 | 0.5362 | 0.4192 | 0.5437 | 0.663 | 0.5304 | 0.7357 | 0.2899 | 0.4352 | 0.1494 | 0.3556 | 0.4531 | 0.6872 | 0.5778 | 0.7442 | 0.3199 | 0.598 | 0.4628 | 0.7034 | 0.004 | 0.1714 | 0.0491 | 0.3949 |
| 11.847 | 7.0 | 4585 | 13.1970 | 0.309 | 0.4753 | 0.3435 | 0.2333 | 0.3199 | 0.4422 | 0.2983 | 0.4868 | 0.5398 | 0.4161 | 0.5544 | 0.6635 | 0.5359 | 0.7321 | 0.2909 | 0.4272 | 0.099 | 0.359 | 0.4248 | 0.6806 | 0.5835 | 0.7247 | 0.2946 | 0.6203 | 0.4941 | 0.7102 | 0.0087 | 0.2214 | 0.0495 | 0.3826 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.1
| [
"car",
"pedestrian",
"cyclist",
"van",
"truck",
"misc",
"tram",
"person_sitting",
"dontcare"
] |
Nihel13/tatr-finetuned |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"table",
"table rotated"
] |
MohamedAdamBaccouche/ai |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
pascalrai/Deformable-DETR-Document-Layout-Analysis |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Deformable-DETR-Document-Layout-Analysis
This model was fine-tuned on the doc_lay_net dataset for Document Layout Analysis, using the full-sized public DocLayNet dataset.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

## Intended uses & limitations
You can use the model to predict bounding boxes for 11 different document layout analysis classes.
### How to use
```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests
url = "string-url-of-a-Document_page"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("pascalrai/Deformable-DETR-Document-Layout-Analysis")
model = DeformableDetrForObjectDetection.from_pretrained("pascalrai/Deformable-DETR-Document-Layout-Analysis")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
box = [round(i, 2) for i in box.tolist()]
print(
f"Detected {model.config.id2label[label.item()]} with confidence "
f"{round(score.item(), 3)} at location {box}"
)
```
## Evaluation on DocLayNet
Evaluation of the trained model on the DocLayNet test set (after 3 epochs):
```
{'map': 0.6086,
'map_50': 0.836,
'map_75': 0.6662,
'map_small': 0.3269,
'map_medium': 0.501,
'map_large': 0.6712,
'mar_1': 0.3336,
'mar_10': 0.7113,
'mar_100': 0.7596,
'mar_small': 0.4667,
'mar_medium': 0.6717,
'mar_large': 0.8436,
'map_0': 0.5709,
'mar_100_0': 0.7639,
'map_1': 0.4685,
'mar_100_1': 0.7468,
'map_2': 0.5776,
'mar_100_2': 0.7163,
'map_3': 0.7143,
'mar_100_3': 0.8251,
'map_4': 0.4056,
'mar_100_4': 0.533,
'map_5': 0.5095,
'mar_100_5': 0.6686,
'map_6': 0.6826,
'mar_100_6': 0.8387,
'map_7': 0.5859,
'mar_100_7': 0.7308,
'map_8': 0.7871,
'mar_100_8': 0.8852,
'map_9': 0.7898,
'mar_100_9': 0.8617,
'map_10': 0.6034,
'mar_100_10': 0.7854}
```
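The `map_50` and `map_75` entries above are COCO-style average precision at IoU thresholds of 0.50 and 0.75. As a quick illustration of the IoU computation underlying those thresholds, a minimal sketch for two boxes in `(x1, y1, x2, y2)` corner format:

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) corner format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x2 strip: IoU = 2 / (4 + 4 - 2) = 1/3
print(box_iou((0, 0, 2, 2), (1, 0, 3, 2)))
```

A predicted box counts as a true positive at the 0.75 threshold only if its IoU with a ground-truth box of the same class is at least 0.75, which is why `map_75` is consistently lower than `map_50`.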
### Training hyperparameters
The model was trained on an A10G 24 GB GPU for 21 hours.
The following hyperparameters were used during training:
- learning_rate: 5e-05
- eff_train_batch_size: 12
- eff_eval_batch_size: 12
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 10
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2010.04159,
doi = {10.48550/ARXIV.2010.04159},
url = {https://arxiv.org/abs/2010.04159},
author = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Deformable DETR: Deformable Transformers for End-to-End Object Detection},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
  doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
``` | [
"caption",
"footnote",
"formula",
"list-item",
"page-footer",
"page-header",
"picture",
"section-header",
"table",
"text",
"title"
] |
Nihel13/ai |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"table",
"table rotated"
] |
Vrjb/DETRPT |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11"
] |
sung429/detr-accident-detection |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"accident",
"accident",
"vehicle"
] |
Joshhhhhhhhhh/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
paulyuan1219canada/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
cyc900908/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
BrianLan/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
mkx07/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.1.0
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
kenyou/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
jeffyuyu/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
JSlin/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
tschosbert/detr-resnet-50-dc5-fashionpedia-finetuned |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-dc5-fashionpedia-finetuned
This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 6.5876
- eval_map: 0.0
- eval_map_50: 0.0001
- eval_map_75: 0.0
- eval_map_small: 0.0
- eval_map_medium: 0.0
- eval_map_large: 0.0002
- eval_mar_1: 0.0006
- eval_mar_10: 0.002
- eval_mar_100: 0.0031
- eval_mar_small: 0.0028
- eval_mar_medium: 0.0058
- eval_mar_large: 0.0086
- eval_map_shirt, blouse: 0.0001
- eval_mar_100_shirt, blouse: 0.0647
- eval_map_top, t-shirt, sweatshirt: 0.0
- eval_mar_100_top, t-shirt, sweatshirt: 0.0
- eval_map_sweater: 0.0
- eval_mar_100_sweater: 0.0
- eval_map_cardigan: 0.0
- eval_mar_100_cardigan: 0.0
- eval_map_jacket: 0.0001
- eval_mar_100_jacket: 0.0678
- eval_map_vest: 0.0
- eval_mar_100_vest: 0.0
- eval_map_pants: 0.0
- eval_mar_100_pants: 0.0
- eval_map_shorts: 0.0
- eval_mar_100_shorts: 0.0
- eval_map_skirt: 0.0005
- eval_mar_100_skirt: 0.0006
- eval_map_coat: 0.0
- eval_mar_100_coat: 0.0
- eval_map_dress: 0.0
- eval_mar_100_dress: 0.0
- eval_map_jumpsuit: 0.0
- eval_mar_100_jumpsuit: 0.0
- eval_map_cape: 0.0
- eval_mar_100_cape: 0.0
- eval_map_glasses: 0.0
- eval_mar_100_glasses: 0.0
- eval_map_hat: 0.0002
- eval_mar_100_hat: 0.0095
- eval_map_headband, head covering, hair accessory: 0.0
- eval_mar_100_headband, head covering, hair accessory: 0.0009
- eval_map_tie: 0.0
- eval_mar_100_tie: 0.0
- eval_map_glove: 0.0
- eval_mar_100_glove: 0.0
- eval_map_watch: 0.0
- eval_mar_100_watch: 0.0
- eval_map_belt: 0.0
- eval_mar_100_belt: 0.0
- eval_map_leg warmer: 0.0
- eval_mar_100_leg warmer: 0.0
- eval_map_tights, stockings: 0.0
- eval_mar_100_tights, stockings: 0.0
- eval_map_sock: 0.0
- eval_mar_100_sock: 0.0
- eval_map_shoe: 0.0
- eval_mar_100_shoe: 0.0
- eval_map_bag, wallet: 0.0
- eval_mar_100_bag, wallet: 0.0
- eval_map_scarf: 0.0
- eval_mar_100_scarf: 0.0
- eval_map_umbrella: 0.0
- eval_mar_100_umbrella: 0.0
- eval_map_hood: 0.0
- eval_mar_100_hood: 0.0
- eval_map_collar: 0.0
- eval_mar_100_collar: 0.0
- eval_map_lapel: 0.0
- eval_mar_100_lapel: 0.0
- eval_map_epaulette: 0.0
- eval_mar_100_epaulette: 0.0
- eval_map_sleeve: 0.0
- eval_mar_100_sleeve: 0.0
- eval_map_pocket: 0.0
- eval_mar_100_pocket: 0.0
- eval_map_neckline: 0.0
- eval_mar_100_neckline: 0.0
- eval_map_buckle: 0.0
- eval_mar_100_buckle: 0.0
- eval_map_zipper: 0.0
- eval_mar_100_zipper: 0.0
- eval_map_applique: 0.0
- eval_mar_100_applique: 0.0
- eval_map_bead: 0.0
- eval_mar_100_bead: 0.0
- eval_map_bow: 0.0
- eval_mar_100_bow: 0.0
- eval_map_flower: 0.0
- eval_mar_100_flower: 0.0
- eval_map_fringe: 0.0
- eval_mar_100_fringe: 0.0
- eval_map_ribbon: 0.0
- eval_mar_100_ribbon: 0.0
- eval_map_rivet: 0.0
- eval_mar_100_rivet: 0.0
- eval_map_ruffle: 0.0
- eval_mar_100_ruffle: 0.0
- eval_map_sequin: 0.0
- eval_mar_100_sequin: 0.0
- eval_map_tassel: 0.0
- eval_mar_100_tassel: 0.0
- eval_runtime: 212.2722
- eval_samples_per_second: 5.455
- eval_steps_per_second: 1.366
- epoch: 0.0044
- step: 50
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 10000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"shirt, blouse",
"top, t-shirt, sweatshirt",
"sweater",
"cardigan",
"jacket",
"vest",
"pants",
"shorts",
"skirt",
"coat",
"dress",
"jumpsuit",
"cape",
"glasses",
"hat",
"headband, head covering, hair accessory",
"tie",
"glove",
"watch",
"belt",
"leg warmer",
"tights, stockings",
"sock",
"shoe",
"bag, wallet",
"scarf",
"umbrella",
"hood",
"collar",
"lapel",
"epaulette",
"sleeve",
"pocket",
"neckline",
"buckle",
"zipper",
"applique",
"bead",
"bow",
"flower",
"fringe",
"ribbon",
"rivet",
"ruffle",
"sequin",
"tassel"
] |
joheras/detr_finetuned_fruits |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8626
- Map: 0.5447
- Map 50: 0.8282
- Map 75: 0.5821
- Map Small: -1.0
- Map Medium: 0.4675
- Map Large: 0.5734
- Mar 1: 0.4327
- Mar 10: 0.7017
- Mar 100: 0.7589
- Mar Small: -1.0
- Mar Medium: 0.6514
- Mar Large: 0.7795
- Map Banana: 0.4399
- Mar 100 Banana: 0.72
- Map Orange: 0.541
- Mar 100 Orange: 0.7738
- Map Apple: 0.6532
- Mar 100 Apple: 0.7829
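The `Map 50` and `Map 75` metrics above are COCO-style average precision computed at fixed intersection-over-union (IoU) matching thresholds of 0.50 and 0.75. For reference, IoU between two boxes can be computed as follows (a generic sketch for illustration, not code from this training run):

```python
def iou(a, b):
    """Intersection-over-union of two absolute (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half: intersection 50, union 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

A prediction with IoU 0.33 against its ground-truth box would count as a match for neither the 0.50 nor the 0.75 threshold.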
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Banana | Mar 100 Banana | Map Orange | Mar 100 Orange | Map Apple | Mar 100 Apple |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|
| No log | 1.0 | 60 | 2.2905 | 0.008 | 0.0222 | 0.006 | -1.0 | 0.0061 | 0.012 | 0.0871 | 0.1937 | 0.303 | -1.0 | 0.2429 | 0.3256 | 0.0066 | 0.15 | 0.0042 | 0.4048 | 0.0133 | 0.3543 |
| No log | 2.0 | 120 | 1.9265 | 0.0202 | 0.0629 | 0.0071 | -1.0 | 0.119 | 0.023 | 0.091 | 0.236 | 0.396 | -1.0 | 0.3243 | 0.4106 | 0.0238 | 0.415 | 0.0256 | 0.4071 | 0.0111 | 0.3657 |
| No log | 3.0 | 180 | 1.8221 | 0.0309 | 0.0731 | 0.0214 | -1.0 | 0.082 | 0.035 | 0.0877 | 0.241 | 0.4251 | -1.0 | 0.4071 | 0.4275 | 0.0504 | 0.49 | 0.0302 | 0.4738 | 0.0121 | 0.3114 |
| No log | 4.0 | 240 | 1.7172 | 0.0253 | 0.0655 | 0.0111 | -1.0 | 0.0988 | 0.0251 | 0.1424 | 0.258 | 0.4915 | -1.0 | 0.4371 | 0.502 | 0.0303 | 0.5225 | 0.0273 | 0.4548 | 0.0183 | 0.4971 |
| No log | 5.0 | 300 | 1.5541 | 0.0472 | 0.1085 | 0.0305 | -1.0 | 0.0639 | 0.0526 | 0.1869 | 0.3652 | 0.5653 | -1.0 | 0.4014 | 0.5933 | 0.0326 | 0.535 | 0.0777 | 0.6095 | 0.0313 | 0.5514 |
| No log | 6.0 | 360 | 1.5159 | 0.0501 | 0.1145 | 0.0436 | -1.0 | 0.0694 | 0.0556 | 0.2009 | 0.3976 | 0.5542 | -1.0 | 0.38 | 0.5799 | 0.0659 | 0.5725 | 0.0527 | 0.5071 | 0.0318 | 0.5829 |
| No log | 7.0 | 420 | 1.4185 | 0.0775 | 0.1777 | 0.0662 | -1.0 | 0.2007 | 0.0751 | 0.2078 | 0.4237 | 0.5944 | -1.0 | 0.5071 | 0.6137 | 0.0647 | 0.585 | 0.1071 | 0.5952 | 0.0608 | 0.6029 |
| No log | 8.0 | 480 | 1.2902 | 0.0965 | 0.189 | 0.077 | -1.0 | 0.1555 | 0.1161 | 0.2715 | 0.4469 | 0.64 | -1.0 | 0.5186 | 0.66 | 0.0726 | 0.62 | 0.1498 | 0.6286 | 0.0673 | 0.6714 |
| 1.5459 | 9.0 | 540 | 1.2497 | 0.1052 | 0.2137 | 0.1115 | -1.0 | 0.2298 | 0.1295 | 0.294 | 0.4625 | 0.6662 | -1.0 | 0.4914 | 0.6987 | 0.0749 | 0.6025 | 0.1614 | 0.6905 | 0.0794 | 0.7057 |
| 1.5459 | 10.0 | 600 | 1.0677 | 0.141 | 0.2485 | 0.1427 | -1.0 | 0.2822 | 0.1552 | 0.3656 | 0.5481 | 0.7142 | -1.0 | 0.6257 | 0.7329 | 0.0819 | 0.6475 | 0.2168 | 0.7238 | 0.1242 | 0.7714 |
| 1.5459 | 11.0 | 660 | 1.0572 | 0.1813 | 0.3134 | 0.1988 | -1.0 | 0.2859 | 0.2008 | 0.3533 | 0.5777 | 0.7017 | -1.0 | 0.5886 | 0.72 | 0.1098 | 0.665 | 0.2983 | 0.7143 | 0.136 | 0.7257 |
| 1.5459 | 12.0 | 720 | 1.0403 | 0.247 | 0.4247 | 0.2529 | -1.0 | 0.3598 | 0.2663 | 0.348 | 0.5748 | 0.7021 | -1.0 | 0.6286 | 0.7157 | 0.1359 | 0.67 | 0.3934 | 0.7333 | 0.2115 | 0.7029 |
| 1.5459 | 13.0 | 780 | 0.9933 | 0.3205 | 0.5352 | 0.3708 | -1.0 | 0.3999 | 0.3373 | 0.3908 | 0.6208 | 0.7248 | -1.0 | 0.6086 | 0.7447 | 0.1991 | 0.68 | 0.3998 | 0.7429 | 0.3626 | 0.7514 |
| 1.5459 | 14.0 | 840 | 1.0158 | 0.3865 | 0.6502 | 0.4208 | -1.0 | 0.3726 | 0.4172 | 0.3843 | 0.6447 | 0.7184 | -1.0 | 0.5557 | 0.7445 | 0.2549 | 0.6875 | 0.4506 | 0.7333 | 0.454 | 0.7343 |
| 1.5459 | 15.0 | 900 | 0.9649 | 0.4519 | 0.6973 | 0.4866 | -1.0 | 0.4641 | 0.4712 | 0.395 | 0.6727 | 0.7373 | -1.0 | 0.6357 | 0.7575 | 0.2713 | 0.67 | 0.5052 | 0.7619 | 0.5792 | 0.78 |
| 1.5459 | 16.0 | 960 | 0.9148 | 0.491 | 0.7552 | 0.5358 | -1.0 | 0.4674 | 0.5169 | 0.4167 | 0.6903 | 0.7571 | -1.0 | 0.6686 | 0.7776 | 0.3438 | 0.69 | 0.5616 | 0.7786 | 0.5676 | 0.8029 |
| 0.864 | 17.0 | 1020 | 0.8861 | 0.5232 | 0.7871 | 0.571 | -1.0 | 0.5199 | 0.5463 | 0.4387 | 0.6948 | 0.7541 | -1.0 | 0.68 | 0.771 | 0.4007 | 0.7 | 0.5659 | 0.7595 | 0.6029 | 0.8029 |
| 0.864 | 18.0 | 1080 | 0.8914 | 0.5014 | 0.7661 | 0.5433 | -1.0 | 0.4449 | 0.5276 | 0.4245 | 0.6954 | 0.7655 | -1.0 | 0.6286 | 0.79 | 0.4006 | 0.715 | 0.4992 | 0.7643 | 0.6043 | 0.8171 |
| 0.864 | 19.0 | 1140 | 0.8886 | 0.5223 | 0.7763 | 0.5611 | -1.0 | 0.4595 | 0.5492 | 0.4201 | 0.6893 | 0.7473 | -1.0 | 0.6143 | 0.7716 | 0.4002 | 0.69 | 0.5387 | 0.769 | 0.6279 | 0.7829 |
| 0.864 | 20.0 | 1200 | 0.8973 | 0.5239 | 0.8057 | 0.5726 | -1.0 | 0.4437 | 0.5531 | 0.4317 | 0.6917 | 0.7535 | -1.0 | 0.6371 | 0.7758 | 0.4343 | 0.7125 | 0.5406 | 0.7738 | 0.5966 | 0.7743 |
| 0.864 | 21.0 | 1260 | 0.8740 | 0.5355 | 0.8126 | 0.5889 | -1.0 | 0.4869 | 0.5605 | 0.4162 | 0.7055 | 0.7633 | -1.0 | 0.6314 | 0.7856 | 0.4039 | 0.7375 | 0.5735 | 0.7667 | 0.6292 | 0.7857 |
| 0.864 | 22.0 | 1320 | 0.8917 | 0.5212 | 0.7944 | 0.5517 | -1.0 | 0.4609 | 0.549 | 0.423 | 0.6872 | 0.7421 | -1.0 | 0.61 | 0.7657 | 0.4232 | 0.7 | 0.5315 | 0.769 | 0.609 | 0.7571 |
| 0.864 | 23.0 | 1380 | 0.8508 | 0.5508 | 0.8362 | 0.6164 | -1.0 | 0.4879 | 0.5786 | 0.4278 | 0.6983 | 0.753 | -1.0 | 0.6614 | 0.7723 | 0.4453 | 0.71 | 0.5576 | 0.769 | 0.6494 | 0.78 |
| 0.864 | 24.0 | 1440 | 0.8769 | 0.5586 | 0.8358 | 0.6156 | -1.0 | 0.4846 | 0.5886 | 0.4471 | 0.7105 | 0.765 | -1.0 | 0.6586 | 0.787 | 0.4598 | 0.705 | 0.5588 | 0.7786 | 0.6572 | 0.8114 |
| 0.638 | 25.0 | 1500 | 0.8670 | 0.5394 | 0.8271 | 0.5786 | -1.0 | 0.4681 | 0.5667 | 0.425 | 0.7004 | 0.7563 | -1.0 | 0.6514 | 0.7771 | 0.4333 | 0.7075 | 0.5426 | 0.7786 | 0.6422 | 0.7829 |
| 0.638 | 26.0 | 1560 | 0.8487 | 0.5557 | 0.8355 | 0.6103 | -1.0 | 0.4903 | 0.5829 | 0.4353 | 0.709 | 0.7612 | -1.0 | 0.6586 | 0.7812 | 0.4483 | 0.715 | 0.559 | 0.7857 | 0.6596 | 0.7829 |
| 0.638 | 27.0 | 1620 | 0.8585 | 0.5484 | 0.8267 | 0.5888 | -1.0 | 0.4735 | 0.5755 | 0.4318 | 0.7106 | 0.7646 | -1.0 | 0.6586 | 0.7848 | 0.4431 | 0.7225 | 0.5435 | 0.7857 | 0.6587 | 0.7857 |
| 0.638 | 28.0 | 1680 | 0.8668 | 0.5479 | 0.8262 | 0.5865 | -1.0 | 0.471 | 0.5762 | 0.4318 | 0.7051 | 0.763 | -1.0 | 0.6586 | 0.7831 | 0.4414 | 0.72 | 0.5465 | 0.7833 | 0.6556 | 0.7857 |
| 0.638 | 29.0 | 1740 | 0.8631 | 0.5459 | 0.8282 | 0.5962 | -1.0 | 0.4737 | 0.5737 | 0.4319 | 0.7011 | 0.7598 | -1.0 | 0.6586 | 0.7795 | 0.4394 | 0.72 | 0.5405 | 0.7738 | 0.6579 | 0.7857 |
| 0.638 | 30.0 | 1800 | 0.8626 | 0.5447 | 0.8282 | 0.5821 | -1.0 | 0.4675 | 0.5734 | 0.4327 | 0.7017 | 0.7589 | -1.0 | 0.6514 | 0.7795 | 0.4399 | 0.72 | 0.541 | 0.7738 | 0.6532 | 0.7829 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"banana",
"orange",
"apple"
] |
Wilbur1240/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
joheras/yolo_finetuned_fruits |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7836
- Map: 0.5785
- Map 50: 0.8356
- Map 75: 0.6723
- Map Small: -1.0
- Map Medium: 0.5125
- Map Large: 0.605
- Mar 1: 0.4248
- Mar 10: 0.7284
- Mar 100: 0.7686
- Mar Small: -1.0
- Mar Medium: 0.6125
- Mar Large: 0.7829
- Map Banana: 0.448
- Mar 100 Banana: 0.72
- Map Orange: 0.6045
- Mar 100 Orange: 0.7857
- Map Apple: 0.6831
- Mar 100 Apple: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
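For reference, the listed hyperparameters roughly correspond to a `transformers` `TrainingArguments` configuration along these lines (a hedged sketch reconstructed from the list above, not the authors' actual training script; the output directory name is assumed):

```python
from transformers import TrainingArguments

# Assumed mapping of the listed hyperparameters onto the standard API.
args = TrainingArguments(
    output_dir="yolo_finetuned_fruits",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=30,
)
```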
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Banana | Mar 100 Banana | Map Orange | Mar 100 Orange | Map Apple | Mar 100 Apple |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|
| No log | 1.0 | 60 | 2.2392 | 0.0133 | 0.0374 | 0.0065 | -1.0 | 0.0006 | 0.0174 | 0.0367 | 0.1159 | 0.2228 | -1.0 | 0.075 | 0.2375 | 0.0033 | 0.295 | 0.0055 | 0.019 | 0.031 | 0.3543 |
| No log | 2.0 | 120 | 1.8045 | 0.0433 | 0.094 | 0.035 | -1.0 | 0.0841 | 0.0463 | 0.1148 | 0.2667 | 0.4661 | -1.0 | 0.3708 | 0.4806 | 0.0131 | 0.425 | 0.0335 | 0.419 | 0.0834 | 0.5543 |
| No log | 3.0 | 180 | 1.7343 | 0.0758 | 0.1809 | 0.0542 | -1.0 | 0.0666 | 0.0765 | 0.1559 | 0.3357 | 0.473 | -1.0 | 0.3708 | 0.4901 | 0.0802 | 0.39 | 0.0401 | 0.4548 | 0.107 | 0.5743 |
| No log | 4.0 | 240 | 1.5930 | 0.0667 | 0.1545 | 0.0477 | -1.0 | 0.0345 | 0.0729 | 0.1339 | 0.3051 | 0.4819 | -1.0 | 0.2167 | 0.5061 | 0.0823 | 0.4875 | 0.0565 | 0.3524 | 0.0614 | 0.6057 |
| No log | 5.0 | 300 | 1.4399 | 0.08 | 0.1519 | 0.0659 | -1.0 | 0.0812 | 0.0899 | 0.1599 | 0.327 | 0.5297 | -1.0 | 0.35 | 0.5466 | 0.0811 | 0.4925 | 0.0724 | 0.4595 | 0.0867 | 0.6371 |
| No log | 6.0 | 360 | 1.2057 | 0.1493 | 0.2472 | 0.1804 | -1.0 | 0.1378 | 0.1618 | 0.2595 | 0.4663 | 0.6235 | -1.0 | 0.3542 | 0.6502 | 0.0964 | 0.5825 | 0.1548 | 0.6167 | 0.1967 | 0.6714 |
| No log | 7.0 | 420 | 1.1930 | 0.2454 | 0.4068 | 0.2628 | -1.0 | 0.1931 | 0.2652 | 0.2975 | 0.4886 | 0.6008 | -1.0 | 0.3625 | 0.6243 | 0.1301 | 0.53 | 0.2107 | 0.5952 | 0.3953 | 0.6771 |
| No log | 8.0 | 480 | 1.1520 | 0.3021 | 0.5017 | 0.3603 | -1.0 | 0.2696 | 0.3272 | 0.3091 | 0.5556 | 0.6268 | -1.0 | 0.4083 | 0.6477 | 0.136 | 0.57 | 0.2458 | 0.5905 | 0.5244 | 0.72 |
| 1.4531 | 9.0 | 540 | 1.0371 | 0.3781 | 0.5892 | 0.4062 | -1.0 | 0.3088 | 0.3964 | 0.3496 | 0.6028 | 0.6662 | -1.0 | 0.3958 | 0.6901 | 0.2285 | 0.63 | 0.3607 | 0.6429 | 0.5451 | 0.7257 |
| 1.4531 | 10.0 | 600 | 1.0391 | 0.3811 | 0.6249 | 0.4312 | -1.0 | 0.2525 | 0.4061 | 0.3532 | 0.6144 | 0.6606 | -1.0 | 0.4167 | 0.6837 | 0.2649 | 0.625 | 0.2871 | 0.631 | 0.5912 | 0.7257 |
| 1.4531 | 11.0 | 660 | 0.9947 | 0.4314 | 0.6884 | 0.4616 | -1.0 | 0.2102 | 0.4734 | 0.3681 | 0.6204 | 0.678 | -1.0 | 0.4 | 0.7046 | 0.2683 | 0.6025 | 0.449 | 0.7 | 0.5768 | 0.7314 |
| 1.4531 | 12.0 | 720 | 1.0551 | 0.4382 | 0.7558 | 0.4724 | -1.0 | 0.2711 | 0.4696 | 0.339 | 0.6118 | 0.6658 | -1.0 | 0.475 | 0.6833 | 0.2939 | 0.6325 | 0.4729 | 0.6762 | 0.5477 | 0.6886 |
| 1.4531 | 13.0 | 780 | 0.9251 | 0.4752 | 0.7361 | 0.5321 | -1.0 | 0.3079 | 0.5056 | 0.3823 | 0.6394 | 0.7055 | -1.0 | 0.4667 | 0.7265 | 0.333 | 0.6375 | 0.4894 | 0.6905 | 0.6033 | 0.7886 |
| 1.4531 | 14.0 | 840 | 0.8957 | 0.4906 | 0.7363 | 0.5688 | -1.0 | 0.34 | 0.5195 | 0.3813 | 0.6715 | 0.7187 | -1.0 | 0.5208 | 0.7345 | 0.3125 | 0.66 | 0.52 | 0.7333 | 0.6394 | 0.7629 |
| 1.4531 | 15.0 | 900 | 0.9153 | 0.4978 | 0.7646 | 0.5708 | -1.0 | 0.41 | 0.5297 | 0.401 | 0.6679 | 0.7131 | -1.0 | 0.5708 | 0.7275 | 0.3437 | 0.6275 | 0.5364 | 0.7548 | 0.6133 | 0.7571 |
| 1.4531 | 16.0 | 960 | 0.8663 | 0.5276 | 0.7993 | 0.576 | -1.0 | 0.3697 | 0.5634 | 0.4088 | 0.6738 | 0.7315 | -1.0 | 0.525 | 0.7493 | 0.3965 | 0.675 | 0.5225 | 0.731 | 0.6638 | 0.7886 |
| 0.7981 | 17.0 | 1020 | 0.8745 | 0.5359 | 0.8136 | 0.5912 | -1.0 | 0.3684 | 0.5684 | 0.4217 | 0.6903 | 0.7463 | -1.0 | 0.5458 | 0.765 | 0.3881 | 0.68 | 0.5621 | 0.7762 | 0.6575 | 0.7829 |
| 0.7981 | 18.0 | 1080 | 0.8692 | 0.5375 | 0.814 | 0.6356 | -1.0 | 0.4627 | 0.5653 | 0.4139 | 0.6979 | 0.7461 | -1.0 | 0.6083 | 0.76 | 0.3799 | 0.6825 | 0.5793 | 0.7786 | 0.6532 | 0.7771 |
| 0.7981 | 19.0 | 1140 | 0.8285 | 0.5488 | 0.8236 | 0.6288 | -1.0 | 0.4448 | 0.5802 | 0.4215 | 0.7103 | 0.7608 | -1.0 | 0.6542 | 0.7699 | 0.4209 | 0.7175 | 0.574 | 0.7762 | 0.6513 | 0.7886 |
| 0.7981 | 20.0 | 1200 | 0.8036 | 0.5544 | 0.8123 | 0.6339 | -1.0 | 0.4699 | 0.5869 | 0.4227 | 0.7209 | 0.7735 | -1.0 | 0.625 | 0.7859 | 0.4012 | 0.7175 | 0.5806 | 0.8 | 0.6815 | 0.8029 |
| 0.7981 | 21.0 | 1260 | 0.8163 | 0.5546 | 0.8194 | 0.6187 | -1.0 | 0.4976 | 0.5843 | 0.426 | 0.7134 | 0.7648 | -1.0 | 0.6083 | 0.781 | 0.3824 | 0.6925 | 0.6011 | 0.8048 | 0.6803 | 0.7971 |
| 0.7981 | 22.0 | 1320 | 0.8323 | 0.5608 | 0.8266 | 0.6316 | -1.0 | 0.5279 | 0.5848 | 0.4161 | 0.711 | 0.7573 | -1.0 | 0.6083 | 0.7706 | 0.4091 | 0.6975 | 0.5902 | 0.7857 | 0.6831 | 0.7886 |
| 0.7981 | 23.0 | 1380 | 0.8178 | 0.5621 | 0.83 | 0.6621 | -1.0 | 0.4861 | 0.5881 | 0.4194 | 0.7124 | 0.7578 | -1.0 | 0.6125 | 0.7707 | 0.4356 | 0.71 | 0.5775 | 0.7833 | 0.6733 | 0.78 |
| 0.7981 | 24.0 | 1440 | 0.8000 | 0.5615 | 0.8331 | 0.66 | -1.0 | 0.5107 | 0.5872 | 0.4135 | 0.7153 | 0.7615 | -1.0 | 0.5917 | 0.7765 | 0.4259 | 0.725 | 0.5974 | 0.7738 | 0.6611 | 0.7857 |
| 0.5872 | 25.0 | 1500 | 0.7918 | 0.5691 | 0.8323 | 0.6611 | -1.0 | 0.5043 | 0.5945 | 0.4271 | 0.7258 | 0.7671 | -1.0 | 0.6 | 0.7824 | 0.4274 | 0.7175 | 0.5935 | 0.781 | 0.6863 | 0.8029 |
| 0.5872 | 26.0 | 1560 | 0.7879 | 0.5846 | 0.839 | 0.674 | -1.0 | 0.4845 | 0.611 | 0.4234 | 0.7313 | 0.7656 | -1.0 | 0.6208 | 0.7789 | 0.457 | 0.7125 | 0.6081 | 0.7786 | 0.6888 | 0.8057 |
| 0.5872 | 27.0 | 1620 | 0.7810 | 0.5793 | 0.8423 | 0.664 | -1.0 | 0.485 | 0.6038 | 0.4285 | 0.7251 | 0.7736 | -1.0 | 0.6167 | 0.7865 | 0.4498 | 0.735 | 0.6025 | 0.7857 | 0.6857 | 0.8 |
| 0.5872 | 28.0 | 1680 | 0.7838 | 0.5779 | 0.8359 | 0.6719 | -1.0 | 0.5125 | 0.6044 | 0.424 | 0.7256 | 0.7666 | -1.0 | 0.6125 | 0.7803 | 0.4494 | 0.725 | 0.6017 | 0.7833 | 0.6827 | 0.7914 |
| 0.5872 | 29.0 | 1740 | 0.7841 | 0.5776 | 0.8363 | 0.6718 | -1.0 | 0.5125 | 0.604 | 0.4248 | 0.7276 | 0.7678 | -1.0 | 0.6125 | 0.782 | 0.4479 | 0.72 | 0.6019 | 0.7833 | 0.6829 | 0.8 |
| 0.5872 | 30.0 | 1800 | 0.7836 | 0.5785 | 0.8356 | 0.6723 | -1.0 | 0.5125 | 0.605 | 0.4248 | 0.7284 | 0.7686 | -1.0 | 0.6125 | 0.7829 | 0.448 | 0.72 | 0.6045 | 0.7857 | 0.6831 | 0.8 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"banana",
"orange",
"apple"
] |
Eric0804/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
lee-910530/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
AdamShih/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
hsinyen5/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
bnina-ayoub/finetuned-ViT-model |
# finetuned-ViT-model
This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on the [Hard Hat Dataset](https://huggingface.co/datasets/hf-vision/hardhat).
It achieves the following results on the evaluation set:
- Loss: 0.9937
## Model description
This model is a demonstration project for the Hugging Face Certification assignment and was created for educational purposes.
It is a fine-tuned Vision Transformer (ViT) for object detection, specifically trained to detect hard hats, heads, and people in images. It uses the `facebook/detr-resnet-50-dc5` checkpoint as a base and is further trained on the `hf-vision/hardhat` dataset.
The model leverages the transformer architecture to process image patches and predict bounding boxes and labels for the objects of interest.
## Intended uses & limitations
- **Intended Uses:** This model can be used to demonstrate object detection with ViT. It can potentially be used in safety applications to identify individuals wearing or not wearing hardhats in construction sites or industrial environments.
- **Limitations:** This model has been limitedly trained and may not generalize well to images with significantly different characteristics, viewpoints, or lighting conditions. It is not intended for production use without further evaluation and validation.
## Training and evaluation data
- **Dataset:** The model was trained on the `hf-vision/hardhat` dataset from Hugging Face Datasets. This dataset contains images of construction sites and industrial settings with annotations for hardhats, heads, and people.
- **Data splits:** The dataset is divided into "train" and "test" splits.
- **Data augmentation:** Data augmentation was applied during training with `albumentations` to improve model generalization; the transforms included random horizontal flipping and random brightness/contrast adjustments.
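Horizontal flipping has to be applied to the box annotations as well as the pixels. A minimal sketch of the coordinate math for COCO-style `[x, y, w, h]` boxes (pure Python; `albumentations` handles this internally when `bbox_params` is configured):

```python
def hflip_box(box, image_width):
    """Mirror a COCO-format [x, y, w, h] box across the vertical axis."""
    x, y, w, h = box
    # The new left edge is the old right edge reflected: W - (x + w).
    return [image_width - x - w, y, w, h]

# A 50-px-wide box starting at x=10 in a 640-px image moves to x=580.
print(hflip_box([10, 20, 50, 80], 640))  # [580, 20, 50, 80]
```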
## Training procedure
- **Base model:** The model was initialized from the `facebook/detr-resnet-50-dc5` checkpoint, a pre-trained DETR model with a ResNet-50 backbone.
- **Fine-tuning:** The model was fine-tuned using the Hugging Face `Trainer` with the following hyperparameters:
- Learning rate: 1e-6
- Weight decay: 1e-4
- Batch size: 1
- Epochs: 3
- Max steps: 2500
- Optimizer: AdamW
- **Evaluation:** The model was evaluated on the test set using standard object detection metrics, including COCO metrics (Average Precision, Average Recall).
- **Hardware:** Training was performed on Google Colab using GPU acceleration.
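The COCO metrics mentioned above are all built on intersection-over-union between predicted and ground-truth boxes. A minimal IoU sketch for `[x0, y0, x1, y1]` corner-format boxes (illustrative only, not the `pycocotools` implementation):

```python
def iou(a, b):
    """Intersection-over-union of two [x0, y0, x1, y1] boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 2x2 squares offset by one unit overlap in a 1x1 region: IoU = 1/7.
print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 0.14285714285714285
```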
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.50.1
- Pytorch 2.5.1+cu121
- Datasets 3.4.1
- Tokenizers 0.21.0 | [
"head",
"helmet",
"person"
] |
rjhugs/modelStructure_TT_V4 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelStructure_TT_V4
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition-v1.1-all](https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.50.1
- Pytorch 2.1.2+cu118
- Datasets 2.12.0
- Tokenizers 0.21.1
| [
"table",
"table column header",
"table column"
] |
TowardsUtopia/detr-finetuned-historic-v2 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15"
] |
ustc-community/dfine-large-obj2coco-e25 | ## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).
This is the HF `transformers` implementation of D-FINE.
- `_coco` → model trained on COCO
- `_obj365` → model trained on Objects365
- `_obj2coco` → model trained on Objects365 and then fine-tuned on COCO
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-large-obj2coco-e25")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-large-obj2coco-e25")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
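DETR-family models predict boxes in normalized `(cx, cy, w, h)` center format; `post_process_object_detection` rescales them to absolute pixel corners and drops low-confidence predictions. A rough sketch of the geometry involved (pure Python with illustrative helper names; the real implementation operates on batched tensors):

```python
def to_pixel_corners(box, width, height):
    """Convert a normalized (cx, cy, w, h) box to absolute (x0, y0, x1, y1)."""
    cx, cy, w, h = box
    return ((cx - w / 2) * width, (cy - h / 2) * height,
            (cx + w / 2) * width, (cy + h / 2) * height)

def filter_by_score(scores_boxes, threshold=0.3):
    """Keep only detections whose confidence exceeds the threshold."""
    return [(s, b) for s, b in scores_boxes if s > threshold]

# A centered box covering half the image in each dimension.
print(to_pixel_corners((0.5, 0.5, 0.5, 0.5), 640, 480))  # (160.0, 120.0, 480.0, 360.0)
```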
### **Training**
D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on COCO val2017. We report the standard AP metric (averaged over uniformly sampled IoU thresholds from 0.50 to 0.95 with a step size of 0.05) and APval5000, which is commonly used in real-world scenarios.
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, ensuring high accuracy and speed in dynamic, real-world environments. | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
ustc-community/dfine-medium-obj2coco | ## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).
This is the HF `transformers` implementation of D-FINE.
- `_coco` → model trained on COCO
- `_obj365` → model trained on Objects365
- `_obj2coco` → model trained on Objects365 and then fine-tuned on COCO
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-medium-obj2coco")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-medium-obj2coco")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
### **Training**
D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on COCO val2017. We report the standard AP metric (averaged over uniformly sampled IoU thresholds from 0.50 to 0.95 with a step size of 0.05) and APval5000, which is commonly used in real-world scenarios.
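The headline COCO AP averages per-threshold average precision over the IoU thresholds 0.50, 0.55, …, 0.95. A minimal sketch of that averaging, assuming the per-threshold AP values are already computed (illustrative only, not the `pycocotools` evaluator):

```python
def coco_ap(ap_at_threshold):
    """Average AP over IoU thresholds 0.50:0.05:0.95, COCO-style."""
    thresholds = [0.5 + 0.05 * i for i in range(10)]  # 0.50 .. 0.95
    return sum(ap_at_threshold(t) for t in thresholds) / len(thresholds)

# With a constant per-threshold AP, the mean is that same value.
print(coco_ap(lambda t: 1.0))  # 1.0
```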
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, ensuring high accuracy and speed in dynamic, real-world environments. | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
ustc-community/dfine-small-obj2coco | ## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).
This is the HF `transformers` implementation of D-FINE.
- `_coco` → model trained on COCO
- `_obj365` → model trained on Objects365
- `_obj2coco` → model trained on Objects365 and then fine-tuned on COCO
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-small-obj2coco")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-small-obj2coco")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
### **Training**
D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on COCO val2017. We report the standard AP metric (averaged over uniformly sampled IoU thresholds from 0.50 to 0.95 with a step size of 0.05) and APval5000, which is commonly used in real-world scenarios.
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, ensuring high accuracy and speed in dynamic, real-world environments. | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
ustc-community/dfine-nano-coco | ## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).
This is the HF `transformers` implementation of D-FINE.
- `_coco` → model trained on COCO
- `_obj365` → model trained on Objects365
- `_obj2coco` → model trained on Objects365 and then fine-tuned on COCO
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-nano-coco")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-nano-coco")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
### **Training**
D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on COCO val2017. We report the standard AP metric (averaged over uniformly sampled IoU thresholds from 0.50 to 0.95 with a step size of 0.05) and APval5000, which is commonly used in real-world scenarios.
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, ensuring high accuracy and speed in dynamic, real-world environments. | [
"person",
"bicycle",
"car",
"motorbike",
"aeroplane",
"bus",
"train",
"truck",
"boat",
"traffic light",
"fire hydrant",
"stop sign",
"parking meter",
"bench",
"bird",
"cat",
"dog",
"horse",
"sheep",
"cow",
"elephant",
"bear",
"zebra",
"giraffe",
"backpack",
"umbrella",
"handbag",
"tie",
"suitcase",
"frisbee",
"skis",
"snowboard",
"sports ball",
"kite",
"baseball bat",
"baseball glove",
"skateboard",
"surfboard",
"tennis racket",
"bottle",
"wine glass",
"cup",
"fork",
"knife",
"spoon",
"bowl",
"banana",
"apple",
"sandwich",
"orange",
"broccoli",
"carrot",
"hot dog",
"pizza",
"donut",
"cake",
"chair",
"sofa",
"pottedplant",
"bed",
"diningtable",
"toilet",
"tvmonitor",
"laptop",
"mouse",
"remote",
"keyboard",
"cell phone",
"microwave",
"oven",
"toaster",
"sink",
"refrigerator",
"book",
"clock",
"vase",
"scissors",
"teddy bear",
"hair drier",
"toothbrush"
] |
ustc-community/dfine-xlarge-obj365 | ## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).
This is the HF `transformers` implementation of D-FINE.
- `_coco` → model trained on COCO
- `_obj365` → model trained on Objects365
- `_obj2coco` → model trained on Objects365 and then fine-tuned on COCO
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).


### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-xlarge-obj365")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-xlarge-obj365")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
### **Training**
D-FINE is trained on the COCO and Objects365 (Lin et al. [2014]) train2017 splits and validated on the COCO and Objects365 val2017 sets. We report the standard AP metric (averaged over uniformly sampled IoU thresholds from 0.50 to 0.95 with a step size of 0.05) and APval5000, which is commonly used in real-world scenarios.
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, ensuring high accuracy and speed in dynamic, real-world environments. | [
"none",
"person",
"sneakers",
"chair",
"other shoes",
"hat",
"car",
"lamp",
"glasses",
"bottle",
"desk",
"cup",
"street lights",
"cabinet/shelf",
"handbag/satchel",
"bracelet",
"plate",
"picture/frame",
"helmet",
"book",
"gloves",
"storage box",
"boat",
"leather shoes",
"flower",
"bench",
"potted plant",
"bowl/basin",
"flag",
"pillow",
"boots",
"vase",
"microphone",
"necklace",
"ring",
"suv",
"wine glass",
"belt",
"monitor/tv",
"backpack",
"umbrella",
"traffic light",
"speaker",
"watch",
"tie",
"trash bin can",
"slippers",
"bicycle",
"stool",
"barrel/bucket",
"van",
"couch",
"sandals",
"basket",
"drum",
"pen/pencil",
"bus",
"wild bird",
"high heels",
"motorcycle",
"guitar",
"carpet",
"cell phone",
"bread",
"camera",
"canned",
"truck",
"traffic cone",
"cymbal",
"lifesaver",
"towel",
"stuffed toy",
"candle",
"sailboat",
"laptop",
"awning",
"bed",
"faucet",
"tent",
"horse",
"mirror",
"power outlet",
"sink",
"apple",
"air conditioner",
"knife",
"hockey stick",
"paddle",
"pickup truck",
"fork",
"traffic sign",
"balloon",
"tripod",
"dog",
"spoon",
"clock",
"pot",
"cow",
"cake",
"dinning table",
"sheep",
"hanger",
"blackboard/whiteboard",
"napkin",
"other fish",
"orange/tangerine",
"toiletry",
"keyboard",
"tomato",
"lantern",
"machinery vehicle",
"fan",
"green vegetables",
"banana",
"baseball glove",
"airplane",
"mouse",
"train",
"pumpkin",
"soccer",
"skiboard",
"luggage",
"nightstand",
"tea pot",
"telephone",
"trolley",
"head phone",
"sports car",
"stop sign",
"dessert",
"scooter",
"stroller",
"crane",
"remote",
"refrigerator",
"oven",
"lemon",
"duck",
"baseball bat",
"surveillance camera",
"cat",
"jug",
"broccoli",
"piano",
"pizza",
"elephant",
"skateboard",
"surfboard",
"gun",
"skating and skiing shoes",
"gas stove",
"donut",
"bow tie",
"carrot",
"toilet",
"kite",
"strawberry",
"other balls",
"shovel",
"pepper",
"computer box",
"toilet paper",
"cleaning products",
"chopsticks",
"microwave",
"pigeon",
"baseball",
"cutting/chopping board",
"coffee table",
"side table",
"scissors",
"marker",
"pie",
"ladder",
"snowboard",
"cookies",
"radiator",
"fire hydrant",
"basketball",
"zebra",
"grape",
"giraffe",
"potato",
"sausage",
"tricycle",
"violin",
"egg",
"fire extinguisher",
"candy",
"fire truck",
"billiards",
"converter",
"bathtub",
"wheelchair",
"golf club",
"briefcase",
"cucumber",
"cigar/cigarette",
"paint brush",
"pear",
"heavy truck",
"hamburger",
"extractor",
"extension cord",
"tong",
"tennis racket",
"folder",
"american football",
"earphone",
"mask",
"kettle",
"tennis",
"ship",
"swing",
"coffee machine",
"slide",
"carriage",
"onion",
"green beans",
"projector",
"frisbee",
"washing machine/drying machine",
"chicken",
"printer",
"watermelon",
"saxophone",
"tissue",
"toothbrush",
"ice cream",
"hot-air balloon",
"cello",
"french fries",
"scale",
"trophy",
"cabbage",
"hot dog",
"blender",
"peach",
"rice",
"wallet/purse",
"volleyball",
"deer",
"goose",
"tape",
"tablet",
"cosmetics",
"trumpet",
"pineapple",
"golf ball",
"ambulance",
"parking meter",
"mango",
"key",
"hurdle",
"fishing rod",
"medal",
"flute",
"brush",
"penguin",
"megaphone",
"corn",
"lettuce",
"garlic",
"swan",
"helicopter",
"green onion",
"sandwich",
"nuts",
"speed limit sign",
"induction cooker",
"broom",
"trombone",
"plum",
"rickshaw",
"goldfish",
"kiwi fruit",
"router/modem",
"poker card",
"toaster",
"shrimp",
"sushi",
"cheese",
"notepaper",
"cherry",
"pliers",
"cd",
"pasta",
"hammer",
"cue",
"avocado",
"hamimelon",
"flask",
"mushroom",
"screwdriver",
"soap",
"recorder",
"bear",
"eggplant",
"board eraser",
"coconut",
"tape measure/ruler",
"pig",
"showerhead",
"globe",
"chips",
"steak",
"crosswalk sign",
"stapler",
"camel",
"formula 1",
"pomegranate",
"dishwasher",
"crab",
"hoverboard",
"meat ball",
"rice cooker",
"tuba",
"calculator",
"papaya",
"antelope",
"parrot",
"seal",
"butterfly",
"dumbbell",
"donkey",
"lion",
"urinal",
"dolphin",
"electric drill",
"hair dryer",
"egg tart",
"jellyfish",
"treadmill",
"lighter",
"grapefruit",
"game board",
"mop",
"radish",
"baozi",
"target",
"french",
"spring rolls",
"monkey",
"rabbit",
"pencil case",
"yak",
"red cabbage",
"binoculars",
"asparagus",
"barbell",
"scallop",
"noddles",
"comb",
"dumpling",
"oyster",
"table tennis paddle",
"cosmetics brush/eyeliner pencil",
"chainsaw",
"eraser",
"lobster",
"durian",
"okra",
"lipstick",
"cosmetics mirror",
"curling",
"table tennis"
] |
ustc-community/dfine-medium-obj365 | ## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).
This is the HF `transformers` implementation of D-FINE.
- `_coco` → model trained on COCO
- `_obj365` → model trained on Objects365
- `_obj2coco` → model trained on Objects365 and then fine-tuned on COCO
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).


### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-medium-obj365")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-medium-obj365")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
### **Training**
D-FINE is trained on the COCO (Lin et al. [2014]) and Objects365 train2017 splits and validated on the COCO and Objects365 val2017 splits. We report the standard AP metric (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05) and APval5000, which is commonly used in real-world scenarios.
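The AP averaging described above can be sketched in a few lines; the per-threshold AP values below are placeholders for illustration only, not measured results for this model:

```python
# IoU thresholds 0.50:0.95 with a step of 0.05 (ten thresholds in total)
iou_thresholds = [round(0.50 + 0.05 * i, 2) for i in range(10)]

# Hypothetical per-threshold AP values, for illustration only: AP typically
# drops as the IoU threshold gets stricter
ap_per_threshold = [0.75, 0.70, 0.65, 0.60, 0.55, 0.50, 0.45, 0.40, 0.35, 0.30]

# The reported AP is the mean over all ten IoU thresholds
mean_ap = sum(ap_per_threshold) / len(ap_per_threshold)
print(f"AP@[0.50:0.95] = {mean_ap:.4f}")
```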
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, ensuring high accuracy and speed in dynamic, real-world environments. | [
"none",
"person",
"sneakers",
"chair",
"other shoes",
"hat",
"car",
"lamp",
"glasses",
"bottle",
"desk",
"cup",
"street lights",
"cabinet/shelf",
"handbag/satchel",
"bracelet",
"plate",
"picture/frame",
"helmet",
"book",
"gloves",
"storage box",
"boat",
"leather shoes",
"flower",
"bench",
"potted plant",
"bowl/basin",
"flag",
"pillow",
"boots",
"vase",
"microphone",
"necklace",
"ring",
"suv",
"wine glass",
"belt",
"monitor/tv",
"backpack",
"umbrella",
"traffic light",
"speaker",
"watch",
"tie",
"trash bin can",
"slippers",
"bicycle",
"stool",
"barrel/bucket",
"van",
"couch",
"sandals",
"basket",
"drum",
"pen/pencil",
"bus",
"wild bird",
"high heels",
"motorcycle",
"guitar",
"carpet",
"cell phone",
"bread",
"camera",
"canned",
"truck",
"traffic cone",
"cymbal",
"lifesaver",
"towel",
"stuffed toy",
"candle",
"sailboat",
"laptop",
"awning",
"bed",
"faucet",
"tent",
"horse",
"mirror",
"power outlet",
"sink",
"apple",
"air conditioner",
"knife",
"hockey stick",
"paddle",
"pickup truck",
"fork",
"traffic sign",
"balloon",
"tripod",
"dog",
"spoon",
"clock",
"pot",
"cow",
"cake",
"dinning table",
"sheep",
"hanger",
"blackboard/whiteboard",
"napkin",
"other fish",
"orange/tangerine",
"toiletry",
"keyboard",
"tomato",
"lantern",
"machinery vehicle",
"fan",
"green vegetables",
"banana",
"baseball glove",
"airplane",
"mouse",
"train",
"pumpkin",
"soccer",
"skiboard",
"luggage",
"nightstand",
"tea pot",
"telephone",
"trolley",
"head phone",
"sports car",
"stop sign",
"dessert",
"scooter",
"stroller",
"crane",
"remote",
"refrigerator",
"oven",
"lemon",
"duck",
"baseball bat",
"surveillance camera",
"cat",
"jug",
"broccoli",
"piano",
"pizza",
"elephant",
"skateboard",
"surfboard",
"gun",
"skating and skiing shoes",
"gas stove",
"donut",
"bow tie",
"carrot",
"toilet",
"kite",
"strawberry",
"other balls",
"shovel",
"pepper",
"computer box",
"toilet paper",
"cleaning products",
"chopsticks",
"microwave",
"pigeon",
"baseball",
"cutting/chopping board",
"coffee table",
"side table",
"scissors",
"marker",
"pie",
"ladder",
"snowboard",
"cookies",
"radiator",
"fire hydrant",
"basketball",
"zebra",
"grape",
"giraffe",
"potato",
"sausage",
"tricycle",
"violin",
"egg",
"fire extinguisher",
"candy",
"fire truck",
"billiards",
"converter",
"bathtub",
"wheelchair",
"golf club",
"briefcase",
"cucumber",
"cigar/cigarette",
"paint brush",
"pear",
"heavy truck",
"hamburger",
"extractor",
"extension cord",
"tong",
"tennis racket",
"folder",
"american football",
"earphone",
"mask",
"kettle",
"tennis",
"ship",
"swing",
"coffee machine",
"slide",
"carriage",
"onion",
"green beans",
"projector",
"frisbee",
"washing machine/drying machine",
"chicken",
"printer",
"watermelon",
"saxophone",
"tissue",
"toothbrush",
"ice cream",
"hot-air balloon",
"cello",
"french fries",
"scale",
"trophy",
"cabbage",
"hot dog",
"blender",
"peach",
"rice",
"wallet/purse",
"volleyball",
"deer",
"goose",
"tape",
"tablet",
"cosmetics",
"trumpet",
"pineapple",
"golf ball",
"ambulance",
"parking meter",
"mango",
"key",
"hurdle",
"fishing rod",
"medal",
"flute",
"brush",
"penguin",
"megaphone",
"corn",
"lettuce",
"garlic",
"swan",
"helicopter",
"green onion",
"sandwich",
"nuts",
"speed limit sign",
"induction cooker",
"broom",
"trombone",
"plum",
"rickshaw",
"goldfish",
"kiwi fruit",
"router/modem",
"poker card",
"toaster",
"shrimp",
"sushi",
"cheese",
"notepaper",
"cherry",
"pliers",
"cd",
"pasta",
"hammer",
"cue",
"avocado",
"hamimelon",
"flask",
"mushroom",
"screwdriver",
"soap",
"recorder",
"bear",
"eggplant",
"board eraser",
"coconut",
"tape measure/ruler",
"pig",
"showerhead",
"globe",
"chips",
"steak",
"crosswalk sign",
"stapler",
"camel",
"formula 1",
"pomegranate",
"dishwasher",
"crab",
"hoverboard",
"meat ball",
"rice cooker",
"tuba",
"calculator",
"papaya",
"antelope",
"parrot",
"seal",
"butterfly",
"dumbbell",
"donkey",
"lion",
"urinal",
"dolphin",
"electric drill",
"hair dryer",
"egg tart",
"jellyfish",
"treadmill",
"lighter",
"grapefruit",
"game board",
"mop",
"radish",
"baozi",
"target",
"french",
"spring rolls",
"monkey",
"rabbit",
"pencil case",
"yak",
"red cabbage",
"binoculars",
"asparagus",
"barbell",
"scallop",
"noddles",
"comb",
"dumpling",
"oyster",
"table tennis paddle",
"cosmetics brush/eyeliner pencil",
"chainsaw",
"eraser",
"lobster",
"durian",
"okra",
"lipstick",
"cosmetics mirror",
"curling",
"table tennis"
] |
ustc-community/dfine-large-obj365 | ## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).
This is the HF Transformers implementation of D-FINE.
- `_coco` -> model trained on COCO
- `_obj365` -> model trained on Objects365
- `_obj2coco` -> model trained on Objects365 and then fine-tuned on COCO
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).


### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-large-obj365")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-large-obj365")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
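For readers curious what `post_process_object_detection` does with `target_sizes`, here is a simplified, model-independent sketch of the underlying box rescaling: DETR-family models predict normalized center-format boxes, which are converted to absolute corner coordinates. The image size and box values are illustrative:

```python
def rescale_box(cx, cy, w, h, img_h, img_w):
    """Convert a normalized (cx, cy, w, h) box to absolute (x1, y1, x2, y2) pixels."""
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return x1, y1, x2, y2

# A box centered in a 640x480 image, covering 20% of the width and 40% of the height
print(rescale_box(0.5, 0.5, 0.2, 0.4, img_h=480, img_w=640))
```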
### **Training**
D-FINE is trained on the COCO (Lin et al. [2014]) and Objects365 train2017 splits and validated on the COCO and Objects365 val2017 splits. We report the standard AP metric (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05) and APval5000, which is commonly used in real-world scenarios.
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, ensuring high accuracy and speed in dynamic, real-world environments. | [
"none",
"person",
"sneakers",
"chair",
"other shoes",
"hat",
"car",
"lamp",
"glasses",
"bottle",
"desk",
"cup",
"street lights",
"cabinet/shelf",
"handbag/satchel",
"bracelet",
"plate",
"picture/frame",
"helmet",
"book",
"gloves",
"storage box",
"boat",
"leather shoes",
"flower",
"bench",
"potted plant",
"bowl/basin",
"flag",
"pillow",
"boots",
"vase",
"microphone",
"necklace",
"ring",
"suv",
"wine glass",
"belt",
"monitor/tv",
"backpack",
"umbrella",
"traffic light",
"speaker",
"watch",
"tie",
"trash bin can",
"slippers",
"bicycle",
"stool",
"barrel/bucket",
"van",
"couch",
"sandals",
"basket",
"drum",
"pen/pencil",
"bus",
"wild bird",
"high heels",
"motorcycle",
"guitar",
"carpet",
"cell phone",
"bread",
"camera",
"canned",
"truck",
"traffic cone",
"cymbal",
"lifesaver",
"towel",
"stuffed toy",
"candle",
"sailboat",
"laptop",
"awning",
"bed",
"faucet",
"tent",
"horse",
"mirror",
"power outlet",
"sink",
"apple",
"air conditioner",
"knife",
"hockey stick",
"paddle",
"pickup truck",
"fork",
"traffic sign",
"balloon",
"tripod",
"dog",
"spoon",
"clock",
"pot",
"cow",
"cake",
"dinning table",
"sheep",
"hanger",
"blackboard/whiteboard",
"napkin",
"other fish",
"orange/tangerine",
"toiletry",
"keyboard",
"tomato",
"lantern",
"machinery vehicle",
"fan",
"green vegetables",
"banana",
"baseball glove",
"airplane",
"mouse",
"train",
"pumpkin",
"soccer",
"skiboard",
"luggage",
"nightstand",
"tea pot",
"telephone",
"trolley",
"head phone",
"sports car",
"stop sign",
"dessert",
"scooter",
"stroller",
"crane",
"remote",
"refrigerator",
"oven",
"lemon",
"duck",
"baseball bat",
"surveillance camera",
"cat",
"jug",
"broccoli",
"piano",
"pizza",
"elephant",
"skateboard",
"surfboard",
"gun",
"skating and skiing shoes",
"gas stove",
"donut",
"bow tie",
"carrot",
"toilet",
"kite",
"strawberry",
"other balls",
"shovel",
"pepper",
"computer box",
"toilet paper",
"cleaning products",
"chopsticks",
"microwave",
"pigeon",
"baseball",
"cutting/chopping board",
"coffee table",
"side table",
"scissors",
"marker",
"pie",
"ladder",
"snowboard",
"cookies",
"radiator",
"fire hydrant",
"basketball",
"zebra",
"grape",
"giraffe",
"potato",
"sausage",
"tricycle",
"violin",
"egg",
"fire extinguisher",
"candy",
"fire truck",
"billiards",
"converter",
"bathtub",
"wheelchair",
"golf club",
"briefcase",
"cucumber",
"cigar/cigarette",
"paint brush",
"pear",
"heavy truck",
"hamburger",
"extractor",
"extension cord",
"tong",
"tennis racket",
"folder",
"american football",
"earphone",
"mask",
"kettle",
"tennis",
"ship",
"swing",
"coffee machine",
"slide",
"carriage",
"onion",
"green beans",
"projector",
"frisbee",
"washing machine/drying machine",
"chicken",
"printer",
"watermelon",
"saxophone",
"tissue",
"toothbrush",
"ice cream",
"hot-air balloon",
"cello",
"french fries",
"scale",
"trophy",
"cabbage",
"hot dog",
"blender",
"peach",
"rice",
"wallet/purse",
"volleyball",
"deer",
"goose",
"tape",
"tablet",
"cosmetics",
"trumpet",
"pineapple",
"golf ball",
"ambulance",
"parking meter",
"mango",
"key",
"hurdle",
"fishing rod",
"medal",
"flute",
"brush",
"penguin",
"megaphone",
"corn",
"lettuce",
"garlic",
"swan",
"helicopter",
"green onion",
"sandwich",
"nuts",
"speed limit sign",
"induction cooker",
"broom",
"trombone",
"plum",
"rickshaw",
"goldfish",
"kiwi fruit",
"router/modem",
"poker card",
"toaster",
"shrimp",
"sushi",
"cheese",
"notepaper",
"cherry",
"pliers",
"cd",
"pasta",
"hammer",
"cue",
"avocado",
"hamimelon",
"flask",
"mushroom",
"screwdriver",
"soap",
"recorder",
"bear",
"eggplant",
"board eraser",
"coconut",
"tape measure/ruler",
"pig",
"showerhead",
"globe",
"chips",
"steak",
"crosswalk sign",
"stapler",
"camel",
"formula 1",
"pomegranate",
"dishwasher",
"crab",
"hoverboard",
"meat ball",
"rice cooker",
"tuba",
"calculator",
"papaya",
"antelope",
"parrot",
"seal",
"butterfly",
"dumbbell",
"donkey",
"lion",
"urinal",
"dolphin",
"electric drill",
"hair dryer",
"egg tart",
"jellyfish",
"treadmill",
"lighter",
"grapefruit",
"game board",
"mop",
"radish",
"baozi",
"target",
"french",
"spring rolls",
"monkey",
"rabbit",
"pencil case",
"yak",
"red cabbage",
"binoculars",
"asparagus",
"barbell",
"scallop",
"noddles",
"comb",
"dumpling",
"oyster",
"table tennis paddle",
"cosmetics brush/eyeliner pencil",
"chainsaw",
"eraser",
"lobster",
"durian",
"okra",
"lipstick",
"cosmetics mirror",
"curling",
"table tennis"
] |
ustc-community/dfine-small-obj365 | ## D-FINE
### **Overview**
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).
This is the HF Transformers implementation of D-FINE.
- `_coco` -> model trained on COCO
- `_obj365` -> model trained on Objects365
- `_obj2coco` -> model trained on Objects365 and then fine-tuned on COCO
### **Performance**
D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).


### **How to use**
```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-small-obj365")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-small-obj365")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)
for result in results:
for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
score, label = score.item(), label_id.item()
box = [round(i, 2) for i in box.tolist()]
print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
### **Training**
D-FINE is trained on the COCO (Lin et al. [2014]) and Objects365 train2017 splits and validated on the COCO and Objects365 val2017 splits. We report the standard AP metric (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05) and APval5000, which is commonly used in real-world scenarios.
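The IoU thresholds used in the metric above compare predicted and ground-truth boxes via intersection-over-union; a minimal stdlib sketch for axis-aligned `[x1, y1, x2, y2]` boxes (values are illustrative):

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Identical boxes have IoU 1.0; disjoint boxes have IoU 0.0
print(box_iou([0, 0, 10, 10], [0, 0, 10, 10]))  # 1.0
print(box_iou([0, 0, 10, 10], [20, 20, 30, 30]))  # 0.0
```

A prediction counts as a true positive at threshold t only if its IoU with a matched ground-truth box is at least t, so stricter thresholds reward tighter localization.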
### **Applications**
D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, ensuring high accuracy and speed in dynamic, real-world environments. | [
"none",
"person",
"sneakers",
"chair",
"other shoes",
"hat",
"car",
"lamp",
"glasses",
"bottle",
"desk",
"cup",
"street lights",
"cabinet/shelf",
"handbag/satchel",
"bracelet",
"plate",
"picture/frame",
"helmet",
"book",
"gloves",
"storage box",
"boat",
"leather shoes",
"flower",
"bench",
"potted plant",
"bowl/basin",
"flag",
"pillow",
"boots",
"vase",
"microphone",
"necklace",
"ring",
"suv",
"wine glass",
"belt",
"monitor/tv",
"backpack",
"umbrella",
"traffic light",
"speaker",
"watch",
"tie",
"trash bin can",
"slippers",
"bicycle",
"stool",
"barrel/bucket",
"van",
"couch",
"sandals",
"basket",
"drum",
"pen/pencil",
"bus",
"wild bird",
"high heels",
"motorcycle",
"guitar",
"carpet",
"cell phone",
"bread",
"camera",
"canned",
"truck",
"traffic cone",
"cymbal",
"lifesaver",
"towel",
"stuffed toy",
"candle",
"sailboat",
"laptop",
"awning",
"bed",
"faucet",
"tent",
"horse",
"mirror",
"power outlet",
"sink",
"apple",
"air conditioner",
"knife",
"hockey stick",
"paddle",
"pickup truck",
"fork",
"traffic sign",
"balloon",
"tripod",
"dog",
"spoon",
"clock",
"pot",
"cow",
"cake",
"dinning table",
"sheep",
"hanger",
"blackboard/whiteboard",
"napkin",
"other fish",
"orange/tangerine",
"toiletry",
"keyboard",
"tomato",
"lantern",
"machinery vehicle",
"fan",
"green vegetables",
"banana",
"baseball glove",
"airplane",
"mouse",
"train",
"pumpkin",
"soccer",
"skiboard",
"luggage",
"nightstand",
"tea pot",
"telephone",
"trolley",
"head phone",
"sports car",
"stop sign",
"dessert",
"scooter",
"stroller",
"crane",
"remote",
"refrigerator",
"oven",
"lemon",
"duck",
"baseball bat",
"surveillance camera",
"cat",
"jug",
"broccoli",
"piano",
"pizza",
"elephant",
"skateboard",
"surfboard",
"gun",
"skating and skiing shoes",
"gas stove",
"donut",
"bow tie",
"carrot",
"toilet",
"kite",
"strawberry",
"other balls",
"shovel",
"pepper",
"computer box",
"toilet paper",
"cleaning products",
"chopsticks",
"microwave",
"pigeon",
"baseball",
"cutting/chopping board",
"coffee table",
"side table",
"scissors",
"marker",
"pie",
"ladder",
"snowboard",
"cookies",
"radiator",
"fire hydrant",
"basketball",
"zebra",
"grape",
"giraffe",
"potato",
"sausage",
"tricycle",
"violin",
"egg",
"fire extinguisher",
"candy",
"fire truck",
"billiards",
"converter",
"bathtub",
"wheelchair",
"golf club",
"briefcase",
"cucumber",
"cigar/cigarette",
"paint brush",
"pear",
"heavy truck",
"hamburger",
"extractor",
"extension cord",
"tong",
"tennis racket",
"folder",
"american football",
"earphone",
"mask",
"kettle",
"tennis",
"ship",
"swing",
"coffee machine",
"slide",
"carriage",
"onion",
"green beans",
"projector",
"frisbee",
"washing machine/drying machine",
"chicken",
"printer",
"watermelon",
"saxophone",
"tissue",
"toothbrush",
"ice cream",
"hot-air balloon",
"cello",
"french fries",
"scale",
"trophy",
"cabbage",
"hot dog",
"blender",
"peach",
"rice",
"wallet/purse",
"volleyball",
"deer",
"goose",
"tape",
"tablet",
"cosmetics",
"trumpet",
"pineapple",
"golf ball",
"ambulance",
"parking meter",
"mango",
"key",
"hurdle",
"fishing rod",
"medal",
"flute",
"brush",
"penguin",
"megaphone",
"corn",
"lettuce",
"garlic",
"swan",
"helicopter",
"green onion",
"sandwich",
"nuts",
"speed limit sign",
"induction cooker",
"broom",
"trombone",
"plum",
"rickshaw",
"goldfish",
"kiwi fruit",
"router/modem",
"poker card",
"toaster",
"shrimp",
"sushi",
"cheese",
"notepaper",
"cherry",
"pliers",
"cd",
"pasta",
"hammer",
"cue",
"avocado",
"hamimelon",
"flask",
"mushroom",
"screwdriver",
"soap",
"recorder",
"bear",
"eggplant",
"board eraser",
"coconut",
"tape measure/ruler",
"pig",
"showerhead",
"globe",
"chips",
"steak",
"crosswalk sign",
"stapler",
"camel",
"formula 1",
"pomegranate",
"dishwasher",
"crab",
"hoverboard",
"meat ball",
"rice cooker",
"tuba",
"calculator",
"papaya",
"antelope",
"parrot",
"seal",
"butterfly",
"dumbbell",
"donkey",
"lion",
"urinal",
"dolphin",
"electric drill",
"hair dryer",
"egg tart",
"jellyfish",
"treadmill",
"lighter",
"grapefruit",
"game board",
"mop",
"radish",
"baozi",
"target",
"french",
"spring rolls",
"monkey",
"rabbit",
"pencil case",
"yak",
"red cabbage",
"binoculars",
"asparagus",
"barbell",
"scallop",
"noddles",
"comb",
"dumpling",
"oyster",
"table tennis paddle",
"cosmetics brush/eyeliner pencil",
"chainsaw",
"eraser",
"lobster",
"durian",
"okra",
"lipstick",
"cosmetics mirror",
"curling",
"table tennis"
] |
YaroslavPrytula/detr-coco |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-coco
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
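The linear `lr_scheduler_type` listed above decays the learning rate from its initial value to zero over the run; a minimal stdlib sketch, assuming zero warmup steps (this run reaches 1250 optimizer steps over 5 epochs):

```python
def linear_lr(step, total_steps, base_lr=1e-4):
    """Linearly decay the learning rate from base_lr at step 0 to 0 at total_steps."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

total = 1250  # 5 epochs x 250 steps per epoch
print(linear_lr(0, total))     # full learning rate at the start
print(linear_lr(625, total))   # half the learning rate mid-training
print(linear_lr(1250, total))  # decayed to zero at the end
```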
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.827 | 1.0 | 250 | 1.5417 |
| 1.6489 | 2.0 | 500 | 1.5062 |
| 1.419 | 3.0 | 750 | 1.3149 |
| 1.3463 | 4.0 | 1000 | 1.2389 |
| 1.5814 | 5.0 | 1250 | 1.2236 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
franciscomj0901/detr-fashionpedia-finetune-francisco |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-fashionpedia-finetune-francisco
This model is a fine-tuned version of [franciscomj0901/detr-fashionpedia-finetune-francisco](https://huggingface.co/franciscomj0901/detr-fashionpedia-finetune-francisco) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5590
- Map: 0.0011
- Map 50: 0.0033
- Map 75: 0.0005
- Map Small: 0.001
- Map Medium: 0.0017
- Map Large: 0.0006
- Mar 1: 0.0018
- Mar 10: 0.0086
- Mar 100: 0.0138
- Mar Small: 0.0094
- Mar Medium: 0.0181
- Mar Large: 0.0231
- Map Shirt, blouse: 0.0
- Mar 100 Shirt, blouse: 0.0
- Map Top, t-shirt, sweatshirt: 0.0
- Mar 100 Top, t-shirt, sweatshirt: 0.0
- Map Sweater: 0.0
- Mar 100 Sweater: 0.0
- Map Cardigan: 0.0
- Mar 100 Cardigan: 0.0
- Map Jacket: 0.0
- Mar 100 Jacket: 0.0
- Map Vest: 0.0
- Mar 100 Vest: 0.0
- Map Pants: 0.0
- Mar 100 Pants: 0.0
- Map Shorts: 0.0
- Mar 100 Shorts: 0.0
- Map Skirt: 0.0
- Mar 100 Skirt: 0.0
- Map Coat: 0.0
- Mar 100 Coat: 0.0
- Map Dress: 0.0
- Mar 100 Dress: 0.0
- Map Jumpsuit: 0.0
- Mar 100 Jumpsuit: 0.0
- Map Cape: 0.0
- Mar 100 Cape: 0.0
- Map Glasses: 0.0
- Mar 100 Glasses: 0.0
- Map Hat: 0.0
- Mar 100 Hat: 0.0
- Map Headband, head covering, hair accessory: 0.0
- Mar 100 Headband, head covering, hair accessory: 0.0
- Map Tie: 0.0
- Mar 100 Tie: 0.0
- Map Glove: 0.0
- Mar 100 Glove: 0.0
- Map Watch: 0.0
- Mar 100 Watch: 0.0
- Map Belt: 0.0
- Mar 100 Belt: 0.0
- Map Leg warmer: 0.0
- Mar 100 Leg warmer: 0.0
- Map Tights, stockings: 0.0
- Mar 100 Tights, stockings: 0.0
- Map Sock: 0.0
- Mar 100 Sock: 0.0
- Map Shoe: 0.0486
- Mar 100 Shoe: 0.3698
- Map Bag, wallet: 0.0
- Mar 100 Bag, wallet: 0.0
- Map Scarf: 0.0
- Mar 100 Scarf: 0.0
- Map Umbrella: 0.0
- Mar 100 Umbrella: 0.0
- Map Hood: 0.0
- Mar 100 Hood: 0.0
- Map Collar: 0.0
- Mar 100 Collar: 0.0
- Map Lapel: 0.0
- Mar 100 Lapel: 0.0
- Map Epaulette: 0.0
- Mar 100 Epaulette: 0.0
- Map Sleeve: 0.0034
- Mar 100 Sleeve: 0.253
- Map Pocket: 0.0
- Mar 100 Pocket: 0.0
- Map Neckline: 0.0002
- Mar 100 Neckline: 0.0139
- Map Buckle: 0.0
- Mar 100 Buckle: 0.0
- Map Zipper: 0.0
- Mar 100 Zipper: 0.0
- Map Applique: 0.0
- Mar 100 Applique: 0.0
- Map Bead: 0.0
- Mar 100 Bead: 0.0
- Map Bow: 0.0
- Mar 100 Bow: 0.0
- Map Flower: 0.0
- Mar 100 Flower: 0.0
- Map Fringe: 0.0
- Mar 100 Fringe: 0.0
- Map Ribbon: 0.0
- Mar 100 Ribbon: 0.0
- Map Rivet: 0.0
- Mar 100 Rivet: 0.0
- Map Ruffle: 0.0
- Mar 100 Ruffle: 0.0
- Map Sequin: 0.0
- Mar 100 Sequin: 0.0
- Map Tassel: 0.0
- Mar 100 Tassel: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Shirt, blouse | Mar 100 Shirt, blouse | Map Top, t-shirt, sweatshirt | Mar 100 Top, t-shirt, sweatshirt | Map Sweater | Mar 100 Sweater | Map Cardigan | Mar 100 Cardigan | Map Jacket | Mar 100 Jacket | Map Vest | Mar 100 Vest | Map Pants | Mar 100 Pants | Map Shorts | Mar 100 Shorts | Map Skirt | Mar 100 Skirt | Map Coat | Mar 100 Coat | Map Dress | Mar 100 Dress | Map Jumpsuit | Mar 100 Jumpsuit | Map Cape | Mar 100 Cape | Map Glasses | Mar 100 Glasses | Map Hat | Mar 100 Hat | Map Headband, head covering, hair accessory | Mar 100 Headband, head covering, hair accessory | Map Tie | Mar 100 Tie | Map Glove | Mar 100 Glove | Map Watch | Mar 100 Watch | Map Belt | Mar 100 Belt | Map Leg warmer | Mar 100 Leg warmer | Map Tights, stockings | Mar 100 Tights, stockings | Map Sock | Mar 100 Sock | Map Shoe | Mar 100 Shoe | Map Bag, wallet | Mar 100 Bag, wallet | Map Scarf | Mar 100 Scarf | Map Umbrella | Mar 100 Umbrella | Map Hood | Mar 100 Hood | Map Collar | Mar 100 Collar | Map Lapel | Mar 100 Lapel | Map Epaulette | Mar 100 Epaulette | Map Sleeve | Mar 100 Sleeve | Map Pocket | Mar 100 Pocket | Map Neckline | Mar 100 Neckline | Map Buckle | Mar 100 Buckle | Map Zipper | Mar 100 Zipper | Map Applique | Mar 100 Applique | Map Bead | Mar 100 Bead | Map Bow | Mar 100 Bow | Map Flower | Mar 100 Flower | Map Fringe | Mar 100 Fringe | Map Ribbon | Mar 100 Ribbon | Map Rivet | Mar 100 Rivet | Map Ruffle | Mar 100 Ruffle | Map Sequin | Mar 100 Sequin | Map Tassel | Mar 100 Tassel |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-----------------:|:---------------------:|:----------------------------:|:--------------------------------:|:-----------:|:---------------:|:------------:|:----------------:|:----------:|:--------------:|:--------:|:------------:|:---------:|:-------------:|:----------:|:--------------:|:---------:|:-------------:|:--------:|:------------:|:---------:|:-------------:|:------------:|:----------------:|:--------:|:------------:|:-----------:|:---------------:|:-------:|:-----------:|:-------------------------------------------:|:-----------------------------------------------:|:-------:|:-----------:|:---------:|:-------------:|:---------:|:-------------:|:--------:|:------------:|:--------------:|:------------------:|:---------------------:|:-------------------------:|:--------:|:------------:|:--------:|:------------:|:---------------:|:-------------------:|:---------:|:-------------:|:------------:|:----------------:|:--------:|:------------:|:----------:|:--------------:|:---------:|:-------------:|:-------------:|:-----------------:|:----------:|:--------------:|:----------:|:--------------:|:------------:|:----------------:|:----------:|:--------------:|:----------:|:--------------:|:------------:|:----------------:|:--------:|:------------:|:-------:|:-----------:|:----------:|:--------------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|:----------:|:--------------:|:----------:|:--------------:|:----------:|:--------------:|
| 5.5311 | 0.0044 | 50 | 3.9850 | 0.0005 | 0.0018 | 0.0001 | 0.0006 | 0.0006 | 0.0004 | 0.0008 | 0.006 | 0.011 | 0.0065 | 0.0149 | 0.0152 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0204 | 0.3288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0027 | 0.1688 | 0.0 | 0.0 | 0.0001 | 0.0066 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.7041 | 0.0088 | 100 | 3.8513 | 0.0006 | 0.0018 | 0.0002 | 0.0006 | 0.0008 | 0.001 | 0.0009 | 0.0067 | 0.0118 | 0.0091 | 0.0153 | 0.0168 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0002 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0243 | 0.3702 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0027 | 0.1518 | 0.0 | 0.0 | 0.0002 | 0.017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.358 | 0.0132 | 150 | 3.7499 | 0.0009 | 0.0026 | 0.0005 | 0.0008 | 0.001 | 0.0008 | 0.0012 | 0.0074 | 0.012 | 0.0085 | 0.0159 | 0.0168 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0333 | 0.3502 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0077 | 0.1803 | 0.0 | 0.0 | 0.0002 | 0.0237 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.9409 | 0.0175 | 200 | 3.6568 | 0.0008 | 0.0025 | 0.0004 | 0.0008 | 0.0011 | 0.0004 | 0.0011 | 0.0079 | 0.0133 | 0.0096 | 0.0172 | 0.02 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0334 | 0.3621 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0038 | 0.2232 | 0.0 | 0.0 | 0.0003 | 0.025 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.3598 | 0.0219 | 250 | 3.6445 | 0.0009 | 0.0028 | 0.0004 | 0.001 | 0.0013 | 0.0003 | 0.0013 | 0.0082 | 0.0129 | 0.0095 | 0.0168 | 0.0188 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0392 | 0.3739 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0024 | 0.2027 | 0.0 | 0.0 | 0.0005 | 0.0186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.8139 | 0.0263 | 300 | 3.6136 | 0.0011 | 0.0033 | 0.0004 | 0.0011 | 0.0014 | 0.0006 | 0.0017 | 0.0087 | 0.0134 | 0.0092 | 0.0176 | 0.0212 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0457 | 0.362 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0034 | 0.2283 | 0.0 | 0.0 | 0.0005 | 0.0244 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.5465 | 0.0307 | 350 | 3.6114 | 0.0012 | 0.0034 | 0.0005 | 0.0011 | 0.0017 | 0.001 | 0.0019 | 0.0082 | 0.0129 | 0.0085 | 0.0171 | 0.0248 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0487 | 0.3562 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0046 | 0.2233 | 0.0 | 0.0 | 0.0003 | 0.0137 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1327 | 0.0351 | 400 | 3.5889 | 0.0012 | 0.0035 | 0.0005 | 0.0011 | 0.0017 | 0.0014 | 0.0019 | 0.0085 | 0.0133 | 0.0089 | 0.0175 | 0.0256 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0501 | 0.3647 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0034 | 0.231 | 0.0 | 0.0 | 0.0003 | 0.0168 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.8889 | 0.0395 | 450 | 3.5696 | 0.0011 | 0.0033 | 0.0005 | 0.001 | 0.0017 | 0.0006 | 0.0018 | 0.0085 | 0.0137 | 0.0092 | 0.018 | 0.022 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0482 | 0.3676 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0036 | 0.2483 | 0.0 | 0.0 | 0.0002 | 0.0147 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.9952 | 0.0438 | 500 | 3.5590 | 0.0011 | 0.0033 | 0.0005 | 0.001 | 0.0017 | 0.0006 | 0.0018 | 0.0086 | 0.0138 | 0.0094 | 0.0181 | 0.0231 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0486 | 0.3698 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0034 | 0.253 | 0.0 | 0.0 | 0.0002 | 0.0139 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"shirt, blouse",
"top, t-shirt, sweatshirt",
"sweater",
"cardigan",
"jacket",
"vest",
"pants",
"shorts",
"skirt",
"coat",
"dress",
"jumpsuit",
"cape",
"glasses",
"hat",
"headband, head covering, hair accessory",
"tie",
"glove",
"watch",
"belt",
"leg warmer",
"tights, stockings",
"sock",
"shoe",
"bag, wallet",
"scarf",
"umbrella",
"hood",
"collar",
"lapel",
"epaulette",
"sleeve",
"pocket",
"neckline",
"buckle",
"zipper",
"applique",
"bead",
"bow",
"flower",
"fringe",
"ribbon",
"rivet",
"ruffle",
"sequin",
"tassel"
] |
jaygemini/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
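DETR-family checkpoints such as the one above emit, per query, class logits plus a box in normalized `(cx, cy, w, h)` format. The following is a minimal sketch of the post-processing step that converts those boxes to absolute corner coordinates and drops low-confidence queries; the tensors here are dummy values, not real model output, and `postprocess` is an illustrative helper, not part of the Transformers API (which provides this via the image processor's `post_process_object_detection`).

```python
import torch

def postprocess(boxes_cxcywh, scores, img_w, img_h, threshold=0.5):
    """Convert normalized (cx, cy, w, h) boxes to absolute (x0, y0, x1, y1)
    pixel coordinates and keep only detections above the score threshold."""
    cx, cy, w, h = boxes_cxcywh.unbind(-1)
    boxes = torch.stack(
        [(cx - 0.5 * w) * img_w, (cy - 0.5 * h) * img_h,
         (cx + 0.5 * w) * img_w, (cy + 0.5 * h) * img_h], dim=-1)
    keep = scores > threshold
    return boxes[keep], scores[keep]

# Dummy output for two queries; only the first clears the threshold.
boxes = torch.tensor([[0.5, 0.5, 0.2, 0.4], [0.1, 0.1, 0.05, 0.05]])
scores = torch.tensor([0.9, 0.3])
kept_boxes, kept_scores = postprocess(boxes, scores, img_w=640, img_h=480)
print(kept_boxes)  # one box, roughly [[256, 144, 384, 336]] in pixels
```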
mreraser/detr-resnet-50-dc5-fashionpedia-finetuned |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-dc5-fashionpedia-finetuned
This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.7281 | 0.0438 | 50 | 4.3705 |
| 3.9353 | 0.0876 | 100 | 4.0499 |
| 4.5369 | 0.1315 | 150 | 3.8890 |
| 3.9156 | 0.1753 | 200 | 3.7630 |
| 3.6006 | 0.2191 | 250 | 3.6861 |
| 3.6562 | 0.2629 | 300 | 3.6110 |
| 3.7636 | 0.3067 | 350 | 3.5906 |
| 4.0293 | 0.3506 | 400 | 3.5405 |
| 3.533 | 0.3944 | 450 | 3.4906 |
| 3.1302 | 0.4382 | 500 | 3.4249 |
| 3.8257 | 0.4820 | 550 | 3.3910 |
| 2.9622 | 0.5259 | 600 | 3.3622 |
| 3.9213 | 0.5697 | 650 | 3.3310 |
| 4.4062 | 0.6135 | 700 | 3.3303 |
| 4.3076 | 0.6573 | 750 | 3.3105 |
| 4.0868 | 0.7011 | 800 | 3.3040 |
| 4.0639 | 0.7450 | 850 | 3.3076 |
| 4.7454 | 0.7888 | 900 | 3.2996 |
| 4.3044 | 0.8326 | 950 | 3.2935 |
| 3.9519 | 0.8764 | 1000 | 3.2904 |
### Framework versions
- Transformers 4.51.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"shirt, blouse",
"top, t-shirt, sweatshirt",
"sweater",
"cardigan",
"jacket",
"vest",
"pants",
"shorts",
"skirt",
"coat",
"dress",
"jumpsuit",
"cape",
"glasses",
"hat",
"headband, head covering, hair accessory",
"tie",
"glove",
"watch",
"belt",
"leg warmer",
"tights, stockings",
"sock",
"shoe",
"bag, wallet",
"scarf",
"umbrella",
"hood",
"collar",
"lapel",
"epaulette",
"sleeve",
"pocket",
"neckline",
"buckle",
"zipper",
"applique",
"bead",
"bow",
"flower",
"fringe",
"ribbon",
"rivet",
"ruffle",
"sequin",
"tassel"
] |
Ahnj-Stability/detr-resnet-50-dc5-fashionpedia-finetuned |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-dc5-fashionpedia-finetuned
This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8509 | 0.0438 | 50 | 6.5878 |
| 6.5395 | 0.0876 | 100 | 6.0833 |
| 6.8232 | 0.1315 | 150 | 5.6343 |
| 4.8783 | 0.1753 | 200 | 5.0698 |
| 4.8238 | 0.2191 | 250 | 4.7423 |
| 4.3674 | 0.2629 | 300 | 4.3963 |
| 4.5075 | 0.3067 | 350 | 4.1901 |
| 4.6329 | 0.3506 | 400 | 4.0261 |
| 3.734 | 0.3944 | 450 | 3.8477 |
| 3.7449 | 0.4382 | 500 | 3.7750 |
| 4.0604 | 0.4820 | 550 | 3.6956 |
| 3.0591 | 0.5259 | 600 | 3.6346 |
| 4.2276 | 0.5697 | 650 | 3.6107 |
| 4.4193 | 0.6135 | 700 | 3.5955 |
| 4.5098 | 0.6573 | 750 | 3.5602 |
| 4.1579 | 0.7011 | 800 | 3.5349 |
| 4.0028 | 0.7450 | 850 | 3.5262 |
| 4.8916 | 0.7888 | 900 | 3.5146 |
| 4.5715 | 0.8326 | 950 | 3.5160 |
| 3.77 | 0.8764 | 1000 | 3.5093 |
### Framework versions
- Transformers 4.51.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"shirt, blouse",
"top, t-shirt, sweatshirt",
"sweater",
"cardigan",
"jacket",
"vest",
"pants",
"shorts",
"skirt",
"coat",
"dress",
"jumpsuit",
"cape",
"glasses",
"hat",
"headband, head covering, hair accessory",
"tie",
"glove",
"watch",
"belt",
"leg warmer",
"tights, stockings",
"sock",
"shoe",
"bag, wallet",
"scarf",
"umbrella",
"hood",
"collar",
"lapel",
"epaulette",
"sleeve",
"pocket",
"neckline",
"buckle",
"zipper",
"applique",
"bead",
"bow",
"flower",
"fringe",
"ribbon",
"rivet",
"ruffle",
"sequin",
"tassel"
] |
rjhugs/modelStructure_TT_20050407 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelStructure_TT_20050407
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition-v1.1-all](https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.50.1
- Pytorch 2.1.2+cu118
- Datasets 2.12.0
- Tokenizers 0.21.1
| [
"table",
"table column header",
"table column"
] |
bortle/autotrain-ap-obj-detector-1 |
# Model Trained Using AutoTrain
- Problem type: Object Detection
## Validation Metrics
loss: 5.052831649780273
map: 0.0
map_50: 0.0
map_75: 0.0
map_small: -1.0
map_medium: -1.0
map_large: 0.0
mar_1: 0.0
mar_10: 0.0
mar_100: 0.0
mar_small: -1.0
mar_medium: -1.0
mar_large: 0.0
| [
"comet",
"galaxy",
"moon",
"nebula",
"saturn",
"star cluster"
] |
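The `map_50` and `map_75` figures reported in these validation metrics are average precision computed at IoU thresholds of 0.50 and 0.75 respectively (and `-1.0` means no objects of that size bucket were present). A minimal sketch of the intersection-over-union computation underlying that matching, using made-up boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in
    (x0, y0, x1, y1) format."""
    x0 = max(box_a[0], box_b[0])
    y0 = max(box_a[1], box_b[1])
    x1 = min(box_a[2], box_b[2])
    y1 = min(box_a[3], box_b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred = (10.0, 10.0, 50.0, 50.0)  # hypothetical predicted box
gt = (15.0, 10.0, 55.0, 50.0)    # hypothetical ground-truth box
score = iou(pred, gt)
print(score)                         # ~0.778
print(score >= 0.50, score >= 0.75)  # a hit at both thresholds: True True
```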
bortle/autotrain-ap-obj-detector-2 |
# Model Trained Using AutoTrain
- Problem type: Object Detection
## Validation Metrics
loss: 5.644123077392578
map: 0.001
map_50: 0.0096
map_75: 0.0
map_small: -1.0
map_medium: -1.0
map_large: 0.001
mar_1: 0.0
mar_10: 0.025
mar_100: 0.025
mar_small: -1.0
mar_medium: -1.0
mar_large: 0.025
| [
"comet",
"galaxy",
"moon",
"nebula",
"saturn",
"snr",
"star cluster"
] |
yejimene/yolo_finetuned_fruits |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8208
- Map: 0.5539
- Map 50: 0.8071
- Map 75: 0.6043
- Map Small: -1.0
- Map Medium: 0.4804
- Map Large: 0.5761
- Mar 1: 0.409
- Mar 10: 0.7106
- Mar 100: 0.7748
- Mar Small: -1.0
- Mar Medium: 0.6829
- Mar Large: 0.7861
- Map Banana: 0.4114
- Mar 100 Banana: 0.775
- Map Orange: 0.6102
- Mar 100 Orange: 0.781
- Map Apple: 0.6401
- Mar 100 Apple: 0.7686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Banana | Mar 100 Banana | Map Orange | Mar 100 Orange | Map Apple | Mar 100 Apple |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|
| No log | 1.0 | 60 | 2.1986 | 0.0068 | 0.0254 | 0.0016 | -1.0 | 0.0068 | 0.0079 | 0.0246 | 0.0997 | 0.2776 | -1.0 | 0.24 | 0.283 | 0.0109 | 0.2575 | 0.0002 | 0.0095 | 0.0092 | 0.5657 |
| No log | 2.0 | 120 | 1.9727 | 0.0088 | 0.03 | 0.0036 | -1.0 | 0.0201 | 0.0089 | 0.0521 | 0.1605 | 0.3185 | -1.0 | 0.26 | 0.3186 | 0.0163 | 0.4325 | 0.0 | 0.0 | 0.0103 | 0.5229 |
| No log | 3.0 | 180 | 1.9117 | 0.0353 | 0.114 | 0.0137 | -1.0 | 0.0335 | 0.0411 | 0.1015 | 0.2692 | 0.4279 | -1.0 | 0.28 | 0.4458 | 0.0185 | 0.415 | 0.0278 | 0.3714 | 0.0596 | 0.4971 |
| No log | 4.0 | 240 | 1.6734 | 0.0659 | 0.1642 | 0.0544 | -1.0 | 0.1162 | 0.0783 | 0.1647 | 0.3225 | 0.4596 | -1.0 | 0.28 | 0.4787 | 0.0818 | 0.485 | 0.0324 | 0.2452 | 0.0836 | 0.6486 |
| No log | 5.0 | 300 | 1.3011 | 0.1225 | 0.2534 | 0.1145 | -1.0 | 0.1155 | 0.156 | 0.2833 | 0.4893 | 0.5985 | -1.0 | 0.42 | 0.6231 | 0.0858 | 0.5575 | 0.0939 | 0.5238 | 0.1879 | 0.7143 |
| No log | 6.0 | 360 | 1.2643 | 0.2057 | 0.356 | 0.2293 | -1.0 | 0.2614 | 0.2286 | 0.3177 | 0.5069 | 0.6091 | -1.0 | 0.4843 | 0.6289 | 0.1166 | 0.5425 | 0.1363 | 0.5333 | 0.3641 | 0.7514 |
| No log | 7.0 | 420 | 1.1581 | 0.281 | 0.4787 | 0.2868 | -1.0 | 0.3952 | 0.2874 | 0.3263 | 0.577 | 0.6758 | -1.0 | 0.54 | 0.6973 | 0.139 | 0.6025 | 0.2267 | 0.619 | 0.4773 | 0.8057 |
| No log | 8.0 | 480 | 1.1026 | 0.3086 | 0.524 | 0.3347 | -1.0 | 0.2653 | 0.3326 | 0.3576 | 0.58 | 0.6648 | -1.0 | 0.5943 | 0.6766 | 0.2161 | 0.615 | 0.2935 | 0.631 | 0.4162 | 0.7486 |
| 1.4697 | 9.0 | 540 | 1.0055 | 0.3516 | 0.5724 | 0.3781 | -1.0 | 0.3764 | 0.3613 | 0.3457 | 0.6023 | 0.7044 | -1.0 | 0.6629 | 0.7125 | 0.2457 | 0.645 | 0.3506 | 0.7024 | 0.4585 | 0.7657 |
| 1.4697 | 10.0 | 600 | 0.9545 | 0.4136 | 0.6261 | 0.4555 | -1.0 | 0.3712 | 0.4388 | 0.3688 | 0.6483 | 0.73 | -1.0 | 0.6671 | 0.7413 | 0.2924 | 0.68 | 0.4384 | 0.75 | 0.51 | 0.76 |
| 1.4697 | 11.0 | 660 | 0.9475 | 0.423 | 0.6493 | 0.4547 | -1.0 | 0.5066 | 0.4345 | 0.3763 | 0.662 | 0.7468 | -1.0 | 0.6429 | 0.7622 | 0.2579 | 0.71 | 0.456 | 0.7476 | 0.5551 | 0.7829 |
| 1.4697 | 12.0 | 720 | 0.9563 | 0.4131 | 0.6719 | 0.4431 | -1.0 | 0.4135 | 0.4285 | 0.3598 | 0.6447 | 0.7194 | -1.0 | 0.5957 | 0.7354 | 0.3076 | 0.71 | 0.4745 | 0.731 | 0.4573 | 0.7171 |
| 1.4697 | 13.0 | 780 | 0.8893 | 0.4472 | 0.6689 | 0.4985 | -1.0 | 0.4739 | 0.4567 | 0.3983 | 0.6573 | 0.7334 | -1.0 | 0.6443 | 0.7447 | 0.3567 | 0.735 | 0.4538 | 0.7595 | 0.5309 | 0.7057 |
| 1.4697 | 14.0 | 840 | 0.9049 | 0.4915 | 0.7427 | 0.5237 | -1.0 | 0.415 | 0.5107 | 0.3922 | 0.6898 | 0.7536 | -1.0 | 0.6529 | 0.7674 | 0.3643 | 0.7375 | 0.5229 | 0.7405 | 0.5872 | 0.7829 |
| 1.4697 | 15.0 | 900 | 0.8799 | 0.4884 | 0.7419 | 0.5376 | -1.0 | 0.4822 | 0.5042 | 0.3963 | 0.6875 | 0.7565 | -1.0 | 0.6614 | 0.7686 | 0.3481 | 0.7525 | 0.5076 | 0.7571 | 0.6095 | 0.76 |
| 1.4697 | 16.0 | 960 | 0.8778 | 0.5014 | 0.7714 | 0.5549 | -1.0 | 0.5352 | 0.5127 | 0.4015 | 0.6808 | 0.744 | -1.0 | 0.6329 | 0.7593 | 0.3398 | 0.725 | 0.5527 | 0.75 | 0.6116 | 0.7571 |
| 0.7568 | 17.0 | 1020 | 0.8810 | 0.5025 | 0.7664 | 0.5708 | -1.0 | 0.506 | 0.5126 | 0.3919 | 0.6854 | 0.7424 | -1.0 | 0.6743 | 0.7518 | 0.3768 | 0.7325 | 0.5336 | 0.7405 | 0.5973 | 0.7543 |
| 0.7568 | 18.0 | 1080 | 0.8716 | 0.4942 | 0.7505 | 0.5653 | -1.0 | 0.4833 | 0.509 | 0.3965 | 0.6756 | 0.7391 | -1.0 | 0.6357 | 0.7515 | 0.374 | 0.7525 | 0.5074 | 0.719 | 0.6011 | 0.7457 |
| 0.7568 | 19.0 | 1140 | 0.8007 | 0.5072 | 0.7516 | 0.5666 | -1.0 | 0.4698 | 0.524 | 0.411 | 0.7079 | 0.757 | -1.0 | 0.6486 | 0.7697 | 0.3868 | 0.7625 | 0.5498 | 0.7429 | 0.5849 | 0.7657 |
| 0.7568 | 20.0 | 1200 | 0.8122 | 0.5502 | 0.8115 | 0.594 | -1.0 | 0.4834 | 0.575 | 0.4175 | 0.7223 | 0.7704 | -1.0 | 0.6486 | 0.7855 | 0.436 | 0.765 | 0.6078 | 0.769 | 0.6067 | 0.7771 |
| 0.7568 | 21.0 | 1260 | 0.8067 | 0.5387 | 0.7907 | 0.5869 | -1.0 | 0.505 | 0.5602 | 0.3976 | 0.72 | 0.7725 | -1.0 | 0.6486 | 0.7874 | 0.3823 | 0.7775 | 0.6032 | 0.7857 | 0.6306 | 0.7543 |
| 0.7568 | 22.0 | 1320 | 0.8331 | 0.5408 | 0.7992 | 0.5769 | -1.0 | 0.4986 | 0.5614 | 0.4017 | 0.71 | 0.7596 | -1.0 | 0.6614 | 0.7726 | 0.4037 | 0.745 | 0.5779 | 0.7595 | 0.6408 | 0.7743 |
| 0.7568 | 23.0 | 1380 | 0.8336 | 0.5386 | 0.7938 | 0.5854 | -1.0 | 0.4914 | 0.56 | 0.4017 | 0.713 | 0.7625 | -1.0 | 0.6657 | 0.7751 | 0.3928 | 0.75 | 0.5954 | 0.769 | 0.6277 | 0.7686 |
| 0.7568 | 24.0 | 1440 | 0.8137 | 0.5391 | 0.7978 | 0.593 | -1.0 | 0.4835 | 0.5612 | 0.4081 | 0.7134 | 0.7681 | -1.0 | 0.6714 | 0.7807 | 0.3796 | 0.7625 | 0.6057 | 0.7762 | 0.6321 | 0.7657 |
| 0.5523 | 25.0 | 1500 | 0.8126 | 0.5518 | 0.8009 | 0.5998 | -1.0 | 0.4901 | 0.5745 | 0.4082 | 0.7152 | 0.7745 | -1.0 | 0.6757 | 0.7869 | 0.3933 | 0.7725 | 0.6199 | 0.7881 | 0.6423 | 0.7629 |
| 0.5523 | 26.0 | 1560 | 0.8205 | 0.5528 | 0.8115 | 0.6105 | -1.0 | 0.4859 | 0.5733 | 0.4063 | 0.711 | 0.7727 | -1.0 | 0.7 | 0.7819 | 0.4121 | 0.77 | 0.6125 | 0.7881 | 0.6338 | 0.76 |
| 0.5523 | 27.0 | 1620 | 0.8211 | 0.5503 | 0.8075 | 0.6082 | -1.0 | 0.4756 | 0.5729 | 0.4081 | 0.7088 | 0.7748 | -1.0 | 0.7 | 0.7843 | 0.4064 | 0.77 | 0.6134 | 0.7857 | 0.6312 | 0.7686 |
| 0.5523 | 28.0 | 1680 | 0.8223 | 0.5543 | 0.8091 | 0.6061 | -1.0 | 0.4809 | 0.5771 | 0.4082 | 0.7081 | 0.7758 | -1.0 | 0.6929 | 0.7862 | 0.4103 | 0.7725 | 0.6136 | 0.7833 | 0.639 | 0.7714 |
| 0.5523 | 29.0 | 1740 | 0.8171 | 0.5531 | 0.806 | 0.6037 | -1.0 | 0.4803 | 0.5755 | 0.409 | 0.7106 | 0.774 | -1.0 | 0.6829 | 0.7852 | 0.4113 | 0.775 | 0.6079 | 0.7786 | 0.6401 | 0.7686 |
| 0.5523 | 30.0 | 1800 | 0.8208 | 0.5539 | 0.8071 | 0.6043 | -1.0 | 0.4804 | 0.5761 | 0.409 | 0.7106 | 0.7748 | -1.0 | 0.6829 | 0.7861 | 0.4114 | 0.775 | 0.6102 | 0.781 | 0.6401 | 0.7686 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"banana",
"orange",
"apple"
] |
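This run uses `lr_scheduler_type: cosine` with a peak learning rate of 5e-05 over 30 epochs of 60 steps each (1800 steps total, per the table above). The decay curve can be sketched in a few lines; this is a simplified version with zero warmup steps, and `cosine_lr` is an illustrative helper rather than the actual Trainer scheduler:

```python
import math

def cosine_lr(step, total_steps, peak_lr=5e-5):
    """Cosine decay from peak_lr down to 0, matching the shape of the
    Hugging Face cosine schedule when no warmup steps are configured."""
    progress = step / total_steps
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 1800  # 30 epochs x 60 steps per epoch, as in the table above
for step in (0, 900, 1800):
    print(step, cosine_lr(step, total))
# 0 -> 5e-05 (peak), 900 -> 2.5e-05 (halfway), 1800 -> 0.0
```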
aiarenm/yolo_finetuned_fruits |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8182
- Map: 0.5572
- Map 50: 0.8422
- Map 75: 0.5925
- Map Small: -1.0
- Map Medium: 0.4995
- Map Large: 0.5815
- Mar 1: 0.406
- Mar 10: 0.7081
- Mar 100: 0.771
- Mar Small: -1.0
- Mar Medium: 0.6571
- Mar Large: 0.7893
- Map Banana: 0.4184
- Mar 100 Banana: 0.755
- Map Orange: 0.5804
- Mar 100 Orange: 0.7667
- Map Apple: 0.6727
- Mar 100 Apple: 0.7914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Banana | Mar 100 Banana | Map Orange | Mar 100 Orange | Map Apple | Mar 100 Apple |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|
| No log | 1.0 | 60 | 1.3832 | 0.0662 | 0.1157 | 0.0675 | -1.0 | 0.0739 | 0.0823 | 0.2406 | 0.4375 | 0.5766 | -1.0 | 0.3429 | 0.6114 | 0.0589 | 0.585 | 0.0233 | 0.5476 | 0.1164 | 0.5971 |
| No log | 2.0 | 120 | 1.1698 | 0.1353 | 0.2296 | 0.1411 | -1.0 | 0.084 | 0.153 | 0.2702 | 0.4985 | 0.5976 | -1.0 | 0.3 | 0.6393 | 0.1204 | 0.64 | 0.0371 | 0.45 | 0.2482 | 0.7029 |
| No log | 3.0 | 180 | 1.1113 | 0.2256 | 0.3806 | 0.2196 | -1.0 | 0.2253 | 0.2425 | 0.2869 | 0.541 | 0.6777 | -1.0 | 0.4429 | 0.7111 | 0.2027 | 0.6825 | 0.0814 | 0.619 | 0.3926 | 0.7314 |
| No log | 4.0 | 240 | 1.1347 | 0.2893 | 0.4969 | 0.3288 | -1.0 | 0.227 | 0.3081 | 0.31 | 0.5639 | 0.6804 | -1.0 | 0.5 | 0.708 | 0.2585 | 0.6575 | 0.1758 | 0.6524 | 0.4336 | 0.7314 |
| No log | 5.0 | 300 | 1.2339 | 0.3139 | 0.5879 | 0.2967 | -1.0 | 0.331 | 0.3322 | 0.3008 | 0.5406 | 0.632 | -1.0 | 0.5429 | 0.6512 | 0.206 | 0.575 | 0.2292 | 0.681 | 0.5065 | 0.64 |
| No log | 6.0 | 360 | 0.9967 | 0.4014 | 0.6469 | 0.4216 | -1.0 | 0.3962 | 0.4191 | 0.358 | 0.6481 | 0.7231 | -1.0 | 0.6286 | 0.742 | 0.3186 | 0.6625 | 0.2708 | 0.7238 | 0.6147 | 0.7829 |
| No log | 7.0 | 420 | 1.0597 | 0.4142 | 0.7448 | 0.4478 | -1.0 | 0.3237 | 0.4436 | 0.3425 | 0.6121 | 0.6731 | -1.0 | 0.5143 | 0.6985 | 0.3237 | 0.6575 | 0.3926 | 0.6762 | 0.5264 | 0.6857 |
| No log | 8.0 | 480 | 0.8963 | 0.4543 | 0.7499 | 0.457 | -1.0 | 0.4688 | 0.4784 | 0.3545 | 0.6702 | 0.7344 | -1.0 | 0.6357 | 0.75 | 0.3374 | 0.72 | 0.4997 | 0.7548 | 0.526 | 0.7286 |
| 1.1464 | 9.0 | 540 | 0.9669 | 0.4473 | 0.706 | 0.4926 | -1.0 | 0.3025 | 0.4892 | 0.369 | 0.6374 | 0.719 | -1.0 | 0.5643 | 0.7461 | 0.3263 | 0.6775 | 0.4506 | 0.731 | 0.5649 | 0.7486 |
| 1.1464 | 10.0 | 600 | 0.9448 | 0.4642 | 0.7241 | 0.5078 | -1.0 | 0.4185 | 0.4969 | 0.3656 | 0.6285 | 0.7131 | -1.0 | 0.5714 | 0.7363 | 0.3286 | 0.6875 | 0.4589 | 0.7262 | 0.6051 | 0.7257 |
| 1.1464 | 11.0 | 660 | 0.9464 | 0.4645 | 0.7321 | 0.4897 | -1.0 | 0.393 | 0.502 | 0.381 | 0.6472 | 0.7229 | -1.0 | 0.6071 | 0.7425 | 0.3356 | 0.6925 | 0.4614 | 0.7476 | 0.5964 | 0.7286 |
| 1.1464 | 12.0 | 720 | 0.9143 | 0.4816 | 0.7601 | 0.5366 | -1.0 | 0.4266 | 0.5129 | 0.37 | 0.6644 | 0.7434 | -1.0 | 0.6143 | 0.7642 | 0.3459 | 0.7225 | 0.4752 | 0.7333 | 0.6236 | 0.7743 |
| 1.1464 | 13.0 | 780 | 0.8523 | 0.5186 | 0.7851 | 0.5524 | -1.0 | 0.4882 | 0.5457 | 0.3976 | 0.6733 | 0.7352 | -1.0 | 0.6071 | 0.7542 | 0.3849 | 0.7475 | 0.5316 | 0.7238 | 0.6394 | 0.7343 |
| 1.1464 | 14.0 | 840 | 0.8937 | 0.5077 | 0.7907 | 0.557 | -1.0 | 0.4944 | 0.5348 | 0.3906 | 0.6622 | 0.748 | -1.0 | 0.6571 | 0.7649 | 0.3544 | 0.7125 | 0.5471 | 0.7571 | 0.6215 | 0.7743 |
| 1.1464 | 15.0 | 900 | 0.8502 | 0.52 | 0.8012 | 0.5662 | -1.0 | 0.4128 | 0.5524 | 0.4075 | 0.669 | 0.7367 | -1.0 | 0.5857 | 0.7619 | 0.3692 | 0.715 | 0.5478 | 0.7381 | 0.6432 | 0.7571 |
| 1.1464 | 16.0 | 960 | 0.8644 | 0.515 | 0.8117 | 0.5603 | -1.0 | 0.4659 | 0.5399 | 0.3743 | 0.6587 | 0.7319 | -1.0 | 0.5857 | 0.7537 | 0.3634 | 0.7325 | 0.5928 | 0.769 | 0.5889 | 0.6943 |
| 0.6981 | 17.0 | 1020 | 0.8121 | 0.5252 | 0.8011 | 0.5394 | -1.0 | 0.4677 | 0.5524 | 0.3924 | 0.6975 | 0.7589 | -1.0 | 0.6571 | 0.7767 | 0.347 | 0.73 | 0.5675 | 0.7667 | 0.6611 | 0.78 |
| 0.6981 | 18.0 | 1080 | 0.8345 | 0.5364 | 0.8268 | 0.5691 | -1.0 | 0.5232 | 0.559 | 0.3959 | 0.6849 | 0.7559 | -1.0 | 0.6571 | 0.7713 | 0.3633 | 0.7425 | 0.58 | 0.7595 | 0.6659 | 0.7657 |
| 0.6981 | 19.0 | 1140 | 0.8186 | 0.531 | 0.8115 | 0.5705 | -1.0 | 0.4991 | 0.5533 | 0.3908 | 0.6815 | 0.748 | -1.0 | 0.65 | 0.7623 | 0.3856 | 0.7525 | 0.5816 | 0.7571 | 0.6257 | 0.7343 |
| 0.6981 | 20.0 | 1200 | 0.7999 | 0.562 | 0.8515 | 0.598 | -1.0 | 0.4755 | 0.5915 | 0.4071 | 0.7006 | 0.7647 | -1.0 | 0.6143 | 0.7881 | 0.4081 | 0.755 | 0.6069 | 0.7762 | 0.6711 | 0.7629 |
| 0.6981 | 21.0 | 1260 | 0.8050 | 0.5545 | 0.8284 | 0.6088 | -1.0 | 0.4877 | 0.5829 | 0.3939 | 0.704 | 0.7691 | -1.0 | 0.6429 | 0.7885 | 0.4096 | 0.7625 | 0.608 | 0.7619 | 0.6459 | 0.7829 |
| 0.6981 | 22.0 | 1320 | 0.8181 | 0.5503 | 0.813 | 0.573 | -1.0 | 0.4993 | 0.5756 | 0.4033 | 0.7077 | 0.77 | -1.0 | 0.6714 | 0.787 | 0.4089 | 0.7375 | 0.5731 | 0.7667 | 0.6689 | 0.8057 |
| 0.6981 | 23.0 | 1380 | 0.8230 | 0.553 | 0.8332 | 0.5737 | -1.0 | 0.4984 | 0.5758 | 0.4016 | 0.7059 | 0.7706 | -1.0 | 0.6571 | 0.7882 | 0.405 | 0.7575 | 0.5733 | 0.7571 | 0.6807 | 0.7971 |
| 0.6981 | 24.0 | 1440 | 0.8177 | 0.5512 | 0.8339 | 0.5859 | -1.0 | 0.5113 | 0.5758 | 0.402 | 0.701 | 0.7675 | -1.0 | 0.6357 | 0.7882 | 0.4096 | 0.7525 | 0.5716 | 0.7643 | 0.6724 | 0.7857 |
| 0.5444 | 25.0 | 1500 | 0.8300 | 0.558 | 0.8348 | 0.5836 | -1.0 | 0.4993 | 0.5835 | 0.4049 | 0.7054 | 0.7706 | -1.0 | 0.65 | 0.7902 | 0.4095 | 0.75 | 0.5775 | 0.7619 | 0.6868 | 0.8 |
| 0.5444 | 26.0 | 1560 | 0.8121 | 0.5618 | 0.8348 | 0.5814 | -1.0 | 0.5067 | 0.5873 | 0.4102 | 0.7154 | 0.7776 | -1.0 | 0.6714 | 0.7954 | 0.4142 | 0.7575 | 0.5854 | 0.7667 | 0.6859 | 0.8086 |
| 0.5444 | 27.0 | 1620 | 0.8138 | 0.5582 | 0.8328 | 0.5901 | -1.0 | 0.5006 | 0.5824 | 0.4064 | 0.7073 | 0.7702 | -1.0 | 0.6571 | 0.7883 | 0.4105 | 0.755 | 0.5879 | 0.7643 | 0.6762 | 0.7914 |
| 0.5444 | 28.0 | 1680 | 0.8084 | 0.5597 | 0.842 | 0.5932 | -1.0 | 0.4993 | 0.5837 | 0.4052 | 0.704 | 0.7677 | -1.0 | 0.6571 | 0.7854 | 0.4212 | 0.7525 | 0.5829 | 0.7619 | 0.6748 | 0.7886 |
| 0.5444 | 29.0 | 1740 | 0.8162 | 0.5581 | 0.8424 | 0.593 | -1.0 | 0.4995 | 0.5829 | 0.4052 | 0.7065 | 0.7719 | -1.0 | 0.6571 | 0.79 | 0.4187 | 0.76 | 0.5811 | 0.7643 | 0.6745 | 0.7914 |
| 0.5444 | 30.0 | 1800 | 0.8182 | 0.5572 | 0.8422 | 0.5925 | -1.0 | 0.4995 | 0.5815 | 0.406 | 0.7081 | 0.771 | -1.0 | 0.6571 | 0.7893 | 0.4184 | 0.755 | 0.5804 | 0.7667 | 0.6727 | 0.7914 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
| [
"banana",
"orange",
"apple"
] |
PeeterPissarenko/detr-resnet50-fashionpedia-finetuned-demo |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet50-fashionpedia-finetuned-demo
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5581 | 1.6667 | 500 | 2.5232 |
| 1.971 | 3.3333 | 1000 | 2.0719 |
| 1.9395 | 5.0 | 1500 | 1.9387 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
zf31265639/detr-resnet-50-finetuned-10-epochs-boat-dataset |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-finetuned-10-epochs-boat-dataset
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
| [
"ballonboat",
"bigboat",
"boat",
"jetski",
"katamaran",
"sailboat",
"smallboat",
"speedboat",
"wam_v"
] |
arielb30/detr-resnet-50_finetuned_cppe5 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [arielb30/detr-resnet-50_finetuned_cppe5](https://huggingface.co/arielb30/detr-resnet-50_finetuned_cppe5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.316 | 0.8 | 100 | 1.2873 |
| 1.2785 | 1.6 | 200 | 1.2614 |
| 1.2304 | 2.4 | 300 | 1.2703 |
| 1.3002 | 3.2 | 400 | 1.2749 |
| 1.2282 | 4.0 | 500 | 1.2673 |
| 1.2266 | 4.8 | 600 | 1.2323 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"coverall",
"face_shield",
"gloves",
"goggles",
"mask"
] |
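Several of these runs train with `mixed_precision_training: Native AMP`. The core mechanism is PyTorch's `torch.autocast` context: eligible ops run in reduced precision while the master weights stay in float32. A minimal sketch, using a toy linear layer on CPU with bfloat16 (GPU training typically uses float16 together with a `GradScaler`):

```python
import torch

model = torch.nn.Linear(4, 2)  # toy module standing in for the detector
inputs = torch.randn(3, 4)

# Inside autocast, the linear's matmul runs in bfloat16,
# but the parameters themselves remain float32.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(inputs)

print(out.dtype)           # torch.bfloat16
print(model.weight.dtype)  # torch.float32
```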
jaaguptamme/outputs |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
arielb30/detr-resnet-50_finetuned_csod |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_csod
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1804
- eval_runtime: 23.8939
- eval_samples_per_second: 16.029
- eval_steps_per_second: 2.009
- epoch: 2.7835
- step: 1350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
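The reported checkpoint (epoch 2.7835 at step 1350) together with train_batch_size=8 lets us back out the size of the otherwise-unknown training set, assuming no gradient accumulation:

```python
# epoch 2.7835 at step 1350 with batch size 8 (no gradient accumulation assumed)
steps, epoch, batch_size = 1350, 2.7835, 8
steps_per_epoch = steps / epoch                 # ~485 optimizer steps per epoch
train_samples = steps_per_epoch * batch_size    # ~3880 training images
```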
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"ct",
"cthead",
"t",
"thead"
] |
arielb30/detr-resnet-50_finetuned_csgo |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_csgo
This model is a fine-tuned version of [arielb30/detr-resnet-50_finetuned_csgo](https://huggingface.co/arielb30/detr-resnet-50_finetuned_csgo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9375 | 0.8230 | 200 | 0.9884 |
| 0.9233 | 1.6461 | 400 | 0.9462 |
| 0.9256 | 2.4691 | 600 | 0.9453 |
| 0.8885 | 3.2922 | 800 | 0.9669 |
| 0.898 | 4.1152 | 1000 | 0.9314 |
| 0.8739 | 4.9383 | 1200 | 0.9204 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"ct",
"cthead",
"t",
"thead"
] |
GabrielMI/yolo_finetuned_fruits |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7819
- Map: 0.5883
- Map 50: 0.8521
- Map 75: 0.6633
- Map Small: -1.0
- Map Medium: 0.4917
- Map Large: 0.6223
- Mar 1: 0.4441
- Mar 10: 0.7224
- Mar 100: 0.7722
- Mar Small: -1.0
- Mar Medium: 0.6417
- Mar Large: 0.7892
- Map Banana: 0.4472
- Mar 100 Banana: 0.7275
- Map Orange: 0.6126
- Mar 100 Orange: 0.7833
- Map Apple: 0.7051
- Mar 100 Apple: 0.8057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
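With `lr_scheduler_type: cosine` and no warmup, the learning rate follows a half-cosine from 5e-05 down to 0 over the run. A minimal sketch (the results table below shows 60 steps per epoch, so 30 epochs is 1800 steps):

```python
import math

# Cosine learning-rate decay with zero warmup (a sketch of
# lr_scheduler_type: cosine, not the exact Trainer implementation).
def cosine_lr(step, total_steps, base_lr=5e-5):
    progress = step / total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

total = 1800  # 30 epochs x 60 steps/epoch, per the results table
lrs = [cosine_lr(s, total) for s in (0, 900, 1800)]  # start, midpoint, end
```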
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Banana | Mar 100 Banana | Map Orange | Mar 100 Orange | Map Apple | Mar 100 Apple |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|
| No log | 1.0 | 60 | 1.9158 | 0.0195 | 0.0636 | 0.0063 | -1.0 | 0.027 | 0.0198 | 0.0473 | 0.1897 | 0.3615 | -1.0 | 0.2183 | 0.3761 | 0.04 | 0.385 | 0.0011 | 0.0881 | 0.0174 | 0.6114 |
| No log | 2.0 | 120 | 2.0371 | 0.0268 | 0.078 | 0.012 | -1.0 | 0.0374 | 0.0262 | 0.0794 | 0.2079 | 0.3617 | -1.0 | 0.2 | 0.3771 | 0.039 | 0.4075 | 0.0053 | 0.1119 | 0.0361 | 0.5657 |
| No log | 3.0 | 180 | 1.4488 | 0.0432 | 0.1226 | 0.0229 | -1.0 | 0.0661 | 0.0401 | 0.1905 | 0.3398 | 0.5017 | -1.0 | 0.2583 | 0.5219 | 0.069 | 0.625 | 0.0333 | 0.2714 | 0.0272 | 0.6086 |
| No log | 4.0 | 240 | 1.2716 | 0.0733 | 0.1622 | 0.0681 | -1.0 | 0.1318 | 0.0715 | 0.242 | 0.4183 | 0.6169 | -1.0 | 0.3967 | 0.6399 | 0.0732 | 0.6425 | 0.0524 | 0.4452 | 0.0943 | 0.7629 |
| No log | 5.0 | 300 | 1.1472 | 0.1133 | 0.2136 | 0.1094 | -1.0 | 0.1777 | 0.1156 | 0.2851 | 0.48 | 0.6252 | -1.0 | 0.405 | 0.6456 | 0.1031 | 0.7 | 0.1103 | 0.4071 | 0.1265 | 0.7686 |
| No log | 6.0 | 360 | 1.1300 | 0.1191 | 0.2449 | 0.1125 | -1.0 | 0.1469 | 0.1307 | 0.2699 | 0.4715 | 0.6694 | -1.0 | 0.4333 | 0.6956 | 0.0991 | 0.67 | 0.1202 | 0.6095 | 0.1379 | 0.7286 |
| No log | 7.0 | 420 | 1.0298 | 0.1813 | 0.3059 | 0.1929 | -1.0 | 0.2227 | 0.2027 | 0.3479 | 0.5318 | 0.6733 | -1.0 | 0.45 | 0.6983 | 0.1167 | 0.67 | 0.2302 | 0.55 | 0.197 | 0.8 |
| No log | 8.0 | 480 | 1.0293 | 0.2451 | 0.4266 | 0.2668 | -1.0 | 0.2595 | 0.2725 | 0.3298 | 0.5753 | 0.7069 | -1.0 | 0.5267 | 0.7266 | 0.1943 | 0.7125 | 0.3182 | 0.6738 | 0.2229 | 0.7343 |
| 1.2845 | 9.0 | 540 | 1.0215 | 0.3761 | 0.6485 | 0.3915 | -1.0 | 0.4188 | 0.3996 | 0.3375 | 0.6398 | 0.7336 | -1.0 | 0.6383 | 0.7473 | 0.2155 | 0.6775 | 0.428 | 0.7405 | 0.4848 | 0.7829 |
| 1.2845 | 10.0 | 600 | 0.9666 | 0.4652 | 0.7264 | 0.5036 | -1.0 | 0.3976 | 0.497 | 0.3696 | 0.6634 | 0.7315 | -1.0 | 0.5817 | 0.7512 | 0.3139 | 0.6775 | 0.4658 | 0.7286 | 0.616 | 0.7886 |
| 1.2845 | 11.0 | 660 | 0.9365 | 0.4826 | 0.7627 | 0.5587 | -1.0 | 0.4124 | 0.5147 | 0.3787 | 0.6606 | 0.7163 | -1.0 | 0.505 | 0.7414 | 0.3238 | 0.6875 | 0.4915 | 0.7071 | 0.6327 | 0.7543 |
| 1.2845 | 12.0 | 720 | 0.9472 | 0.4644 | 0.7652 | 0.5261 | -1.0 | 0.3875 | 0.5056 | 0.3719 | 0.6594 | 0.7294 | -1.0 | 0.5717 | 0.7484 | 0.3286 | 0.7025 | 0.4983 | 0.7143 | 0.5663 | 0.7714 |
| 1.2845 | 13.0 | 780 | 0.9087 | 0.4966 | 0.764 | 0.5557 | -1.0 | 0.4804 | 0.5252 | 0.3921 | 0.679 | 0.7517 | -1.0 | 0.6483 | 0.7656 | 0.3481 | 0.71 | 0.5176 | 0.7595 | 0.624 | 0.7857 |
| 1.2845 | 14.0 | 840 | 0.8610 | 0.5198 | 0.7753 | 0.5692 | -1.0 | 0.4833 | 0.5606 | 0.4232 | 0.7004 | 0.7477 | -1.0 | 0.6633 | 0.7606 | 0.409 | 0.685 | 0.5204 | 0.7667 | 0.63 | 0.7914 |
| 1.2845 | 15.0 | 900 | 0.8564 | 0.5518 | 0.8086 | 0.6727 | -1.0 | 0.5648 | 0.5817 | 0.411 | 0.6983 | 0.7569 | -1.0 | 0.645 | 0.7717 | 0.4321 | 0.715 | 0.5533 | 0.7643 | 0.6701 | 0.7914 |
| 1.2845 | 16.0 | 960 | 0.8996 | 0.5348 | 0.8088 | 0.6341 | -1.0 | 0.4901 | 0.5621 | 0.4183 | 0.6793 | 0.745 | -1.0 | 0.6383 | 0.7595 | 0.4119 | 0.6975 | 0.5284 | 0.7405 | 0.6642 | 0.7971 |
| 0.8009 | 17.0 | 1020 | 0.8437 | 0.5527 | 0.8203 | 0.6544 | -1.0 | 0.4749 | 0.5871 | 0.4243 | 0.6989 | 0.7535 | -1.0 | 0.6067 | 0.7722 | 0.4025 | 0.71 | 0.5727 | 0.7476 | 0.683 | 0.8029 |
| 0.8009 | 18.0 | 1080 | 0.8433 | 0.5625 | 0.8238 | 0.6682 | -1.0 | 0.4952 | 0.5982 | 0.4334 | 0.6974 | 0.7577 | -1.0 | 0.5983 | 0.7777 | 0.407 | 0.7175 | 0.5929 | 0.7643 | 0.6876 | 0.7914 |
| 0.8009 | 19.0 | 1140 | 0.8158 | 0.5855 | 0.8359 | 0.6588 | -1.0 | 0.5315 | 0.614 | 0.4387 | 0.7157 | 0.7715 | -1.0 | 0.6267 | 0.7896 | 0.4249 | 0.735 | 0.6071 | 0.7738 | 0.7245 | 0.8057 |
| 0.8009 | 20.0 | 1200 | 0.7977 | 0.586 | 0.8433 | 0.6602 | -1.0 | 0.5306 | 0.6157 | 0.4415 | 0.7192 | 0.7753 | -1.0 | 0.6433 | 0.7929 | 0.4119 | 0.7225 | 0.6322 | 0.7833 | 0.7138 | 0.82 |
| 0.8009 | 21.0 | 1260 | 0.8195 | 0.5916 | 0.8465 | 0.6581 | -1.0 | 0.5731 | 0.6166 | 0.442 | 0.7196 | 0.7795 | -1.0 | 0.6733 | 0.7941 | 0.4367 | 0.73 | 0.616 | 0.7857 | 0.7222 | 0.8229 |
| 0.8009 | 22.0 | 1320 | 0.7861 | 0.5915 | 0.8481 | 0.6645 | -1.0 | 0.5399 | 0.619 | 0.4396 | 0.7219 | 0.7785 | -1.0 | 0.6583 | 0.7943 | 0.4391 | 0.735 | 0.6303 | 0.7976 | 0.7052 | 0.8029 |
| 0.8009 | 23.0 | 1380 | 0.8101 | 0.5835 | 0.848 | 0.6618 | -1.0 | 0.5154 | 0.6151 | 0.4409 | 0.7128 | 0.7804 | -1.0 | 0.64 | 0.7979 | 0.4427 | 0.7475 | 0.614 | 0.7881 | 0.6937 | 0.8057 |
| 0.8009 | 24.0 | 1440 | 0.7936 | 0.5912 | 0.8462 | 0.6779 | -1.0 | 0.5577 | 0.6196 | 0.4438 | 0.7217 | 0.7819 | -1.0 | 0.6733 | 0.7962 | 0.4422 | 0.7425 | 0.6164 | 0.7833 | 0.715 | 0.82 |
| 0.5971 | 25.0 | 1500 | 0.7935 | 0.5879 | 0.8557 | 0.6645 | -1.0 | 0.4766 | 0.6217 | 0.441 | 0.7235 | 0.7775 | -1.0 | 0.6683 | 0.7921 | 0.4412 | 0.7325 | 0.6318 | 0.8 | 0.6907 | 0.8 |
| 0.5971 | 26.0 | 1560 | 0.7936 | 0.5867 | 0.854 | 0.6559 | -1.0 | 0.4773 | 0.6209 | 0.4406 | 0.719 | 0.7754 | -1.0 | 0.6417 | 0.7922 | 0.4459 | 0.74 | 0.6115 | 0.7833 | 0.7028 | 0.8029 |
| 0.5971 | 27.0 | 1620 | 0.7856 | 0.5904 | 0.8561 | 0.6682 | -1.0 | 0.5188 | 0.6238 | 0.4441 | 0.7217 | 0.7748 | -1.0 | 0.6417 | 0.7919 | 0.4463 | 0.7325 | 0.6143 | 0.7833 | 0.7105 | 0.8086 |
| 0.5971 | 28.0 | 1680 | 0.7838 | 0.5918 | 0.8561 | 0.6678 | -1.0 | 0.4937 | 0.6265 | 0.4448 | 0.7231 | 0.7746 | -1.0 | 0.6417 | 0.7919 | 0.4458 | 0.7275 | 0.6251 | 0.7905 | 0.7047 | 0.8057 |
| 0.5971 | 29.0 | 1740 | 0.7819 | 0.592 | 0.8569 | 0.6674 | -1.0 | 0.4967 | 0.6266 | 0.4457 | 0.724 | 0.7738 | -1.0 | 0.6417 | 0.791 | 0.4476 | 0.7275 | 0.6235 | 0.7881 | 0.7047 | 0.8057 |
| 0.5971 | 30.0 | 1800 | 0.7819 | 0.5883 | 0.8521 | 0.6633 | -1.0 | 0.4917 | 0.6223 | 0.4441 | 0.7224 | 0.7722 | -1.0 | 0.6417 | 0.7892 | 0.4472 | 0.7275 | 0.6126 | 0.7833 | 0.7051 | 0.8057 |
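The "Map 50" and "Map 75" columns above are COCO-style mean average precision at IoU thresholds of 0.50 and 0.75. The IoU underlying those thresholds is simple to compute for axis-aligned `(x1, y1, x2, y2)` boxes; a minimal sketch:

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection with IoU in [0.5, 0.75) counts as a true positive for
# Map 50 but not for Map 75 — one reason Map 75 is the lower number.
iou = box_iou((0, 0, 10, 10), (0, 0, 10, 6))
```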
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"banana",
"orange",
"apple"
] |
Hoixi/table-transformer-structure-recognition-v1.1-all-finetuned-v4 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# table-transformer-structure-recognition-v1.1-all-finetuned-v4
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition-v1.1-all](https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all) on the tr-fin_table-dataset-v4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
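Two things follow from these settings and the results table below. First, 50 steps per 12.5 epochs means 4 optimizer steps per epoch, so at train_batch_size=4 the training set is roughly 16 images (assuming no gradient accumulation). Second, `lr_scheduler_type: linear` with `training_steps: 1000` decays the learning rate linearly from 5e-06 to 0:

```python
# 50 steps per 12.5 epochs (per the results table) -> 4 steps/epoch;
# with batch size 4, that implies ~16 training images (no accumulation assumed).
steps_per_epoch = 50 / 12.5
approx_train_images = steps_per_epoch * 4

# Linear decay from base_lr to 0 over the fixed step budget
# (a sketch of lr_scheduler_type: linear with zero warmup).
def linear_lr(step, total_steps=1000, base_lr=5e-6):
    return base_lr * max(0.0, 1.0 - step / total_steps)

lr_mid = linear_lr(500)  # halfway through training
```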
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8161 | 12.5 | 50 | 1.8594 |
| 2.1356 | 25.0 | 100 | 1.8244 |
| 1.9666 | 37.5 | 150 | 1.7812 |
| 2.686 | 50.0 | 200 | 1.7117 |
| 1.8873 | 62.5 | 250 | 1.6662 |
| 2.0797 | 75.0 | 300 | 1.6360 |
| 2.2612 | 87.5 | 350 | 1.6419 |
| 1.954 | 100.0 | 400 | 1.6110 |
| 2.0358 | 112.5 | 450 | 1.6159 |
| 1.9712 | 125.0 | 500 | 1.6164 |
| 2.1658 | 137.5 | 550 | 1.6242 |
| 2.7702 | 150.0 | 600 | 1.6097 |
| 1.9429 | 162.5 | 650 | 1.6083 |
| 1.947 | 175.0 | 700 | 1.5989 |
| 2.0561 | 187.5 | 750 | 1.6029 |
| 2.0323 | 200.0 | 800 | 1.5877 |
| 2.0326 | 212.5 | 850 | 1.5870 |
| 1.7113 | 225.0 | 900 | 1.5835 |
| 1.6647 | 237.5 | 950 | 1.5811 |
| 2.2978 | 250.0 | 1000 | 1.5814 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.0
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
Unax14/yolo_finetuned_fruits |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8251
- Map: 0.5689
- Map 50: 0.837
- Map 75: 0.6378
- Map Small: -1.0
- Map Medium: 0.6185
- Map Large: 0.5762
- Mar 1: 0.4035
- Mar 10: 0.7088
- Mar 100: 0.7653
- Mar Small: -1.0
- Mar Medium: 0.7429
- Mar Large: 0.7707
- Map Banana: 0.4416
- Mar 100 Banana: 0.725
- Map Orange: 0.6177
- Mar 100 Orange: 0.7881
- Map Apple: 0.6474
- Mar 100 Apple: 0.7829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Banana | Mar 100 Banana | Map Orange | Mar 100 Orange | Map Apple | Mar 100 Apple |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|
| No log | 1.0 | 60 | 1.9168 | 0.011 | 0.0267 | 0.0081 | -1.0 | 0.0161 | 0.0148 | 0.0442 | 0.1565 | 0.3206 | -1.0 | 0.1 | 0.3485 | 0.0136 | 0.4275 | 0.0 | 0.0 | 0.0193 | 0.5343 |
| No log | 2.0 | 120 | 1.5461 | 0.0412 | 0.0965 | 0.0315 | -1.0 | 0.1493 | 0.0399 | 0.1579 | 0.273 | 0.449 | -1.0 | 0.3786 | 0.4512 | 0.0342 | 0.57 | 0.0194 | 0.1143 | 0.07 | 0.6629 |
| No log | 3.0 | 180 | 1.2702 | 0.0734 | 0.1671 | 0.0639 | -1.0 | 0.1017 | 0.076 | 0.2463 | 0.387 | 0.5636 | -1.0 | 0.4357 | 0.5795 | 0.0961 | 0.615 | 0.0443 | 0.3214 | 0.0799 | 0.7543 |
| No log | 4.0 | 240 | 1.2423 | 0.0813 | 0.1613 | 0.0758 | -1.0 | 0.2779 | 0.0743 | 0.2702 | 0.4558 | 0.6172 | -1.0 | 0.5357 | 0.6285 | 0.0907 | 0.6125 | 0.0486 | 0.4905 | 0.1046 | 0.7486 |
| No log | 5.0 | 300 | 1.2186 | 0.1002 | 0.1958 | 0.0928 | -1.0 | 0.2011 | 0.0965 | 0.2551 | 0.4875 | 0.595 | -1.0 | 0.4643 | 0.612 | 0.0919 | 0.6275 | 0.0925 | 0.4262 | 0.1163 | 0.7314 |
| No log | 6.0 | 360 | 1.0360 | 0.1936 | 0.3298 | 0.2149 | -1.0 | 0.3235 | 0.1842 | 0.3295 | 0.5887 | 0.6991 | -1.0 | 0.5857 | 0.7157 | 0.1859 | 0.705 | 0.1827 | 0.581 | 0.2124 | 0.8114 |
| No log | 7.0 | 420 | 1.0435 | 0.3418 | 0.5496 | 0.3768 | -1.0 | 0.4263 | 0.3516 | 0.3779 | 0.6207 | 0.7223 | -1.0 | 0.6429 | 0.7387 | 0.2188 | 0.6575 | 0.3071 | 0.6952 | 0.4995 | 0.8143 |
| No log | 8.0 | 480 | 0.9763 | 0.3733 | 0.5963 | 0.4304 | -1.0 | 0.4576 | 0.384 | 0.3477 | 0.6131 | 0.7203 | -1.0 | 0.6643 | 0.7299 | 0.2579 | 0.6875 | 0.3948 | 0.7619 | 0.4673 | 0.7114 |
| 1.2819 | 9.0 | 540 | 0.9729 | 0.4048 | 0.6491 | 0.4533 | -1.0 | 0.4535 | 0.4208 | 0.3606 | 0.6365 | 0.7297 | -1.0 | 0.6786 | 0.7392 | 0.317 | 0.6925 | 0.4198 | 0.7595 | 0.4778 | 0.7371 |
| 1.2819 | 10.0 | 600 | 0.9867 | 0.4522 | 0.7245 | 0.5204 | -1.0 | 0.5405 | 0.4528 | 0.3633 | 0.6438 | 0.733 | -1.0 | 0.65 | 0.7462 | 0.3247 | 0.71 | 0.471 | 0.7405 | 0.561 | 0.7486 |
| 1.2819 | 11.0 | 660 | 0.8974 | 0.4976 | 0.7352 | 0.561 | -1.0 | 0.6327 | 0.4956 | 0.394 | 0.6717 | 0.7571 | -1.0 | 0.7214 | 0.7652 | 0.3346 | 0.705 | 0.5363 | 0.7833 | 0.622 | 0.7829 |
| 1.2819 | 12.0 | 720 | 0.9062 | 0.5042 | 0.8019 | 0.566 | -1.0 | 0.5781 | 0.5131 | 0.3774 | 0.6703 | 0.7594 | -1.0 | 0.7071 | 0.7702 | 0.3796 | 0.715 | 0.5031 | 0.769 | 0.6301 | 0.7943 |
| 1.2819 | 13.0 | 780 | 0.8927 | 0.5136 | 0.79 | 0.5867 | -1.0 | 0.6444 | 0.514 | 0.3724 | 0.6983 | 0.7641 | -1.0 | 0.7286 | 0.7736 | 0.3486 | 0.7 | 0.5547 | 0.781 | 0.6374 | 0.8114 |
| 1.2819 | 14.0 | 840 | 0.9009 | 0.507 | 0.7814 | 0.5461 | -1.0 | 0.5907 | 0.5186 | 0.3919 | 0.691 | 0.7621 | -1.0 | 0.7 | 0.7754 | 0.3471 | 0.7025 | 0.5375 | 0.7667 | 0.6364 | 0.8171 |
| 1.2819 | 15.0 | 900 | 0.8588 | 0.5349 | 0.7915 | 0.607 | -1.0 | 0.5704 | 0.5479 | 0.404 | 0.6791 | 0.7604 | -1.0 | 0.7143 | 0.7677 | 0.3818 | 0.7425 | 0.5783 | 0.7643 | 0.6445 | 0.7743 |
| 1.2819 | 16.0 | 960 | 0.8809 | 0.5314 | 0.8154 | 0.599 | -1.0 | 0.5413 | 0.5484 | 0.4085 | 0.6689 | 0.7546 | -1.0 | 0.7143 | 0.7613 | 0.4064 | 0.7225 | 0.5545 | 0.7643 | 0.6334 | 0.7771 |
| 0.7 | 17.0 | 1020 | 0.8626 | 0.5402 | 0.823 | 0.601 | -1.0 | 0.5705 | 0.5557 | 0.4038 | 0.6979 | 0.767 | -1.0 | 0.7214 | 0.7739 | 0.4157 | 0.7525 | 0.5728 | 0.7857 | 0.632 | 0.7629 |
| 0.7 | 18.0 | 1080 | 0.8723 | 0.5431 | 0.8142 | 0.615 | -1.0 | 0.5579 | 0.556 | 0.3902 | 0.6911 | 0.7657 | -1.0 | 0.7357 | 0.7717 | 0.4201 | 0.73 | 0.5923 | 0.7786 | 0.617 | 0.7886 |
| 0.7 | 19.0 | 1140 | 0.8407 | 0.558 | 0.8205 | 0.6471 | -1.0 | 0.5592 | 0.5833 | 0.4172 | 0.7085 | 0.7793 | -1.0 | 0.7286 | 0.7905 | 0.4215 | 0.725 | 0.5807 | 0.7786 | 0.6719 | 0.8343 |
| 0.7 | 20.0 | 1200 | 0.8675 | 0.5656 | 0.8479 | 0.6415 | -1.0 | 0.5875 | 0.5785 | 0.4039 | 0.697 | 0.7698 | -1.0 | 0.75 | 0.7743 | 0.4318 | 0.735 | 0.6069 | 0.7857 | 0.6579 | 0.7886 |
| 0.7 | 21.0 | 1260 | 0.8636 | 0.5601 | 0.8313 | 0.6211 | -1.0 | 0.6281 | 0.5637 | 0.4085 | 0.6962 | 0.7611 | -1.0 | 0.7357 | 0.7662 | 0.4335 | 0.73 | 0.607 | 0.7905 | 0.6399 | 0.7629 |
| 0.7 | 22.0 | 1320 | 0.8463 | 0.567 | 0.827 | 0.6541 | -1.0 | 0.6092 | 0.5758 | 0.4168 | 0.7023 | 0.7797 | -1.0 | 0.7357 | 0.7888 | 0.4327 | 0.73 | 0.6211 | 0.8119 | 0.6472 | 0.7971 |
| 0.7 | 23.0 | 1380 | 0.8397 | 0.5704 | 0.8411 | 0.6472 | -1.0 | 0.6288 | 0.579 | 0.4068 | 0.7036 | 0.7723 | -1.0 | 0.7357 | 0.7804 | 0.4259 | 0.7225 | 0.6243 | 0.8 | 0.661 | 0.7943 |
| 0.7 | 24.0 | 1440 | 0.8512 | 0.5627 | 0.829 | 0.6446 | -1.0 | 0.6038 | 0.5722 | 0.404 | 0.7019 | 0.7693 | -1.0 | 0.75 | 0.7748 | 0.4261 | 0.725 | 0.6236 | 0.7857 | 0.6383 | 0.7971 |
| 0.5208 | 25.0 | 1500 | 0.8526 | 0.5704 | 0.843 | 0.6505 | -1.0 | 0.6087 | 0.584 | 0.412 | 0.7165 | 0.7754 | -1.0 | 0.7429 | 0.7837 | 0.4288 | 0.7175 | 0.6141 | 0.7857 | 0.6684 | 0.8229 |
| 0.5208 | 26.0 | 1560 | 0.8459 | 0.5683 | 0.8447 | 0.6421 | -1.0 | 0.6012 | 0.5795 | 0.4175 | 0.7115 | 0.7681 | -1.0 | 0.7429 | 0.7748 | 0.4402 | 0.72 | 0.6091 | 0.7786 | 0.6555 | 0.8057 |
| 0.5208 | 27.0 | 1620 | 0.8259 | 0.5724 | 0.847 | 0.6408 | -1.0 | 0.6195 | 0.5806 | 0.4146 | 0.7185 | 0.7724 | -1.0 | 0.75 | 0.7789 | 0.4418 | 0.7225 | 0.6132 | 0.7833 | 0.6621 | 0.8114 |
| 0.5208 | 28.0 | 1680 | 0.8257 | 0.5728 | 0.8404 | 0.6418 | -1.0 | 0.6191 | 0.5812 | 0.4127 | 0.718 | 0.7744 | -1.0 | 0.7429 | 0.7818 | 0.4424 | 0.7275 | 0.616 | 0.7929 | 0.66 | 0.8029 |
| 0.5208 | 29.0 | 1740 | 0.8261 | 0.5689 | 0.837 | 0.6377 | -1.0 | 0.6193 | 0.5761 | 0.4035 | 0.708 | 0.7661 | -1.0 | 0.75 | 0.7707 | 0.4415 | 0.725 | 0.6176 | 0.7905 | 0.6476 | 0.7829 |
| 0.5208 | 30.0 | 1800 | 0.8251 | 0.5689 | 0.837 | 0.6378 | -1.0 | 0.6185 | 0.5762 | 0.4035 | 0.7088 | 0.7653 | -1.0 | 0.7429 | 0.7707 | 0.4416 | 0.725 | 0.6177 | 0.7881 | 0.6474 | 0.7829 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"banana",
"orange",
"apple"
] |
Igmata/yolo_finetuned_fruits |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8075
- Map: 0.5492
- Map 50: 0.8129
- Map 75: 0.6184
- Map Small: -1.0
- Map Medium: 0.5412
- Map Large: 0.5745
- Mar 1: 0.4367
- Mar 10: 0.7285
- Mar 100: 0.7829
- Mar Small: -1.0
- Mar Medium: 0.7643
- Mar Large: 0.7895
- Map Banana: 0.4035
- Mar 100 Banana: 0.73
- Map Orange: 0.5513
- Mar 100 Orange: 0.7929
- Map Apple: 0.6929
- Mar 100 Apple: 0.8257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Banana | Mar 100 Banana | Map Orange | Mar 100 Orange | Map Apple | Mar 100 Apple |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|
| No log | 1.0 | 60 | 2.1930 | 0.0025 | 0.0072 | 0.0016 | -1.0 | 0.0007 | 0.0035 | 0.0161 | 0.0825 | 0.2242 | -1.0 | 0.0429 | 0.2487 | 0.0015 | 0.2925 | 0.0 | 0.0 | 0.0061 | 0.38 |
| No log | 2.0 | 120 | 1.9326 | 0.011 | 0.0299 | 0.0064 | -1.0 | 0.0044 | 0.0134 | 0.0758 | 0.2107 | 0.3813 | -1.0 | 0.1214 | 0.416 | 0.0137 | 0.45 | 0.0062 | 0.1738 | 0.0131 | 0.52 |
| No log | 3.0 | 180 | 1.6307 | 0.0352 | 0.0947 | 0.0195 | -1.0 | 0.0531 | 0.0355 | 0.1342 | 0.303 | 0.503 | -1.0 | 0.3214 | 0.5277 | 0.0413 | 0.5375 | 0.0424 | 0.3714 | 0.0218 | 0.6 |
| No log | 4.0 | 240 | 1.6542 | 0.0558 | 0.1344 | 0.0522 | -1.0 | 0.1515 | 0.0482 | 0.0944 | 0.2671 | 0.4604 | -1.0 | 0.35 | 0.4725 | 0.0524 | 0.5075 | 0.0662 | 0.3881 | 0.0487 | 0.4857 |
| No log | 5.0 | 300 | 1.6691 | 0.0388 | 0.1063 | 0.0274 | -1.0 | 0.0944 | 0.0364 | 0.1583 | 0.2932 | 0.4751 | -1.0 | 0.35 | 0.4917 | 0.043 | 0.5 | 0.0359 | 0.3452 | 0.0374 | 0.58 |
| No log | 6.0 | 360 | 1.1086 | 0.0826 | 0.1345 | 0.0841 | -1.0 | 0.2117 | 0.0861 | 0.2797 | 0.4782 | 0.7029 | -1.0 | 0.5929 | 0.7177 | 0.071 | 0.7225 | 0.1168 | 0.6262 | 0.06 | 0.76 |
| No log | 7.0 | 420 | 1.1675 | 0.0814 | 0.165 | 0.072 | -1.0 | 0.2427 | 0.086 | 0.2683 | 0.4722 | 0.6522 | -1.0 | 0.4929 | 0.6746 | 0.0837 | 0.6575 | 0.0851 | 0.5619 | 0.0754 | 0.7371 |
| No log | 8.0 | 480 | 1.0365 | 0.1206 | 0.207 | 0.1248 | -1.0 | 0.2508 | 0.1171 | 0.3042 | 0.5348 | 0.7123 | -1.0 | 0.6143 | 0.7282 | 0.0767 | 0.6875 | 0.1441 | 0.681 | 0.1411 | 0.7686 |
| 1.512 | 9.0 | 540 | 1.0794 | 0.1506 | 0.2487 | 0.1703 | -1.0 | 0.2685 | 0.1558 | 0.3506 | 0.5771 | 0.6842 | -1.0 | 0.4857 | 0.7162 | 0.0875 | 0.645 | 0.1822 | 0.6619 | 0.1823 | 0.7457 |
| 1.512 | 10.0 | 600 | 0.9685 | 0.2052 | 0.3178 | 0.2417 | -1.0 | 0.3075 | 0.2088 | 0.3638 | 0.5795 | 0.713 | -1.0 | 0.5571 | 0.7386 | 0.1142 | 0.66 | 0.2011 | 0.6619 | 0.3002 | 0.8171 |
| 1.512 | 11.0 | 660 | 1.0193 | 0.2702 | 0.4348 | 0.3242 | -1.0 | 0.3423 | 0.287 | 0.3652 | 0.6083 | 0.6889 | -1.0 | 0.6429 | 0.699 | 0.1441 | 0.64 | 0.263 | 0.6952 | 0.4036 | 0.7314 |
| 1.512 | 12.0 | 720 | 0.9402 | 0.3339 | 0.5175 | 0.3808 | -1.0 | 0.358 | 0.3523 | 0.3898 | 0.637 | 0.7244 | -1.0 | 0.6286 | 0.7421 | 0.2116 | 0.67 | 0.3413 | 0.7262 | 0.4489 | 0.7771 |
| 1.512 | 13.0 | 780 | 0.9065 | 0.4067 | 0.6265 | 0.4574 | -1.0 | 0.5061 | 0.4159 | 0.3831 | 0.6531 | 0.7409 | -1.0 | 0.6286 | 0.76 | 0.2899 | 0.705 | 0.3526 | 0.7262 | 0.5776 | 0.7914 |
| 1.512 | 14.0 | 840 | 0.8992 | 0.4333 | 0.6571 | 0.4951 | -1.0 | 0.5391 | 0.4405 | 0.3823 | 0.679 | 0.7469 | -1.0 | 0.6929 | 0.7585 | 0.2879 | 0.6975 | 0.4142 | 0.7405 | 0.5978 | 0.8029 |
| 1.512 | 15.0 | 900 | 0.9158 | 0.4523 | 0.6711 | 0.5006 | -1.0 | 0.567 | 0.457 | 0.3885 | 0.6792 | 0.7503 | -1.0 | 0.7429 | 0.7554 | 0.3111 | 0.6875 | 0.4015 | 0.7548 | 0.6444 | 0.8086 |
| 1.512 | 16.0 | 960 | 0.8610 | 0.4903 | 0.7499 | 0.5371 | -1.0 | 0.5782 | 0.4965 | 0.4083 | 0.6934 | 0.7603 | -1.0 | 0.75 | 0.7656 | 0.3468 | 0.6975 | 0.4646 | 0.769 | 0.6594 | 0.8143 |
| 0.8176 | 17.0 | 1020 | 0.8541 | 0.5026 | 0.7497 | 0.5756 | -1.0 | 0.6024 | 0.509 | 0.4079 | 0.7004 | 0.7741 | -1.0 | 0.7429 | 0.783 | 0.363 | 0.7175 | 0.5092 | 0.7905 | 0.6356 | 0.8143 |
| 0.8176 | 18.0 | 1080 | 0.8627 | 0.4944 | 0.7614 | 0.5615 | -1.0 | 0.58 | 0.5081 | 0.4067 | 0.6975 | 0.7571 | -1.0 | 0.7 | 0.7686 | 0.3636 | 0.715 | 0.501 | 0.7476 | 0.6185 | 0.8086 |
| 0.8176 | 19.0 | 1140 | 0.8270 | 0.5227 | 0.7928 | 0.5967 | -1.0 | 0.589 | 0.5339 | 0.4137 | 0.7212 | 0.7767 | -1.0 | 0.7143 | 0.789 | 0.3864 | 0.735 | 0.5444 | 0.781 | 0.6372 | 0.8143 |
| 0.8176 | 20.0 | 1200 | 0.8100 | 0.5428 | 0.807 | 0.629 | -1.0 | 0.5925 | 0.561 | 0.4291 | 0.7177 | 0.7721 | -1.0 | 0.7571 | 0.7787 | 0.4188 | 0.7125 | 0.553 | 0.7952 | 0.6567 | 0.8086 |
| 0.8176 | 21.0 | 1260 | 0.8255 | 0.5424 | 0.8012 | 0.6145 | -1.0 | 0.5723 | 0.5611 | 0.4269 | 0.7175 | 0.7674 | -1.0 | 0.7286 | 0.7775 | 0.3995 | 0.7075 | 0.5572 | 0.7833 | 0.6703 | 0.8114 |
| 0.8176 | 22.0 | 1320 | 0.8203 | 0.5447 | 0.8214 | 0.6081 | -1.0 | 0.5544 | 0.567 | 0.4308 | 0.7186 | 0.7785 | -1.0 | 0.75 | 0.7863 | 0.3999 | 0.7275 | 0.5527 | 0.7881 | 0.6815 | 0.82 |
| 0.8176 | 23.0 | 1380 | 0.8116 | 0.555 | 0.8196 | 0.6291 | -1.0 | 0.5953 | 0.569 | 0.4345 | 0.7297 | 0.7793 | -1.0 | 0.75 | 0.7874 | 0.4045 | 0.725 | 0.5768 | 0.7929 | 0.6836 | 0.82 |
| 0.8176 | 24.0 | 1440 | 0.8178 | 0.5431 | 0.7922 | 0.6252 | -1.0 | 0.5631 | 0.5629 | 0.4217 | 0.7217 | 0.7755 | -1.0 | 0.7357 | 0.7849 | 0.3973 | 0.7275 | 0.5501 | 0.7905 | 0.6819 | 0.8086 |
| 0.6165 | 25.0 | 1500 | 0.8056 | 0.5533 | 0.8126 | 0.6213 | -1.0 | 0.569 | 0.5718 | 0.43 | 0.7249 | 0.779 | -1.0 | 0.75 | 0.787 | 0.4049 | 0.7275 | 0.5546 | 0.781 | 0.7002 | 0.8286 |
| 0.6165 | 26.0 | 1560 | 0.7900 | 0.556 | 0.8143 | 0.6332 | -1.0 | 0.552 | 0.5828 | 0.4417 | 0.7364 | 0.7812 | -1.0 | 0.7643 | 0.7873 | 0.418 | 0.7325 | 0.55 | 0.7881 | 0.7002 | 0.8229 |
| 0.6165 | 27.0 | 1620 | 0.8072 | 0.5466 | 0.8125 | 0.6105 | -1.0 | 0.5431 | 0.5733 | 0.4367 | 0.7327 | 0.7787 | -1.0 | 0.7571 | 0.786 | 0.4051 | 0.725 | 0.5465 | 0.7881 | 0.688 | 0.8229 |
| 0.6165 | 28.0 | 1680 | 0.8077 | 0.5481 | 0.8135 | 0.6199 | -1.0 | 0.5418 | 0.5725 | 0.4351 | 0.7263 | 0.783 | -1.0 | 0.7643 | 0.7899 | 0.4018 | 0.7275 | 0.5497 | 0.7929 | 0.6927 | 0.8286 |
| 0.6165 | 29.0 | 1740 | 0.8085 | 0.5488 | 0.812 | 0.618 | -1.0 | 0.541 | 0.574 | 0.4367 | 0.7277 | 0.7837 | -1.0 | 0.7643 | 0.7905 | 0.4021 | 0.73 | 0.5514 | 0.7952 | 0.6929 | 0.8257 |
| 0.6165 | 30.0 | 1800 | 0.8075 | 0.5492 | 0.8129 | 0.6184 | -1.0 | 0.5412 | 0.5745 | 0.4367 | 0.7285 | 0.7829 | -1.0 | 0.7643 | 0.7895 | 0.4035 | 0.73 | 0.5513 | 0.7929 | 0.6929 | 0.8257 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
| [
"banana",
"orange",
"apple"
] |
Hoixi/TR-Fin-Table-Structure-HoixiFinetuned-v1 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TR-Fin-Table-Structure-HoixiFinetuned-v1
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition-v1.1-all](https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all) on the tr-fin_table-dataset-v4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3718 | 12.5 | 50 | 1.8326 |
| 2.352 | 25.0 | 100 | 1.8074 |
| 1.8852 | 37.5 | 150 | 1.7669 |
| 2.4453 | 50.0 | 200 | 1.6969 |
| 1.7715 | 62.5 | 250 | 1.6691 |
| 1.8492 | 75.0 | 300 | 1.6298 |
| 1.9541 | 87.5 | 350 | 1.6240 |
| 1.6634 | 100.0 | 400 | 1.6063 |
| 2.0306 | 112.5 | 450 | 1.5812 |
| 2.0784 | 125.0 | 500 | 1.5645 |
| 1.7727 | 137.5 | 550 | 1.5720 |
| 2.7045 | 150.0 | 600 | 1.5749 |
| 2.2544 | 162.5 | 650 | 1.5736 |
| 2.06 | 175.0 | 700 | 1.5837 |
| 1.8153 | 187.5 | 750 | 1.5631 |
| 1.9772 | 200.0 | 800 | 1.5747 |
| 2.3578 | 212.5 | 850 | 1.5718 |
| 1.5885 | 225.0 | 900 | 1.5746 |
| 1.8051 | 237.5 | 950 | 1.5705 |
| 2.5822 | 250.0 | 1000 | 1.5728 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.0
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
Hoixi/TR-Fin-Table-Structure-HoixiFinetuned-Overdose |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TR-Fin-Table-Structure-HoixiFinetuned-Overdose
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition-v1.1-all](https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all) on the tr-fin_table dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 10000
- mixed_precision_training: Native AMP
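The results table below again shows 50 steps per 12.5 epochs (4 optimizer steps per epoch), so the full `training_steps: 10000` budget amounts to roughly 2500 passes over the small tr-fin_table training set — presumably the source of the "Overdose" name:

```python
# 50 steps per 12.5 epochs (per the results table) -> 4 steps/epoch,
# so a 10000-step budget implies ~2500 epochs over the training set.
steps_per_epoch = 50 / 12.5
implied_epochs = 10000 / steps_per_epoch
```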
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.5699 | 12.5 | 50 | 1.8167 |
| 2.4601 | 25.0 | 100 | 1.6425 |
| 2.2612 | 37.5 | 150 | 1.6247 |
| 2.2122 | 50.0 | 200 | 1.5704 |
| 1.8015 | 62.5 | 250 | 1.5473 |
| 2.0555 | 75.0 | 300 | 1.5278 |
| 2.3249 | 87.5 | 350 | 1.5308 |
| 1.4308 | 100.0 | 400 | 1.5316 |
| 1.9009 | 112.5 | 450 | 1.5131 |
| 2.2087 | 125.0 | 500 | 1.5006 |
| 2.0473 | 137.5 | 550 | 1.5212 |
| 2.5294 | 150.0 | 600 | 1.5379 |
| 1.9833 | 162.5 | 650 | 1.5652 |
| 1.8314 | 175.0 | 700 | 1.4976 |
| 2.4367 | 187.5 | 750 | 1.5416 |
| 1.9989 | 200.0 | 800 | 1.5043 |
| 1.8754 | 212.5 | 850 | 1.5266 |
| 2.0064 | 225.0 | 900 | 1.5051 |
| 2.0185 | 237.5 | 950 | 1.5313 |
| 1.6792 | 250.0 | 1000 | 1.4967 |
| 1.7105 | 262.5 | 1050 | 1.4732 |
| 1.9245 | 275.0 | 1100 | 1.5032 |
| 1.4862 | 287.5 | 1150 | 1.5018 |
| 2.1234 | 300.0 | 1200 | 1.4667 |
| 1.7904 | 312.5 | 1250 | 1.4761 |
| 2.4147 | 325.0 | 1300 | 1.4684 |
| 1.8115 | 337.5 | 1350 | 1.4711 |
| 1.9528 | 350.0 | 1400 | 1.4457 |
| 1.9951 | 362.5 | 1450 | 1.4618 |
| 1.5924 | 375.0 | 1500 | 1.4597 |
| 1.9166 | 387.5 | 1550 | 1.4513 |
| 1.8729 | 400.0 | 1600 | 1.4332 |
| 2.0891 | 412.5 | 1650 | 1.4299 |
| 2.1724 | 425.0 | 1700 | 1.4578 |
| 2.0618 | 437.5 | 1750 | 1.4791 |
| 1.7166 | 450.0 | 1800 | 1.4734 |
| 2.027 | 462.5 | 1850 | 1.4736 |
| 1.7804 | 475.0 | 1900 | 1.4461 |
| 2.3921 | 487.5 | 1950 | 1.4398 |
| 1.8792 | 500.0 | 2000 | 1.4345 |
| 2.1287 | 512.5 | 2050 | 1.4477 |
| 1.5598 | 525.0 | 2100 | 1.4608 |
| 1.7381 | 537.5 | 2150 | 1.4428 |
| 1.8059 | 550.0 | 2200 | 1.4423 |
| 1.7971 | 562.5 | 2250 | 1.4109 |
| 1.6301 | 575.0 | 2300 | 1.4341 |
| 1.9655 | 587.5 | 2350 | 1.4502 |
| 1.4477 | 600.0 | 2400 | 1.4378 |
| 1.7368 | 612.5 | 2450 | 1.4445 |
| 1.9277 | 625.0 | 2500 | 1.4349 |
| 1.8093 | 637.5 | 2550 | 1.4542 |
| 1.8594 | 650.0 | 2600 | 1.4462 |
| 1.7637 | 662.5 | 2650 | 1.4250 |
| 1.9192 | 675.0 | 2700 | 1.4681 |
| 1.86 | 687.5 | 2750 | 1.4827 |
| 1.8954 | 700.0 | 2800 | 1.4187 |
| 1.4728 | 712.5 | 2850 | 1.4129 |
| 1.6828 | 725.0 | 2900 | 1.4113 |
| 1.7694 | 737.5 | 2950 | 1.4035 |
| 1.805 | 750.0 | 3000 | 1.4134 |
| 1.8506 | 762.5 | 3050 | 1.4135 |
| 1.8127 | 775.0 | 3100 | 1.4057 |
| 1.9829 | 787.5 | 3150 | 1.4102 |
| 2.0491 | 800.0 | 3200 | 1.4216 |
| 1.549 | 812.5 | 3250 | 1.3648 |
| 1.8095 | 825.0 | 3300 | 1.4064 |
| 1.6556 | 837.5 | 3350 | 1.3776 |
| 1.5418 | 850.0 | 3400 | 1.3942 |
| 1.5569 | 862.5 | 3450 | 1.4009 |
| 2.3074 | 875.0 | 3500 | 1.4011 |
| 1.6733 | 887.5 | 3550 | 1.4032 |
| 1.9263 | 900.0 | 3600 | 1.3799 |
| 1.9946 | 912.5 | 3650 | 1.3972 |
| 1.4845 | 925.0 | 3700 | 1.3853 |
| 1.9587 | 937.5 | 3750 | 1.4146 |
| 1.7828 | 950.0 | 3800 | 1.4037 |
| 2.012 | 962.5 | 3850 | 1.4235 |
| 1.817 | 975.0 | 3900 | 1.4007 |
| 2.0893 | 987.5 | 3950 | 1.4095 |
| 2.1338 | 1000.0 | 4000 | 1.3824 |
| 1.8228 | 1012.5 | 4050 | 1.3843 |
| 1.6272 | 1025.0 | 4100 | 1.4122 |
| 1.6202 | 1037.5 | 4150 | 1.3909 |
| 1.4482 | 1050.0 | 4200 | 1.3589 |
| 1.949 | 1062.5 | 4250 | 1.3605 |
| 2.0954 | 1075.0 | 4300 | 1.3869 |
| 1.4728 | 1087.5 | 4350 | 1.3944 |
| 1.5916 | 1100.0 | 4400 | 1.3825 |
| 1.7988 | 1112.5 | 4450 | 1.3682 |
| 1.5051 | 1125.0 | 4500 | 1.3719 |
| 1.8492 | 1137.5 | 4550 | 1.4000 |
| 1.6146 | 1150.0 | 4600 | 1.3886 |
| 1.9732 | 1162.5 | 4650 | 1.3769 |
| 1.7256 | 1175.0 | 4700 | 1.3717 |
| 1.9683 | 1187.5 | 4750 | 1.3849 |
| 1.6818 | 1200.0 | 4800 | 1.3951 |
| 1.5879 | 1212.5 | 4850 | 1.3903 |
| 1.8743 | 1225.0 | 4900 | 1.3988 |
| 1.7887 | 1237.5 | 4950 | 1.3970 |
| 1.7302 | 1250.0 | 5000 | 1.3774 |
| 1.6503 | 1262.5 | 5050 | 1.4183 |
| 1.6207 | 1275.0 | 5100 | 1.3892 |
| 1.9589 | 1287.5 | 5150 | 1.4226 |
| 1.9163 | 1300.0 | 5200 | 1.4142 |
| 1.869 | 1312.5 | 5250 | 1.3777 |
| 1.601 | 1325.0 | 5300 | 1.3743 |
| 1.5548 | 1337.5 | 5350 | 1.3871 |
| 1.6482 | 1350.0 | 5400 | 1.4068 |
| 1.545 | 1362.5 | 5450 | 1.4012 |
| 1.292 | 1375.0 | 5500 | 1.4138 |
| 1.5313 | 1387.5 | 5550 | 1.4066 |
| 1.5981 | 1400.0 | 5600 | 1.4022 |
| 1.6622 | 1412.5 | 5650 | 1.4069 |
| 1.7446 | 1425.0 | 5700 | 1.3957 |
| 1.9459 | 1437.5 | 5750 | 1.4085 |
| 1.6468 | 1450.0 | 5800 | 1.4191 |
| 1.6107 | 1462.5 | 5850 | 1.3973 |
| 1.5986 | 1475.0 | 5900 | 1.3834 |
| 1.6157 | 1487.5 | 5950 | 1.3983 |
| 1.7203 | 1500.0 | 6000 | 1.3696 |
| 1.7985 | 1512.5 | 6050 | 1.3884 |
| 1.9865 | 1525.0 | 6100 | 1.3951 |
| 1.5754 | 1537.5 | 6150 | 1.3935 |
| 1.7058 | 1550.0 | 6200 | 1.3856 |
| 1.7909 | 1562.5 | 6250 | 1.3916 |
| 2.0516 | 1575.0 | 6300 | 1.3532 |
| 1.787 | 1587.5 | 6350 | 1.4099 |
| 1.6804 | 1600.0 | 6400 | 1.4122 |
| 1.8824 | 1612.5 | 6450 | 1.3876 |
| 1.4672 | 1625.0 | 6500 | 1.3845 |
| 1.5871 | 1637.5 | 6550 | 1.3900 |
| 1.899 | 1650.0 | 6600 | 1.3777 |
| 1.3322 | 1662.5 | 6650 | 1.3765 |
| 1.6055 | 1675.0 | 6700 | 1.3556 |
| 2.226 | 1687.5 | 6750 | 1.3798 |
| 1.3981 | 1700.0 | 6800 | 1.3695 |
| 1.6295 | 1712.5 | 6850 | 1.3579 |
| 1.5333 | 1725.0 | 6900 | 1.3714 |
| 1.5442 | 1737.5 | 6950 | 1.3709 |
| 1.2871 | 1750.0 | 7000 | 1.3615 |
| 1.6814 | 1762.5 | 7050 | 1.3742 |
| 1.4199 | 1775.0 | 7100 | 1.3683 |
| 1.6349 | 1787.5 | 7150 | 1.3593 |
| 1.4781 | 1800.0 | 7200 | 1.3633 |
| 1.9904 | 1812.5 | 7250 | 1.3705 |
| 1.6171 | 1825.0 | 7300 | 1.3768 |
| 1.7736 | 1837.5 | 7350 | 1.3753 |
| 1.7629 | 1850.0 | 7400 | 1.3719 |
| 1.6829 | 1862.5 | 7450 | 1.3687 |
| 1.4467 | 1875.0 | 7500 | 1.3606 |
| 1.8322 | 1887.5 | 7550 | 1.3759 |
| 1.9977 | 1900.0 | 7600 | 1.3839 |
| 1.6281 | 1912.5 | 7650 | 1.3877 |
| 1.4727 | 1925.0 | 7700 | 1.3922 |
| 1.739 | 1937.5 | 7750 | 1.3922 |
| 2.0781 | 1950.0 | 7800 | 1.4001 |
| 1.8195 | 1962.5 | 7850 | 1.3875 |
| 1.7775 | 1975.0 | 7900 | 1.3743 |
| 1.5131 | 1987.5 | 7950 | 1.3774 |
| 1.5687 | 2000.0 | 8000 | 1.3767 |
| 1.6019 | 2012.5 | 8050 | 1.3773 |
| 1.2421 | 2025.0 | 8100 | 1.3663 |
| 1.5391 | 2037.5 | 8150 | 1.3599 |
| 1.8665 | 2050.0 | 8200 | 1.3744 |
| 1.7484 | 2062.5 | 8250 | 1.3667 |
| 1.5384 | 2075.0 | 8300 | 1.3483 |
| 1.4885 | 2087.5 | 8350 | 1.3664 |
| 1.8017 | 2100.0 | 8400 | 1.3662 |
| 1.4904 | 2112.5 | 8450 | 1.3577 |
| 1.6576 | 2125.0 | 8500 | 1.3727 |
| 1.5057 | 2137.5 | 8550 | 1.3647 |
| 1.8728 | 2150.0 | 8600 | 1.3558 |
| 1.8287 | 2162.5 | 8650 | 1.3604 |
| 1.4705 | 2175.0 | 8700 | 1.3586 |
| 1.6126 | 2187.5 | 8750 | 1.3818 |
| 1.6838 | 2200.0 | 8800 | 1.3756 |
| 1.5985 | 2212.5 | 8850 | 1.3683 |
| 1.9316 | 2225.0 | 8900 | 1.3554 |
| 1.7605 | 2237.5 | 8950 | 1.3485 |
| 1.8473 | 2250.0 | 9000 | 1.3679 |
| 1.5161 | 2262.5 | 9050 | 1.3440 |
| 1.38 | 2275.0 | 9100 | 1.3578 |
| 1.2987 | 2287.5 | 9150 | 1.3477 |
| 1.6364 | 2300.0 | 9200 | 1.3497 |
| 1.3951 | 2312.5 | 9250 | 1.3630 |
| 1.3344 | 2325.0 | 9300 | 1.3498 |
| 1.3916 | 2337.5 | 9350 | 1.3503 |
| 1.7832 | 2350.0 | 9400 | 1.3502 |
| 1.377 | 2362.5 | 9450 | 1.3512 |
| 1.3797 | 2375.0 | 9500 | 1.3507 |
| 1.4729 | 2387.5 | 9550 | 1.3533 |
| 1.5299 | 2400.0 | 9600 | 1.3544 |
| 1.6858 | 2412.5 | 9650 | 1.3447 |
| 1.3794 | 2425.0 | 9700 | 1.3432 |
| 1.8406 | 2437.5 | 9750 | 1.3449 |
| 1.8643 | 2450.0 | 9800 | 1.3394 |
| 1.5886 | 2462.5 | 9850 | 1.3452 |
| 2.065 | 2475.0 | 9900 | 1.3461 |
| 1.7918 | 2487.5 | 9950 | 1.3447 |
| 1.3398 | 2500.0 | 10000 | 1.3453 |
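A quick sanity check on the numbers above: each 50-step evaluation interval spans 12.5 epochs, so the training set is very small, roughly 16 samples at batch size 4. The sample count is an inference from the table, not stated in the card; sketched out:

```python
# Back-of-the-envelope check on the schedule above. Values are taken from this
# card's hyperparameters and results table; the dataset size is inferred, not stated.
steps_per_eval = 50      # evaluation interval in optimization steps
epochs_per_eval = 12.5   # epochs elapsed between evaluations, per the table
batch_size = 4

steps_per_epoch = steps_per_eval / epochs_per_eval
approx_train_samples = steps_per_epoch * batch_size
print(steps_per_epoch, approx_train_samples)  # 4.0 16.0
```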
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.0
| [
"table",
"table column",
"table row",
"table column header",
"table projected row header",
"table spanning cell"
] |
Ukuleele/results |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
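One thing worth noting about this configuration: with 3 epochs of 82 steps each (246 optimizer steps total, per the results table) but 500 warmup steps, a linear-warmup schedule never finishes warming up, so the learning rate is still ramping when training ends. A minimal sketch of the schedule's shape, assuming the usual Hugging Face linear-with-warmup semantics:

```python
def linear_schedule_with_warmup(step, warmup_steps, total_steps, base_lr):
    # Linear ramp up to base_lr over warmup_steps, then linear decay to zero.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# 3 epochs * 82 steps = 246 total steps, but warmup is 500 steps:
final_lr = linear_schedule_with_warmup(246, 500, 246, 5e-5)
print(final_lr)  # ~2.46e-05, i.e. training ends at roughly half the configured peak LR
```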
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3968 | 1.0 | 82 | 4.4245 |
| 3.108 | 2.0 | 164 | 3.5409 |
| 2.1009 | 3.0 | 246 | 2.2401 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
liish/trainer_output |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
mohadrk/practica_2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# practica_2
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"background",
"kangaroo"
] |
tarvitamm/detr-fashion |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-fashion
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
hiro2law/detr-fashionpedia |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-fashionpedia
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
T4nker/detr-finetuned |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-finetuned
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5936 | 1.0 | 125 | 1.5671 |
| 1.5195 | 2.0 | 250 | 1.4495 |
| 1.4267 | 3.0 | 375 | 1.4426 |
| 1.4248 | 4.0 | 500 | 1.3785 |
| 1.4158 | 5.0 | 625 | 1.3745 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
mart-mihkel/trainer_output |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
SansanderSa/detr_fashionpedia_4cat |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_fashionpedia_4cat
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 94 | 4.4474 |
| 4.35 | 2.0 | 188 | 4.0507 |
| 4.0018 | 3.0 | 282 | 3.4614 |
| 3.4206 | 4.0 | 376 | 2.6122 |
| 2.6373 | 5.0 | 470 | 2.1421 |
| 2.1656 | 6.0 | 564 | 1.9358 |
| 1.9312 | 7.0 | 658 | 1.8075 |
| 1.7873 | 8.0 | 752 | 1.7616 |
| 1.7104 | 9.0 | 846 | 1.7005 |
| 1.6826 | 10.0 | 940 | 1.6789 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.4.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
Nic0ss/dl4cv-nicoss |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dl4cv-nicoss
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5719 | 1.6 | 200 | 1.5520 |
| 1.4191 | 3.2 | 400 | 1.3642 |
| 1.322 | 4.8 | 600 | 1.2540 |
| 1.241 | 6.4 | 800 | 1.2295 |
| 1.1893 | 8.0 | 1000 | 1.1864 |
| 1.1621 | 9.6 | 1200 | 1.1665 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.4.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
franciscomj0901/WearSense-Detector | This model is a fine-tuned version of hustvl/yolos-tiny.
You can find details of model in this github repo -> [fashion-visual-search](https://github.com/yainage90/fashion-visual-search)
And you can find fashion image feature extractor model -> [yainage90/fashion-image-feature-extractor](https://huggingface.co/yainage90/fashion-image-feature-extractor)
This model was trained using a combination of two datasets: [modanet](https://github.com/eBay/modanet) and [fashionpedia](https://fashionpedia.github.io/home/).
The labels are ['bag', 'bottom', 'dress', 'hat', 'shoes', 'outer', 'top'].
The best score, mAP 0.6974, was achieved in the 96th of 100 epochs.
```python
from PIL import Image
import torch
from transformers import YolosImageProcessor, YolosForObjectDetection

device = 'cpu'
if torch.cuda.is_available():
    device = torch.device('cuda')
elif torch.backends.mps.is_available():
    device = torch.device('mps')

ckpt = 'yainage90/fashion-object-detection-yolos-tiny'
image_processor = YolosImageProcessor.from_pretrained(ckpt)
model = YolosForObjectDetection.from_pretrained(ckpt).to(device)

image = Image.open('<path/to/image>').convert('RGB')

with torch.no_grad():
    inputs = image_processor(images=[image], return_tensors="pt")
    outputs = model(**inputs.to(device))

target_sizes = torch.tensor([[image.size[1], image.size[0]]])
results = image_processor.post_process_object_detection(outputs, threshold=0.85, target_sizes=target_sizes)[0]

items = []
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    score = score.item()
    label = label.item()
    box = [i.item() for i in box]
    print(f"{model.config.id2label[label]}: {round(score, 3)} at {box}")
    items.append((score, label, box))
```
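For reference, `post_process_object_detection` rescales the model's normalized center-format boxes into the pixel corner coordinates printed above. A standalone sketch of that conversion, as an illustration of the math rather than the actual library code:

```python
def cxcywh_to_xyxy(box, img_width, img_height):
    # box: normalized (center_x, center_y, width, height), each in [0, 1]
    cx, cy, w, h = box
    return (
        (cx - w / 2) * img_width,   # xmin
        (cy - h / 2) * img_height,  # ymin
        (cx + w / 2) * img_width,   # xmax
        (cy + h / 2) * img_height,  # ymax
    )

print(cxcywh_to_xyxy((0.5, 0.5, 0.2, 0.4), 100, 200))  # (40.0, 60.0, 60.0, 140.0)
```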
 | [
"bag",
"bottom",
"dress",
"hat",
"outer",
"shoes",
"top"
] |
alakxender/detr-resnet-50-dc5-dv-layout-sm1 |
# DETR ResNet-50 DC5 for Dhivehi Layout-Aware Document Parsing
A fine-tuned DETR (DEtection TRansformer) model based on `facebook/detr-resnet-50-dc5`, trained on a custom COCO-style dataset for layout-aware document understanding in Dhivehi and similar documents. The model can detect key structural elements such as headings, authorship, paragraphs, and text lines — with awareness of document reading direction (LTR/RTL).
## Model Summary
- **Base Model:** facebook/detr-resnet-50-dc5
- **Dataset:** Custom COCO-format document layout dataset (`coco-dv-layout`)
- **Categories:**
- `layout-analysis-QvA6`, `author`, `caption`, `columns`, `date`, `footnote`, `heading`, `paragraph`, `picture`, `textline`
- **Reading Direction Support:** Left-to-Right (LTR) and Right-to-Left (RTL) documents
- **Backbone:** ResNet-50 DC5
---
## Usage
### Inference Script
```python
from transformers import pipeline
from PIL import Image
import torch

image = Image.open("ocr.png")

obj_detector = pipeline(
    "object-detection",
    model="alakxender/detr-resnet-50-dc5-dv-layout-sm1",
    device=torch.device("cuda:0" if torch.cuda.is_available() else "cpu"),
    use_fast=True
)

results = obj_detector(image)
print(results)
```
### Test Script:
```python
import argparse
import json

import numpy as np
import torch
from PIL import Image, ImageDraw
from transformers import pipeline

parser = argparse.ArgumentParser()
parser.add_argument("--threshold", type=float, default=0.6)
# default=None (rather than True) so the reading direction can be auto-detected when the flag is not passed
parser.add_argument("--rtl", action="store_true", default=None, help="Process as right-to-left language document")
args = parser.parse_args()

threshold = args.threshold
is_rtl = args.rtl

# Set device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(f"Device set to use {device}")
print(f"Document direction: {'Auto-detect' if is_rtl is None else ('Right-to-Left' if is_rtl else 'Left-to-Right')}")

image = Image.open("ocr-bill.jpeg")

obj_detector = pipeline(
    "object-detection",
    model="alakxender/detr-resnet-50-dc5-dv-layout-sm1",
    device=device,
    use_fast=True  # Set use_fast=True to avoid slow processor warning
)

results = obj_detector(image)
print(results)
# Define colors for different labels
category_colors = {
    "author": (0, 255, 0),      # Green
    "caption": (0, 0, 255),     # Blue
    "columns": (255, 255, 0),   # Yellow
    "date": (255, 0, 255),      # Magenta
    "footnote": (0, 255, 255),  # Cyan
    "heading": (128, 0, 0),     # Dark Red
    "paragraph": (0, 128, 0),   # Dark Green
    "picture": (0, 0, 128),     # Dark Blue
    "textline": (128, 128, 0)   # Olive
}

# Define document element hierarchy (lower value = higher priority)
element_priority = {
    "heading": 1,
    "author": 2,
    "date": 3,
    "columns": 4,
    "paragraph": 5,
    "textline": 6,
    "picture": 7,
    "caption": 8,
    "footnote": 9
}
def detect_text_direction(results, threshold=0.6):
    """
    Attempt to automatically detect if the document is RTL based on detected text elements.
    This is a heuristic approach - for production use, consider using language detection.
    """
    # Filter by confidence threshold
    filtered_results = [r for r in results if r['score'] > threshold]

    # Focus on text elements (textline, paragraph, heading)
    text_elements = [r for r in filtered_results if r['label'] in ['textline', 'paragraph', 'heading']]

    if not text_elements:
        return False  # Default to LTR if no text elements

    # Get coordinates
    coordinates = []
    for r in text_elements:
        box = list(r['box'].values())
        if len(box) == 4:
            x1, y1, x2, y2 = box
            width = x2 - x1
            # Store element with its position info
            coordinates.append({
                'xmin': x1,
                'xmax': x2,
                'width': width,
                'x_center': (x1 + x2) / 2
            })

    if not coordinates:
        return False  # Default to LTR

    # Analyze the horizontal distribution of elements
    image_width = max([c['xmax'] for c in coordinates])

    # Calculate the average center position relative to image width
    avg_center_position = sum([c['x_center'] for c in coordinates]) / len(coordinates)
    relative_position = avg_center_position / image_width

    # If elements tend to be more on the right side, it might be RTL
    # This is a simple heuristic - a more sophisticated approach would use OCR or language detection
    is_rtl_detected = relative_position > 0.55  # Slight bias to right side suggests RTL

    print(f"Auto-detected document direction: {'Right-to-Left' if is_rtl_detected else 'Left-to-Right'}")
    print(f"Average element center position: {relative_position:.2f} of document width")

    return is_rtl_detected
def get_reading_order(results, threshold=0.6, rtl=is_rtl):
    """
    Sort detection results in natural reading order for both LTR and RTL documents:
    1. First by element priority (headings first)
    2. Then by vertical position (top to bottom)
    3. For elements with similar y-values, sort by horizontal position based on text direction
    """
    # Filter by confidence threshold
    filtered_results = [r for r in results if r['score'] > threshold]

    # If no manual RTL flag is set, try to auto-detect
    if rtl is None:
        rtl = detect_text_direction(results, threshold)

    # Group text lines by their vertical position
    # Text lines within ~20 pixels vertically are considered on the same line
    y_tolerance = 20

    # Let's first check the structure of box to understand its keys
    if filtered_results and 'box' in filtered_results[0]:
        box_keys = filtered_results[0]['box'].keys()
        print(f"Box structure keys: {box_keys}")

        # Extract coordinates based on the box format
        # Assuming box format is {'xmin', 'ymin', 'xmax', 'ymax'} or similar
        if 'ymin' in box_keys:
            y_key, height_key = 'ymin', None
            x_key = 'xmin'
        elif 'top' in box_keys:
            y_key, height_key = 'top', 'height'
            x_key = 'left'
        else:
            print("Unknown box format, defaulting to list unpacking")
            # Default case using list unpacking method
            y_key, x_key, height_key = None, None, None
    else:
        print("No box format detected, defaulting to list unpacking")
        y_key, x_key, height_key = None, None, None

    # Separate heading and non-heading elements
    structural_elements = []
    content_elements = []

    for r in filtered_results:
        if r['label'] in ["heading", "author", "date"]:
            structural_elements.append(r)
        else:
            content_elements.append(r)

    # Extract coordinate functions based on the format we have
    def get_y(element):
        if y_key:
            return element['box'][y_key]
        else:
            # If we don't know the format, assume box values() returns [xmin, ymin, xmax, ymax]
            return list(element['box'].values())[1]  # ymin is typically the second value

    def get_x(element):
        if x_key:
            return element['box'][x_key]
        else:
            # If we don't know the format, assume box values() returns [xmin, ymin, xmax, ymax]
            return list(element['box'].values())[0]  # xmin is typically the first value

    def get_x_max(element):
        box_values = list(element['box'].values())
        if len(box_values) >= 4:
            return box_values[2]  # xmax is typically the third value
        return get_x(element)  # fallback

    def get_y_center(element):
        if y_key and height_key:
            return element['box'][y_key] + (element['box'][height_key] / 2)
        else:
            # If using list format [xmin, ymin, xmax, ymax]
            box_values = list(element['box'].values())
            return (box_values[1] + box_values[3]) / 2  # (ymin + ymax) / 2

    # Sort structural elements by priority first, then by y position
    sorted_structural = sorted(
        structural_elements,
        key=lambda x: (
            element_priority.get(x['label'], 999),
            get_y(x)
        )
    )

    # Group content elements that may be in the same row (similar y-coordinate)
    rows = []
    for element in content_elements:
        y_center = get_y_center(element)

        # Check if this element belongs to an existing row
        found_row = False
        for row in rows:
            row_y_centers = [get_y_center(e) for e in row]
            row_y_center = sum(row_y_centers) / len(row_y_centers)

            if abs(y_center - row_y_center) < y_tolerance:
                row.append(element)
                found_row = True
                break

        # If not found in any existing row, create a new row
        if not found_row:
            rows.append([element])

    # Sort elements within each row according to reading direction (left-to-right or right-to-left)
    for row in rows:
        if rtl:
            # For RTL, sort from right to left (descending x values)
            row.sort(key=lambda x: get_x(x), reverse=True)
        else:
            # For LTR, sort from left to right (ascending x values)
            row.sort(key=lambda x: get_x(x))

    # Sort rows by y position (top to bottom)
    rows.sort(key=lambda row: sum(get_y_center(e) for e in row) / len(row))

    # Flatten the rows into a single list
    sorted_content = [element for row in rows for element in row]

    # Combine structural and content elements
    return sorted_structural + sorted_content
def plot_results(image, results, threshold=threshold, save_path='output.jpg', rtl=is_rtl):
    # Convert image to appropriate format if it's not already a PIL Image
    if not isinstance(image, Image.Image):
        image = Image.fromarray(np.uint8(image))

    draw = ImageDraw.Draw(image)
    width, height = image.size

    # If rtl is None (not explicitly specified), try to auto-detect
    if rtl is None:
        rtl = detect_text_direction(results, threshold)

    # Get results in reading order
    ordered_results = get_reading_order(results, threshold, rtl)

    # Create a list to store formatted results
    formatted_results = []

    # Add order number to visualize the detection sequence
    for i, result in enumerate(ordered_results):
        label = result['label']
        box = list(result['box'].values())
        score = result['score']

        # Make sure box has exactly 4 values
        if len(box) == 4:
            x1, y1, x2, y2 = tuple(box)
        else:
            print(f"Warning: Unexpected box format for {label}: {box}")
            continue

        color = category_colors.get(label, (255, 255, 255))  # Default to white if label not found

        # Draw bounding box and labels
        draw.rectangle((x1, y1, x2, y2), outline=color, width=2)

        # Add order number to visualize the reading sequence
        draw.text((x1 + 5, y1 - 20), f'#{i+1}', fill=(255, 255, 255))

        # For RTL languages, draw indicators differently
        if rtl and label in ['textline', 'paragraph', 'heading']:
            draw.text((x1 + 5, y1 - 10), f'{label} (RTL)', fill=color)
            # Draw arrow showing reading direction (right to left)
            arrow_y = y1 - 5
            draw.line([(x2 - 20, arrow_y), (x1 + 20, arrow_y)], fill=color, width=1)
            draw.polygon([(x1 + 20, arrow_y - 3), (x1 + 20, arrow_y + 3), (x1 + 15, arrow_y)], fill=color)
        else:
            draw.text((x1 + 5, y1 - 10), label, fill=color)

        draw.text((x1 + 5, y1 + 10), f'{score:.2f}', fill='green' if score > 0.7 else 'red')

        # Add result to formatted list with order index
        formatted_results.append({
            "order_index": i,
            "label": label,
            "is_rtl": rtl if label in ['textline', 'paragraph', 'heading'] else False,
            "score": float(score),
            "bbox": {
                "x1": float(x1),
                "y1": float(y1),
                "x2": float(x2),
                "y2": float(y2)
            }
        })

    image.save(save_path)

    # Save results to JSON file with RTL information
    with open('results.json', 'w') as f:
        json.dump({
            "document_direction": "rtl" if rtl else "ltr",
            "elements": formatted_results
        }, f, indent=2)

    return image
if len(results) > 0:  # Only plot if there are results
    # If RTL flag not set, try to auto-detect
    if not hasattr(args, 'rtl') or args.rtl is None:
        is_rtl = detect_text_direction(results)

    plot_results(image, results, rtl=is_rtl)
    print(f"Processing complete. Document interpreted as {'RTL' if is_rtl else 'LTR'}")
else:
    print("No objects detected in the image")
```
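The row-grouping heuristic used in `get_reading_order` can be exercised on its own. A simplified sketch, with elements reduced to bare `y_center` dicts and the same 20-pixel tolerance as the script:

```python
def group_rows(elements, y_tolerance=20):
    # Group elements whose vertical centers fall within y_tolerance of a row's
    # running mean, then order rows top-to-bottom (the script's heuristic, simplified).
    rows = []
    for el in elements:
        for row in rows:
            row_mean = sum(e['y_center'] for e in row) / len(row)
            if abs(el['y_center'] - row_mean) < y_tolerance:
                row.append(el)
                break
        else:
            rows.append([el])
    rows.sort(key=lambda row: sum(e['y_center'] for e in row) / len(row))
    return rows

rows = group_rows([{'y_center': 10}, {'y_center': 100}, {'y_center': 18}])
print([len(r) for r in rows])  # [2, 1]
```

The elements at y=10 and y=18 land in one row (within tolerance), while y=100 starts a new one.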
---
## Output Example
- **Visual Output**: Bounding boxes with labels and order
- **JSON Output:**
```json
{
  "document_direction": "rtl",
  "elements": [
    {
      "order_index": 0,
      "label": "heading",
      "is_rtl": true,
      "score": 0.97,
      "bbox": {
        "x1": 120.5,
        "y1": 65.2,
        "x2": 620.4,
        "y2": 120.7
      }
    }
  ]
}
```
---
## Training Summary
- **Training script**: Uses Hugging Face `Trainer` API
- **Eval Strategy**: `steps` with `MeanAveragePrecision` via `torchmetrics`
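`MeanAveragePrecision` matches predicted boxes to ground truth by intersection-over-union. The core IoU computation it relies on can be sketched as follows; this is a standalone illustration, not the `torchmetrics` internals:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 0.14285714285714285 (i.e. 1/7)
```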
--- | [
"layout-analysis-qva6",
"author",
"caption",
"columns",
"date",
"footnote",
"heading",
"paragraph",
"picture",
"textline"
] |
mrpae/segmentation_experiment |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segmentation_experiment
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
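With `lr_scheduler_type: linear` and no warmup, the Trainer decays the learning rate from `0.0001` to zero over the run; from the step/epoch pairs in the results table, this run is roughly 625 optimizer steps per epoch, so about 6250 steps total. A minimal sketch of that schedule:

```python
def linear_lr(step, total_steps, base_lr=1e-4):
    """Linear decay from base_lr to 0, as in lr_scheduler_type: linear (no warmup)."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

print(linear_lr(0, 6250))     # 0.0001 at step 0
print(linear_lr(3125, 6250))  # 5e-05 halfway through
print(linear_lr(6250, 6250))  # 0.0 at the final step
```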
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3662 | 1.6 | 1000 | 1.3265 |
| 1.2665 | 3.2 | 2000 | 1.2749 |
| 1.1957 | 4.8 | 3000 | 1.1709 |
| 1.0923 | 6.4 | 4000 | 1.0864 |
| 1.0472 | 8.0 | 5000 | 1.0626 |
| 0.9876 | 9.6 | 6000 | 1.0280 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
salmu/output |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 375
- num_epochs: 30
- mixed_precision_training: Native AMP
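This run pairs a cosine schedule with 375 warmup steps: the learning rate ramps up linearly, then follows a half-cosine down to zero over the remaining steps (about 7500 total here: 30 epochs at 250 steps each, per the results table). A minimal sketch:

```python
import math

def cosine_lr(step, total_steps, base_lr=1e-4, warmup_steps=375):
    """Cosine decay with linear warmup, as in lr_scheduler_type: cosine."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(375, 7500))   # 0.0001 at the end of warmup
print(cosine_lr(7500, 7500))  # ~0.0 at the final step
```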
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8809 | 1.0 | 250 | 1.6853 |
| 1.4938 | 2.0 | 500 | 1.3327 |
| 1.3403 | 3.0 | 750 | 1.2940 |
| 1.3485 | 4.0 | 1000 | 1.2512 |
| 1.2645 | 5.0 | 1250 | 1.2189 |
| 1.2061 | 6.0 | 1500 | 1.1517 |
| 1.1632 | 7.0 | 1750 | 1.1042 |
| 1.179 | 8.0 | 2000 | 1.0796 |
| 1.1558 | 9.0 | 2250 | 1.0567 |
| 1.1294 | 10.0 | 2500 | 1.0661 |
| 1.0644 | 11.0 | 2750 | 1.0214 |
| 1.0545 | 12.0 | 3000 | 1.0508 |
| 1.0455 | 13.0 | 3250 | 0.9904 |
| 1.0242 | 14.0 | 3500 | 0.9863 |
| 1.0021 | 15.0 | 3750 | 0.9897 |
| 0.9947 | 16.0 | 4000 | 0.9774 |
| 0.9456 | 17.0 | 4250 | 0.9485 |
| 0.9339 | 18.0 | 4500 | 0.9553 |
| 0.9481 | 19.0 | 4750 | 0.9431 |
| 0.8992 | 20.0 | 5000 | 0.9348 |
| 0.8792 | 21.0 | 5250 | 0.9265 |
| 0.8946 | 22.0 | 5500 | 0.9271 |
| 0.8913 | 23.0 | 5750 | 0.9249 |
| 0.8813 | 24.0 | 6000 | 0.9072 |
| 0.885 | 25.0 | 6250 | 0.9115 |
| 0.8477 | 26.0 | 6500 | 0.9091 |
| 0.8792 | 27.0 | 6750 | 0.9045 |
| 0.8816 | 28.0 | 7000 | 0.9034 |
| 0.8633 | 29.0 | 7250 | 0.9005 |
| 0.8921 | 30.0 | 7500 | 0.9102 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
coinvariant/dert-resnet50-finetuned |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dert-resnet50-finetuned
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5913 | 1.0 | 750 | 1.4025 |
| 1.3884 | 2.0 | 1500 | 1.2689 |
| 1.3243 | 3.0 | 2250 | 1.2102 |
| 1.2227 | 4.0 | 3000 | 1.1668 |
| 1.1613 | 5.0 | 3750 | 1.1580 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
verontsik/HW3_DL_model |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HW3_DL_model
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 150 | 1.9194 |
| No log | 2.0 | 300 | 1.6750 |
| No log | 3.0 | 450 | 1.6363 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
andresnamm/andres-resnet-50_fashionpedia |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# andres-resnet-50_fashionpedia
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6095 | 1.5873 | 200 | 1.4550 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
MarieJP/detr_fashionpedia |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_fashionpedia
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"shirt, blouse",
"top, t-shirt, sweatshirt",
"sweater",
"cardigan"
] |
fedirky/detr-fashionpedia |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-fashionpedia
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8007
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
kaidittm/results |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5219 | 1.0 | 150 | 1.5095 |
| 1.4556 | 2.0 | 300 | 1.4327 |
| 1.377 | 3.0 | 450 | 1.3390 |
| 1.3398 | 4.0 | 600 | 1.2920 |
| 1.3035 | 5.0 | 750 | 1.2615 |
| 1.2632 | 6.0 | 900 | 1.2491 |
| 1.2236 | 7.0 | 1050 | 1.2270 |
| 1.2066 | 8.0 | 1200 | 1.2137 |
| 1.1853 | 9.0 | 1350 | 1.1804 |
| 1.1686 | 10.0 | 1500 | 1.1939 |
| 1.1531 | 11.0 | 1650 | 1.1614 |
| 1.1388 | 12.0 | 1800 | 1.1582 |
| 1.1157 | 13.0 | 1950 | 1.1663 |
| 1.0953 | 14.0 | 2100 | 1.1516 |
| 1.0903 | 15.0 | 2250 | 1.1507 |
| 1.0834 | 16.0 | 2400 | 1.1479 |
| 1.0779 | 17.0 | 2550 | 1.1305 |
| 1.0651 | 18.0 | 2700 | 1.1332 |
| 1.0607 | 19.0 | 2850 | 1.1350 |
| 1.0567 | 20.0 | 3000 | 1.1380 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
Fidan56/detr_fashion_ckpt |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_fashion_ckpt
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
mrpae/segmentation_experiment2 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segmentation_experiment2
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3546 | 1.6 | 1000 | 1.2940 |
| 1.2118 | 3.2 | 2000 | 1.2191 |
| 1.129 | 4.8 | 3000 | 1.1202 |
| 1.1204 | 6.4 | 4000 | 1.1027 |
| 1.059 | 8.0 | 5000 | 1.0284 |
| 1.0167 | 9.6 | 6000 | 1.0003 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
MarilinA1/results |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"3",
"3",
"2",
"2"
] |
kaidittm/imgdetection_model |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imgdetection_model
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4992 | 1.0 | 75 | 2.4205 |
| 2.0497 | 2.0 | 150 | 1.9037 |
| 1.7749 | 3.0 | 225 | 1.6897 |
| 1.6143 | 4.0 | 300 | 1.5540 |
| 1.5326 | 5.0 | 375 | 1.4655 |
| 1.468 | 6.0 | 450 | 1.3991 |
| 1.4381 | 7.0 | 525 | 1.4099 |
| 1.4019 | 8.0 | 600 | 1.3662 |
| 1.3626 | 9.0 | 675 | 1.3520 |
| 1.3426 | 10.0 | 750 | 1.3188 |
| 1.3249 | 11.0 | 825 | 1.2764 |
| 1.2988 | 12.0 | 900 | 1.2777 |
| 1.288 | 13.0 | 975 | 1.2534 |
| 1.2687 | 14.0 | 1050 | 1.2531 |
| 1.2655 | 15.0 | 1125 | 1.2834 |
| 1.2692 | 16.0 | 1200 | 1.2308 |
| 1.2642 | 17.0 | 1275 | 1.2403 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
Murr123/test |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
kuubikus/detr |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
leynessa/detection_model_checkpoints |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detection_model_checkpoints
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
Pl0uf/detr-fashion-4cat |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-fashion-4cat
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7384 | 1.0 | 300 | 1.6380 |
| 1.3931 | 2.0 | 600 | 1.4684 |
| 1.3187 | 3.0 | 900 | 1.3165 |
| 1.3685 | 4.0 | 1200 | 1.2539 |
| 1.2105 | 5.0 | 1500 | 1.1905 |
| 1.2659 | 6.0 | 1800 | 1.1981 |
| 1.1682 | 7.0 | 2100 | 1.1874 |
| 1.1649 | 8.0 | 2400 | 1.1931 |
| 1.1522 | 9.0 | 2700 | 1.1604 |
| 1.1317 | 10.0 | 3000 | 1.1260 |
| 1.0819 | 11.0 | 3300 | 1.1122 |
| 1.0025 | 12.0 | 3600 | 1.1030 |
| 0.985 | 13.0 | 3900 | 1.0857 |
| 0.9731 | 14.0 | 4200 | 1.0724 |
| 1.0007 | 15.0 | 4500 | 1.0754 |
| 0.9768 | 16.0 | 4800 | 1.0502 |
| 1.0687 | 17.0 | 5100 | 1.0470 |
| 1.0269 | 18.0 | 5400 | 1.0400 |
| 1.0892 | 19.0 | 5700 | 1.0273 |
| 0.9363 | 20.0 | 6000 | 1.0302 |
| 1.0073 | 21.0 | 6300 | 1.0330 |
| 1.002 | 22.0 | 6600 | 1.0335 |
| 0.9418 | 23.0 | 6900 | 1.0266 |
| 0.9004 | 24.0 | 7200 | 1.0192 |
| 0.9459 | 25.0 | 7500 | 1.0213 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
jv2lja/trainer_output |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 1.9656 |
| No log | 2.0 | 250 | 1.7091 |
| No log | 3.0 | 375 | 1.6466 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
anderor/detr-fashion |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-fashion
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
QaneBatyss23/detr-fashionpedia |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-fashionpedia
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6296 | 1.0 | 75 | 1.6497 |
| 1.52 | 2.0 | 150 | 1.5349 |
| 1.4591 | 3.0 | 225 | 1.4861 |
| 1.3667 | 4.0 | 300 | 1.4379 |
| 1.3715 | 5.0 | 375 | 1.3941 |
| 1.3052 | 6.0 | 450 | 1.3743 |
| 1.2863 | 7.0 | 525 | 1.3524 |
| 1.2776 | 8.0 | 600 | 1.3335 |
| 1.2615 | 9.0 | 675 | 1.3339 |
| 1.2802 | 10.0 | 750 | 1.3404 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
| [
"accessories",
"bags",
"clothing",
"shoes"
] |
Genereux-akotenou/yolos-headwear |
# YOLO-Face-4-KYC
A fine-tuned YOLO model for detecting facial features and personal headwear, including glasses, hats, and masks, optimized for identity verification and KYC (Know Your Customer) applications. The model is built on the YOLOS architecture and trained on a filtered and relabeled subset of the [Fashionpedia dataset](https://huggingface.co/datasets/detection-datasets/fashionpedia), focusing exclusively on categories relevant to liveness verification and facial compliance checks. For details of the implementation, see the source code [here](https://github.com/Genereux-akotenou/Yolo-Face-4-Kyc).
## Supported Categories
The model detects the following classes:
```python
CATEGORIES = [
'face',
'glasses',
'hat',
'mask',
'headband',
'head covering'
]
```
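When fine-tuning, a category list like this is typically wired into the model config as `id2label`/`label2id` mappings so that predicted class indices decode back to names. A minimal sketch:

```python
CATEGORIES = ['face', 'glasses', 'hat', 'mask', 'headband', 'head covering']

# Label mappings in the form expected by transformers model configs
id2label = {i: name for i, name in enumerate(CATEGORIES)}
label2id = {name: i for i, name in id2label.items()}

print(id2label[1])       # 'glasses'
print(label2id['mask'])  # 3
```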
<img src="https://raw.githubusercontent.com/genereux-akotenou/Yolo-Face-4-Kyc/master/docs-utils/img1.png" style="width: 15em;"/> | [
"glasses",
"hat",
"headband, head covering, hair accessory",
"hood"
] |