Column schema (name: dtype, observed value range):

model_type: stringclasses (5 values)
model: stringlengths (12 to 62)
AVG: float64 (0.03 to 0.71)
CG: float64 (0 to 0.68)
EL: float64 (0 to 0.77)
FA: float64 (0 to 0.62)
HE: float64 (0 to 0.83)
MC: float64 (0 to 0.95)
MR: float64 (0 to 0.95)
MT: float64 (0.19 to 0.86)
NLI: float64 (0 to 0.97)
QA: float64 (0 to 0.77)
RC: float64 (0 to 0.94)
SUM: float64 (0 to 0.29)
aio_char_f1: float64 (0 to 0.9)
alt-e-to-j_bert_score_ja_f1: float64 (0 to 0.88)
alt-e-to-j_bleu_ja: float64 (0 to 16)
alt-e-to-j_comet_wmt22: float64 (0.2 to 0.92)
alt-j-to-e_bert_score_en_f1: float64 (0 to 0.96)
alt-j-to-e_bleu_en: float64 (0 to 20.1)
alt-j-to-e_comet_wmt22: float64 (0.17 to 0.89)
chabsa_set_f1: float64 (0 to 0.77)
commonsensemoralja_exact_match: float64 (0 to 0.94)
jamp_exact_match: float64 (0 to 1)
janli_exact_match: float64 (0 to 1)
jcommonsenseqa_exact_match: float64 (0 to 0.98)
jemhopqa_char_f1: float64 (0 to 0.71)
jmmlu_exact_match: float64 (0 to 0.81)
jnli_exact_match: float64 (0 to 0.94)
jsem_exact_match: float64 (0 to 0.96)
jsick_exact_match: float64 (0 to 0.93)
jsquad_char_f1: float64 (0 to 0.94)
jsts_pearson: float64 (-0.35 to 0.94)
jsts_spearman: float64 (-0.6 to 0.91)
kuci_exact_match: float64 (0 to 0.93)
mawps_exact_match: float64 (0 to 0.95)
mbpp_code_exec: float64 (0 to 0.68)
mbpp_pylint_check: float64 (0 to 0.99)
mmlu_en_exact_match: float64 (0 to 0.86)
niilc_char_f1: float64 (0 to 0.7)
wiki_coreference_set_f1: float64 (0 to 0.4)
wiki_dependency_set_f1: float64 (0 to 0.89)
wiki_ner_set_f1: float64 (0 to 0.33)
wiki_pas_set_f1: float64 (0 to 0.57)
wiki_reading_char_f1: float64 (0 to 0.94)
wikicorpus-e-to-j_bert_score_ja_f1: float64 (0 to 0.88)
wikicorpus-e-to-j_bleu_ja: float64 (0 to 24)
wikicorpus-e-to-j_comet_wmt22: float64 (0.18 to 0.87)
wikicorpus-j-to-e_bert_score_en_f1: float64 (0 to 0.93)
wikicorpus-j-to-e_bleu_en: float64 (0 to 15.9)
wikicorpus-j-to-e_comet_wmt22: float64 (0.17 to 0.79)
xlsum_ja_bert_score_ja_f1: float64 (0 to 0.79)
xlsum_ja_bleu_ja: float64 (0 to 10.2)
xlsum_ja_rouge1: float64 (0 to 53.4)
xlsum_ja_rouge2: float64 (0 to 29.2)
xlsum_ja_rouge2_scaling: float64 (0 to 0.29)
xlsum_ja_rougeLsum: float64 (0 to 45.3)
architecture: stringclasses (12 values)
precision: stringclasses (3 values)
license: stringclasses (14 values)
params: float64 (0 to 70.6)
likes: int64 (0 to 6.19k)
revision: stringclasses (1 value)
num_few_shot: int64 (0 to 4)
add_special_tokens: stringclasses (2 values)
llm_jp_eval_version: stringclasses (1 value)
vllm_version: stringclasses (1 value)
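The schema above fixes a column order, and each record in the dump below is simply a row's values flattened in that order. A minimal sketch (plain Python, using an illustrative subset of the 67 columns and the leading values of the first record) of folding a flattened record back into a dict:

```python
# Illustrative subset of the columns, in schema order (full 67-column list omitted for brevity).
COLUMNS = ["model_type", "model", "AVG", "CG", "EL", "FA"]

# Leading values of the first flattened record in the dump.
flat = [
    "🀝 : base merges and moerges",
    "sthenno/tempesthenno-ppo-ckpt40",
    0.5509, 0.5783, 0.2066, 0.1484,
]

def fold(columns, values):
    """Pair each flattened value with its column name, in order."""
    if len(columns) != len(values):
        raise ValueError("record length does not match schema")
    return dict(zip(columns, values))

record = fold(COLUMNS, flat)
print(record["model"], record["AVG"])  # sthenno/tempesthenno-ppo-ckpt40 0.5509
```

The same folding applies to full records; the only subtlety is that some records omit the `license` field, so a real loader would have to handle 66-value rows as well.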
Rows (one per line, comma-separated, columns in schema order; the license field is blank where the record omits it):

model_type,model,AVG,CG,EL,FA,HE,MC,MR,MT,NLI,QA,RC,SUM,aio_char_f1,alt-e-to-j_bert_score_ja_f1,alt-e-to-j_bleu_ja,alt-e-to-j_comet_wmt22,alt-j-to-e_bert_score_en_f1,alt-j-to-e_bleu_en,alt-j-to-e_comet_wmt22,chabsa_set_f1,commonsensemoralja_exact_match,jamp_exact_match,janli_exact_match,jcommonsenseqa_exact_match,jemhopqa_char_f1,jmmlu_exact_match,jnli_exact_match,jsem_exact_match,jsick_exact_match,jsquad_char_f1,jsts_pearson,jsts_spearman,kuci_exact_match,mawps_exact_match,mbpp_code_exec,mbpp_pylint_check,mmlu_en_exact_match,niilc_char_f1,wiki_coreference_set_f1,wiki_dependency_set_f1,wiki_ner_set_f1,wiki_pas_set_f1,wiki_reading_char_f1,wikicorpus-e-to-j_bert_score_ja_f1,wikicorpus-e-to-j_bleu_ja,wikicorpus-e-to-j_comet_wmt22,wikicorpus-j-to-e_bert_score_en_f1,wikicorpus-j-to-e_bleu_en,wikicorpus-j-to-e_comet_wmt22,xlsum_ja_bert_score_ja_f1,xlsum_ja_bleu_ja,xlsum_ja_rouge1,xlsum_ja_rouge2,xlsum_ja_rouge2_scaling,xlsum_ja_rougeLsum,architecture,precision,license,params,likes,revision,num_few_shot,add_special_tokens,llm_jp_eval_version,vllm_version
🀝 : base merges and moerges,sthenno/tempesthenno-ppo-ckpt40,0.5509,0.5783,0.2066,0.1484,0.5539,0.8582,0.82,0.8385,0.7456,0.3431,0.8597,0.107,0.4451,0.8506,10.8592,0.9016,0.9527,15.8864,0.8828,0.2066,0.895,0.6264,0.825,0.9383,0.3225,0.6718,0.7572,0.7027,0.8169,0.8597,0.8876,0.8589,0.7412,0.82,0.5783,0.9157,0.436,0.2617,0.0151,0.0043,0,0.0044,0.7182,0.8018,8.4,0.824,0.8952,9.1475,0.7457,0.6979,2.8228,28.3771,10.7037,0.107,24.9181,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,4,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,sthenno/tempesthenno-ppo-ckpt40,0.6461,0.5783,0.567,0.2613,0.7421,0.8806,0.896,0.848,0.775,0.5448,0.9065,0.107,0.5432,0.8642,12.6658,0.909,0.9542,17.3334,0.8843,0.567,0.8978,0.6322,0.825,0.9535,0.5905,0.7153,0.8558,0.7778,0.7843,0.9065,0.8944,0.8703,0.7906,0.896,0.5783,0.9157,0.7688,0.5008,0.0796,0.3575,0,0.0742,0.7953,0.8291,11.0791,0.841,0.9046,10.9118,0.7576,0.6979,2.8228,28.3771,10.7037,0.107,24.9181,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,4,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,sthenno/tempesthenno-icy-0130,0.5653,0.6386,0.2144,0.1489,0.6989,0.8552,0.818,0.84,0.7343,0.3231,0.8445,0.1028,0.392,0.8482,10.5467,0.902,0.9531,16.0387,0.8832,0.2144,0.899,0.6322,0.8069,0.9312,0.2953,0.6651,0.7744,0.6458,0.8123,0.8445,0.8901,0.8591,0.7353,0.818,0.6386,0.9578,0.7328,0.2819,0.0245,0.0127,0.0177,0.0012,0.6886,0.7997,8.418,0.8245,0.8968,9.2462,0.7502,0.6963,2.8097,27.0511,10.2924,0.1028,23.7965,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,8,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,sthenno/tempesthenno-icy-0130,0.6475,0.6386,0.5668,0.2699,0.7353,0.8802,0.894,0.8468,0.7665,0.5186,0.9026,0.1028,0.5226,0.8629,12.2905,0.9074,0.9532,16.3792,0.8833,0.5668,0.8978,0.6063,0.7986,0.9517,0.5735,0.7055,0.8562,0.7746,0.7966,0.9026,0.8879,0.8593,0.791,0.894,0.6386,0.9578,0.7651,0.4596,0.0731,0.3969,0.0088,0.084,0.7865,0.8259,10.7835,0.8385,0.9044,10.8281,0.7581,0.6963,2.8097,27.0511,10.2924,0.1028,23.7965,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,8,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,CultriX/Qwen2.5-14B-Ultimav2,0.6471,0.5863,0.5772,0.2611,0.743,0.8871,0.894,0.8478,0.7761,0.5333,0.9062,0.1063,0.5433,0.8633,12.4239,0.9089,0.9537,16.6646,0.8842,0.5772,0.9053,0.6322,0.8222,0.9553,0.579,0.7167,0.8558,0.779,0.7914,0.9062,0.8965,0.8691,0.8008,0.894,0.5863,0.9438,0.7693,0.4775,0.079,0.3559,0,0.0704,0.8002,0.8285,11.1655,0.8394,0.9052,11.1491,0.7585,0.6973,2.8953,28.0711,10.6358,0.1063,24.5705,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,3,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,CultriX/Qwen2.5-14B-Ultimav2,0.5349,0.5843,0.1906,0.1538,0.4204,0.8575,0.79,0.8395,0.7495,0.326,0.8655,0.1063,0.4184,0.8514,11.1536,0.9031,0.952,15.9733,0.8809,0.1906,0.8975,0.6351,0.8139,0.933,0.3455,0.6603,0.7605,0.7216,0.8167,0.8655,0.8904,0.8606,0.742,0.79,0.5843,0.9438,0.1805,0.214,0.0173,0.0119,0.0088,0.0037,0.7271,0.8036,8.4054,0.826,0.8958,9.4151,0.7481,0.6973,2.8953,28.0711,10.6358,0.1063,24.5705,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,3,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,sthenno/tempesthenno-nuslerp-0124,0.5696,0.6446,0.2545,0.1531,0.6916,0.8519,0.802,0.8395,0.7275,0.347,0.8504,0.103,0.3968,0.847,10.8865,0.9006,0.953,15.7372,0.8836,0.2545,0.8993,0.6207,0.8097,0.9249,0.3847,0.6566,0.7617,0.6326,0.8127,0.8504,0.8935,0.8638,0.7315,0.802,0.6446,0.9679,0.7266,0.2596,0.0226,0.0076,0.0354,0,0.7002,0.799,8.2549,0.8233,0.897,9.3208,0.7503,0.697,2.8219,27.1078,10.3114,0.103,23.7136,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,4,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,sthenno/tempesthenno-nuslerp-0124,0.6496,0.6446,0.5681,0.2723,0.7342,0.8804,0.892,0.847,0.7673,0.5306,0.9057,0.103,0.5204,0.8618,12.5638,0.907,0.9534,16.4968,0.8836,0.5681,0.8998,0.5977,0.8,0.9535,0.589,0.7038,0.8603,0.7721,0.8066,0.9057,0.8967,0.8704,0.7878,0.892,0.6446,0.9679,0.7646,0.4824,0.0798,0.38,0.0265,0.0865,0.7889,0.8266,11.0373,0.8383,0.9049,10.9919,0.759,0.697,2.8219,27.1078,10.3114,0.103,23.7136,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,4,main,4,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,jpacifico/Chocolatine-2-14B-Instruct-v2.0b3,0.6443,0.6245,0.5628,0.2649,0.7366,0.8828,0.896,0.8423,0.7552,0.5085,0.9021,0.1113,0.4918,0.8569,11.5854,0.903,0.952,16.16,0.8811,0.5628,0.9038,0.6034,0.8014,0.9482,0.5803,0.7071,0.8463,0.7652,0.7597,0.9021,0.8813,0.8513,0.7963,0.896,0.6245,0.9799,0.766,0.4534,0.0683,0.3796,0.0619,0.0696,0.7454,0.8191,10.2383,0.8293,0.9021,10.6095,0.756,0.7073,2.6089,29.8721,11.1473,0.1113,25.7545,Qwen2ForCausalLM,float16,apache-2.0,14.766,2,main,4,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,jpacifico/Chocolatine-2-14B-Instruct-v2.0b3,0.546,0.6245,0.2347,0.1487,0.6796,0.7855,0.7,0.8298,0.7461,0.3203,0.8257,0.1113,0.3715,0.841,9.2345,0.8876,0.9518,15.7119,0.8819,0.2347,0.6801,0.6408,0.8083,0.933,0.3341,0.6628,0.7539,0.6951,0.8326,0.8257,0.8892,0.8528,0.7436,0.7,0.6245,0.9799,0.6965,0.2552,0.0175,0.005,0.0442,0,0.6768,0.7896,7.3442,0.8029,0.8948,9.3008,0.7467,0.7073,2.6089,29.8721,11.1473,0.1113,25.7545,Qwen2ForCausalLM,float16,apache-2.0,14.766,2,main,0,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Oxyge1-33B,0.5055,0.2831,0.1063,0.1213,0.5663,0.8747,0.794,0.6899,0.7654,0.3803,0.8844,0.0946,0.4306,0.7644,10.9448,0.7332,0.8698,15.2391,0.661,0.1063,0.9033,0.6523,0.7944,0.9339,0.2681,0.5521,0.8184,0.7973,0.7646,0.8844,0.8949,0.8758,0.7868,0.794,0.2831,0.4739,0.5805,0.4421,0.0281,0.0069,0,0.0054,0.5661,0.7345,8.5113,0.7025,0.8634,9.1806,0.6628,0.6881,2.7576,25.2368,9.4588,0.0946,21.9712,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,2,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Oxyge1-33B,0.6314,0.2831,0.5842,0.2733,0.7761,0.8963,0.942,0.8428,0.8097,0.5387,0.9041,0.0946,0.5532,0.8639,13.2869,0.9074,0.955,17.794,0.8852,0.5842,0.8973,0.6724,0.8417,0.958,0.5672,0.7515,0.8969,0.7803,0.8573,0.9041,0.8882,0.8755,0.8335,0.942,0.2831,0.4739,0.8007,0.4958,0.0527,0.3809,0,0.1191,0.8136,0.8228,11.0997,0.828,0.9019,11.0915,0.7507,0.6881,2.7576,25.2368,9.4588,0.0946,21.9712,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,2,main,4,False,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,RDson/CoderO1-DeepSeekR1-Coder-32B-Preview,0.595,0.5823,0.54,0.2511,0.6771,0.826,0.884,0.7509,0.7304,0.4275,0.8216,0.0538,0.3777,0.8234,11.8941,0.8511,0.8941,15.057,0.711,0.54,0.8474,0.6236,0.8472,0.9071,0.5281,0.6306,0.7712,0.6982,0.712,0.8216,0.8593,0.8317,0.7234,0.884,0.5823,0.7068,0.7237,0.3768,0.0358,0.349,0.1327,0.0862,0.6519,0.79,9.3874,0.7992,0.8616,9.9709,0.6422,0.6486,2.064,15.9258,5.3819,0.0538,13.4817,Qwen2ForCausalLM,bfloat16,,32.764,9,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,RDson/CoderO1-DeepSeekR1-Coder-32B-Preview,0.4227,0.5823,0.2144,0.1032,0.573,0.7811,0.054,0.7393,0.5751,0.2278,0.7459,0.0538,0.2469,0.727,8.7327,0.7943,0.9164,13.1113,0.7899,0.2144,0.8179,0.5431,0.7819,0.8677,0.2298,0.4761,0.7038,0.2532,0.5937,0.7459,0.8345,0.8111,0.6577,0.054,0.5823,0.7068,0.6698,0.2069,0.0169,0.0227,0.0796,0.008,0.3886,0.696,7.3584,0.7223,0.8617,8.8189,0.6508,0.6486,2.064,15.9258,5.3819,0.0538,13.4817,Qwen2ForCausalLM,bfloat16,,32.764,9,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,Nohobby/Q2.5-Qwetiapin-32B,0.5389,0.3534,0.2121,0.1166,0.6465,0.8771,0.82,0.7287,0.7395,0.417,0.9024,0.115,0.4489,0.7518,11.7625,0.7116,0.9275,17.072,0.8081,0.2121,0.9061,0.6236,0.7833,0.9392,0.4397,0.7026,0.8213,0.803,0.6661,0.9024,0.8951,0.8751,0.7861,0.82,0.3534,0.5683,0.5904,0.3625,0.0083,0.0062,0,0.0016,0.567,0.7364,9.3074,0.7042,0.8753,10.0637,0.6909,0.6985,3.5171,28.0632,11.4932,0.115,24.4295,Qwen2ForCausalLM,bfloat16,,32.76,3,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,Nohobby/Q2.5-Qwetiapin-32B,0.6446,0.3534,0.5733,0.2834,0.791,0.9043,0.942,0.8326,0.803,0.5782,0.914,0.115,0.5844,0.86,13.3195,0.8984,0.9558,17.4251,0.8859,0.5733,0.9123,0.6954,0.8208,0.9643,0.6252,0.7704,0.8895,0.8106,0.7985,0.914,0.8996,0.8772,0.8364,0.942,0.3534,0.5683,0.8116,0.525,0.0456,0.3349,0.1327,0.1076,0.7964,0.8011,12.7202,0.7865,0.9087,11.6484,0.7596,0.6985,3.5171,28.0632,11.4932,0.115,24.4295,Qwen2ForCausalLM,bfloat16,,32.76,3,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,nitky/RoguePlanet-DeepSeek-R1-Qwen-32B,0.6108,0.0141,0.5864,0.294,0.7735,0.8999,0.922,0.8501,0.7904,0.5635,0.9176,0.1077,0.591,0.8667,13.7547,0.9112,0.9554,17.5627,0.8868,0.5864,0.9161,0.6839,0.7944,0.9589,0.5714,0.7523,0.8919,0.8011,0.7808,0.9176,0.8956,0.8771,0.8248,0.922,0.0141,0.1044,0.7948,0.528,0.0209,0.3822,0.1681,0.0797,0.8189,0.8325,11.4595,0.8414,0.9058,11.2513,0.7609,0.697,2.6115,28.9837,10.7751,0.1077,21.3962,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,7,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,nitky/RoguePlanet-DeepSeek-R1-Qwen-32B,0.2928,0.0141,0.2417,0.1503,0,0.8087,0,0.7989,0.3221,0.2188,0.5585,0.1077,0.295,0.8268,11.3538,0.8639,0.937,15.3529,0.8349,0.2417,0.7067,0.5948,0.0069,0.9383,0.1514,0,0.5431,0.065,0.4006,0.5585,0.8938,0.8733,0.7812,0,0.0141,0.1044,0,0.21,0.0179,0.0111,0.0298,0.0088,0.6838,0.7793,8.8274,0.7895,0.8875,9.1808,0.7074,0.697,2.6115,28.9837,10.7751,0.1077,21.3962,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,7,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,Ba2han/QwQenSeek-coder,0.4845,0,0.1847,0.1499,0.3706,0.8474,0.812,0.8354,0.7285,0.3801,0.9041,0.1167,0.342,0.8431,11.2372,0.8946,0.9545,16.8997,0.8851,0.1847,0.8865,0.6063,0.7236,0.9151,0.4001,0.3417,0.8279,0.786,0.6986,0.9041,0.8858,0.852,0.7405,0.812,0,0,0.3995,0.3981,0.0083,0.0001,0.0177,0.0065,0.7168,0.7934,8.6787,0.8176,0.8962,9.6747,0.7443,0.7067,2.4888,32.0163,11.6771,0.1167,27.1202,Qwen2ForCausalLM,bfloat16,,32.76,5,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,Ba2han/QwQenSeek-coder,0.6036,0,0.5534,0.2709,0.7831,0.8859,0.924,0.8454,0.7812,0.5558,0.9231,0.1167,0.549,0.8618,12.8448,0.9051,0.9562,17.4485,0.8867,0.5534,0.8788,0.6868,0.8097,0.9535,0.5721,0.7633,0.8907,0.7967,0.7221,0.9231,0.9017,0.8742,0.8255,0.924,0,0,0.8029,0.5463,0.0439,0.3421,0.1062,0.082,0.7802,0.8375,13.6905,0.8355,0.906,11.7521,0.7544,0.7067,2.4888,32.0163,11.6771,0.1167,27.1202,Qwen2ForCausalLM,bfloat16,,32.76,5,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,TeetouchQQ/model_mergev2,0.3657,0.01,0.2101,0.131,0.4311,0.7721,0.006,0.8158,0.6619,0.2247,0.6543,0.1058,0.1979,0.8384,11.0437,0.8852,0.9485,14.99,0.8758,0.2101,0.6383,0.5948,0.7125,0.9267,0.2958,0.632,0.6923,0.5549,0.755,0.6543,0.8954,0.8718,0.7512,0.006,0.01,0.0783,0.2302,0.1806,0.0203,0.0137,0.0276,0.0042,0.5891,0.7681,8.2187,0.7764,0.8897,9.1268,0.7259,0.6975,2.4494,28.9195,10.5844,0.1058,24.8971,Qwen2ForCausalLM,bfloat16,,32.764,1,main,0,False,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,TeetouchQQ/model_mergev2,0.6028,0.01,0.5961,0.2709,0.7514,0.8846,0.922,0.846,0.7831,0.5448,0.9157,0.1058,0.5417,0.8609,12.1821,0.9065,0.9536,17.2996,0.885,0.5961,0.9013,0.6782,0.7667,0.9491,0.5653,0.7232,0.8817,0.7986,0.7903,0.9157,0.9005,0.8786,0.8033,0.922,0.01,0.0783,0.7796,0.5274,0.0202,0.3688,0.1593,0.0475,0.7589,0.8219,10.3336,0.8337,0.904,10.8448,0.7587,0.6975,2.4494,28.9195,10.5844,0.1058,24.8971,Qwen2ForCausalLM,bfloat16,,32.764,1,main,4,False,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,TeetouchQQ/model_mergev2,0.3657,0.01,0.2101,0.131,0.4311,0.7721,0.006,0.8158,0.6619,0.2247,0.6543,0.1058,0.1979,0.8384,11.0437,0.8852,0.9485,14.99,0.8758,0.2101,0.6383,0.5948,0.7125,0.9267,0.2958,0.632,0.6923,0.5549,0.755,0.6543,0.8954,0.8718,0.7512,0.006,0.01,0.0783,0.2302,0.1806,0.0203,0.0137,0.0276,0.0042,0.5891,0.7681,8.2187,0.7764,0.8897,9.1268,0.7259,0.6975,2.4494,28.9195,10.5844,0.1058,24.8971,Qwen2ForCausalLM,bfloat16,,32.764,1,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,TeetouchQQ/model_mergev2,0.6028,0.01,0.5961,0.2709,0.7514,0.8846,0.922,0.846,0.7831,0.5448,0.9157,0.1058,0.5417,0.8609,12.1821,0.9065,0.9536,17.2996,0.885,0.5961,0.9013,0.6782,0.7667,0.9491,0.5653,0.7232,0.8817,0.7986,0.7903,0.9157,0.9005,0.8786,0.8033,0.922,0.01,0.0783,0.7796,0.5274,0.0202,0.3688,0.1593,0.0475,0.7589,0.8219,10.3336,0.8337,0.904,10.8448,0.7587,0.6975,2.4494,28.9195,10.5844,0.1058,24.8971,Qwen2ForCausalLM,bfloat16,,32.764,1,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,hotmailuser/Deeepseek-QwenSlerp4-32B,0.6308,0.2189,0.5809,0.2983,0.7777,0.8999,0.936,0.8476,0.8133,0.5377,0.9146,0.1135,0.5443,0.8628,13.0584,0.9053,0.9552,17.7952,0.8866,0.5809,0.9063,0.7328,0.8292,0.9589,0.5693,0.7546,0.8948,0.8018,0.808,0.9146,0.8912,0.8768,0.8344,0.936,0.2189,0.4418,0.8007,0.4996,0.0744,0.4031,0.1681,0.05,0.7958,0.8294,11.2483,0.8363,0.9067,10.9829,0.7621,0.7032,2.7268,30.8653,11.3458,0.1135,26.4612,Qwen2ForCausalLM,bfloat16,mit,32.764,0,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,hotmailuser/Deeepseek-QwenSlerp4-32B,0.497,0.2189,0.1594,0.1638,0.4866,0.8746,0.608,0.8382,0.7548,0.3804,0.8693,0.1135,0.3825,0.845,11.1952,0.898,0.9532,16.2638,0.884,0.1594,0.9038,0.6351,0.7792,0.9294,0.3351,0.4538,0.8003,0.798,0.7615,0.8693,0.7491,0.8812,0.7905,0.608,0.2189,0.4418,0.5193,0.4238,0.0231,0.0073,0.059,0.0059,0.7238,0.7956,8.8319,0.8192,0.8985,9.9968,0.7514,0.7032,2.7268,30.8653,11.3458,0.1135,26.4612,Qwen2ForCausalLM,bfloat16,mit,32.764,0,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,RDson/WomboCombo-R1-Coder-14B-Preview,0.4415,0.1827,0.2615,0.1357,0.5354,0.7948,0.326,0.8077,0.6939,0.2418,0.769,0.1078,0.2962,0.8131,9.0207,0.8455,0.9499,14.4527,0.8792,0.2615,0.7883,0.5805,0.6972,0.9071,0.1798,0.3906,0.7564,0.649,0.7867,0.769,0.8844,0.8604,0.689,0.326,0.1827,0.3554,0.6803,0.2493,0.006,0.0163,0.0442,0.0007,0.6111,0.7659,8.0378,0.7712,0.8898,9.1553,0.7351,0.7004,2.6722,29.2873,10.7788,0.1078,25.3713,Qwen2ForCausalLM,bfloat16,,14.77,3,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,RDson/WomboCombo-R1-Coder-14B-Preview,0.6094,0.1827,0.5885,0.2742,0.7268,0.8804,0.888,0.8453,0.771,0.5322,0.9071,0.1078,0.5227,0.8592,12.0033,0.9065,0.9531,16.7884,0.8837,0.5885,0.8933,0.6236,0.7806,0.9589,0.5561,0.6981,0.8587,0.7841,0.8082,0.9071,0.891,0.8619,0.7889,0.888,0.1827,0.3554,0.7555,0.5178,0.0572,0.3648,0.1416,0.0535,0.754,0.8219,10.5989,0.8353,0.9033,10.7862,0.7555,0.7004,2.6722,29.2873,10.7788,0.1078,25.3713,Qwen2ForCausalLM,bfloat16,,14.77,3,main,4,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,deepseek-ai/DeepSeek-R1-Distill-Qwen-7B,0.3928,0,0.3136,0.0614,0.4666,0.5645,0.714,0.6503,0.5876,0.1993,0.742,0.0214,0.1075,0.7382,4.0252,0.6127,0.9161,10.7788,0.7958,0.3136,0.6568,0.4425,0.6208,0.5979,0.3482,0.4225,0.5464,0.6951,0.633,0.742,0.7162,0.6238,0.4387,0.714,0,0,0.5107,0.1421,0.0121,0.0774,0.0354,0.0146,0.1673,0.6749,4.1897,0.5365,0.8659,7.619,0.656,0.5765,0.5591,9.0398,2.1444,0.0214,7.2766,Qwen2ForCausalLM,bfloat16,mit,7.616,545,main,4,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,deepseek-ai/DeepSeek-R1-Distill-Qwen-7B,0.1494,0,0,0.0311,0.104,0.3131,0,0.5913,0.0948,0.0715,0.4158,0.0214,0.0413,0.6945,2.4152,0.5372,0.8963,7.582,0.7428,0,0.505,0.0029,0.2917,0.1859,0.1085,0.0356,0.0399,0.0739,0.0658,0.4158,-0.0007,0.0066,0.2484,0,0,0,0.1723,0.0647,0,0,0,0,0.1557,0.6426,2.7038,0.4935,0.8402,4.9795,0.5916,0.5765,0.5591,9.0398,2.1444,0.0214,7.2766,Qwen2ForCausalLM,bfloat16,mit,7.616,545,main,0,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Saka-7.2B,0.0302,0,0,0,0,0,0,0.3309,0,0,0.0011,0,0,0,0,0.3224,0,0,0.3407,0,0,0,0,0,0,0,0,0,0,0.0011,0,0,0,0,0,0.8534,0,0,0,0,0,0,0,0.0013,0,0.3298,0.0004,0,0.3306,0,0,0,0,0,0,LlamaForCausalLM,float16,apache-2.0,7.292,0,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Saka-7.2B,0.0302,0,0,0,0,0,0,0.3309,0,0,0.0011,0,0,0,0,0.3224,0,0,0.3407,0,0,0,0,0,0,0,0,0,0,0.0011,0,0,0,0,0,0.8534,0,0,0,0,0,0,0,0,0,0.3301,0,0,0.3306,0,0,0,0,0,0,LlamaForCausalLM,float16,apache-2.0,7.292,0,main,4,False,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,Sakalti/Saka-14B,0.6517,0.6446,0.5705,0.2718,0.7387,0.8825,0.896,0.8466,0.7666,0.5404,0.9047,0.1061,0.5204,0.8618,12.5093,0.9071,0.9535,16.6096,0.8835,0.5705,0.9013,0.6034,0.8014,0.9508,0.5969,0.7103,0.8615,0.7734,0.7932,0.9047,0.8862,0.8618,0.7954,0.896,0.6446,0.9819,0.7671,0.5039,0.0822,0.375,0.0531,0.0766,0.7718,0.8262,10.7134,0.8378,0.904,10.7985,0.758,0.701,2.8247,28.0076,10.6125,0.1061,24.4518,Qwen2ForCausalLM,float16,,14.766,7,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,Sakalti/Saka-14B,0.573,0.6446,0.2295,0.1537,0.6993,0.8582,0.816,0.8379,0.7371,0.3623,0.8586,0.1061,0.4027,0.8447,10.3957,0.8985,0.9533,15.9594,0.8836,0.2295,0.8983,0.6293,0.8111,0.9357,0.352,0.6651,0.7769,0.6465,0.8216,0.8586,0.8901,0.8575,0.7407,0.816,0.6446,0.9819,0.7334,0.3322,0.0239,0.0109,0.0442,0,0.6894,0.7959,8.1108,0.8187,0.8966,9.2848,0.7506,0.701,2.8247,28.0076,10.6125,0.1061,24.4518,Qwen2ForCausalLM,float16,,14.766,7,main,0,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,simplescaling/s1-32B,0.5429,0.6305,0.1443,0.1425,0.6409,0.8675,0.746,0.787,0.7591,0.3278,0.8661,0.0603,0.3325,0.8377,10.8997,0.8802,0.9227,13.6057,0.7869,0.1443,0.9041,0.6121,0.8042,0.9339,0.2548,0.6461,0.735,0.7999,0.8445,0.8661,0.8778,0.8708,0.7645,0.746,0.6305,0.9639,0.6357,0.3961,0.0181,0.0097,0.0088,0.0046,0.6712,0.7944,8.6515,0.8163,0.8749,8.8676,0.6645,0.6599,1.4725,19.2312,6.0348,0.0603,16.1667,Qwen2ForCausalLM,float32,apache-2.0,32.764,288,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,simplescaling/s1-32B,0.6596,0.6305,0.5782,0.3129,0.7707,0.8932,0.93,0.8094,0.8138,0.5496,0.9067,0.0603,0.5645,0.8635,13.3414,0.9088,0.9239,17.3607,0.8121,0.5782,0.9021,0.6925,0.8514,0.9571,0.5864,0.747,0.8882,0.7917,0.8449,0.9067,0.8835,0.8742,0.8205,0.93,0.6305,0.9639,0.7944,0.498,0.1109,0.3882,0.1593,0.1063,0.7998,0.8264,11.2291,0.839,0.8717,10.8441,0.6777,0.6599,1.4725,19.2312,6.0348,0.0603,16.1667,Qwen2ForCausalLM,float32,apache-2.0,32.764,288,main,4,False,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,jpacifico/Chocolatine-2-14B-Instruct-v2.0.3,0.6435,0.6044,0.5584,0.2564,0.7423,0.8857,0.9,0.8447,0.7634,0.5003,0.9093,0.1132,0.5168,0.8592,12.2367,0.9062,0.9523,16.6313,0.8811,0.5584,0.9033,0.6408,0.8056,0.9517,0.5683,0.7139,0.85,0.7715,0.7493,0.9093,0.8876,0.8563,0.8021,0.9,0.6044,0.9859,0.7707,0.4159,0.0618,0.3657,0.0177,0.0711,0.7657,0.823,10.408,0.8334,0.9033,10.7928,0.7581,0.7071,2.8978,30.5362,11.3282,0.1132,26.331,Qwen2ForCausalLM,float16,apache-2.0,14.766,11,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,jpacifico/Chocolatine-2-14B-Instruct-v2.0.3,0.5125,0.6044,0.1861,0.1452,0.4343,0.8584,0.642,0.7766,0.7466,0.3017,0.8286,0.1132,0.3635,0.8017,9.3773,0.7524,0.9524,15.8243,0.8833,0.1861,0.8915,0.6149,0.8111,0.9366,0.3743,0.6461,0.7749,0.7241,0.808,0.8286,0.8875,0.8533,0.7471,0.642,0.6044,0.9859,0.2225,0.1674,0.0213,0.0068,0,0.0016,0.696,0.7683,8.0359,0.7221,0.8952,9.3679,0.7486,0.7071,2.8978,30.5362,11.3282,0.1132,26.331,Qwen2ForCausalLM,float16,apache-2.0,14.766,11,main,0,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Saka-1.5B,0.4465,0.0582,0.4633,0.1799,0.4716,0.7023,0.584,0.7416,0.4619,0.4175,0.8009,0.0305,0.4617,0.8086,11.211,0.7713,0.9062,12.979,0.7766,0.4633,0.8016,0.3477,0.5625,0.8043,0.3391,0.4572,0.4244,0.5814,0.3935,0.8009,0.432,0.395,0.5011,0.584,0.0582,0.1365,0.486,0.4517,0.0017,0.2446,0.0354,0.0379,0.5797,0.7582,7.121,0.7205,0.8799,8.7137,0.6979,0.1736,0.5176,8.325,3.0408,0.0305,7.0197,Qwen2ForCausalLM,float16,,1.777,1,main,4,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Saka-1.5B,0.2389,0.0582,0,0.0525,0.3242,0.1895,0.362,0.5216,0.3645,0.0778,0.6469,0.0305,0.0648,0.6481,0.7462,0.5129,0.7826,2.1174,0.6194,0,0.0023,0.4138,0.525,0.3146,0.0085,0.3499,0.2186,0.3112,0.3538,0.6469,0,0,0.2516,0.362,0.0582,0.1365,0.2985,0.1601,0,0,0,0,0.2625,0.5044,0.6374,0.441,0.7685,1.4852,0.513,0.1736,0.5176,8.325,3.0408,0.0305,7.0197,Qwen2ForCausalLM,float16,,1.777,1,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/SJT-1.5B-Alpha,0.2722,0.0763,0,0.0439,0.127,0.4595,0.24,0.6691,0.5568,0.075,0.6978,0.049,0.0571,0.739,3.1994,0.737,0.8948,7.7461,0.7602,0,0.5321,0.3649,0.4903,0.4978,0.092,0.1669,0.5908,0.6572,0.6805,0.6978,0.0816,0.0774,0.3488,0.24,0.0763,0.2189,0.0871,0.0759,0,0,0,0,0.2195,0.6761,2.5006,0.6226,0.8089,3.8472,0.5567,0.6343,1.6757,14.1247,4.9096,0.049,12.2804,Qwen2ForCausalLM,float16,apache-2.0,1.777,0,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/SJT-1.5B-Alpha,0.4399,0.0763,0.4214,0.0971,0.4972,0.6694,0.614,0.7695,0.552,0.2421,0.8512,0.049,0.2132,0.81,7.9348,0.831,0.9316,12.1181,0.8382,0.4214,0.6899,0.3793,0.5764,0.8043,0.268,0.4434,0.7231,0.6641,0.4169,0.8512,0.5175,0.5141,0.514,0.614,0.0763,0.2189,0.5511,0.2453,0.0046,0.1183,0.0088,0.0183,0.3355,0.7505,6.0729,0.7305,0.8735,7.5489,0.6782,0.6343,1.6757,14.1247,4.9096,0.049,12.2804,Qwen2ForCausalLM,float16,apache-2.0,1.777,0,main,4,False,v1.4.1,v0.6.3.post1
🟦 : RL-tuned (Preference optimization),FINGU-AI/FINGU-2.5-instruct-70B,0.4432,0.4498,0.3185,0.1686,0.5812,0.858,0.026,0.8528,0.2298,0.5424,0.6681,0.1796,0.6762,0.8632,13.4236,0.9092,0.959,18.427,0.8908,0.3185,0.8742,0.5402,0,0.9303,0.3583,0.5484,0.2502,0.0013,0.3574,0.6681,0.9036,0.8747,0.7696,0.026,0.4498,0.7108,0.614,0.5926,0,0.0117,0.0265,0.0069,0.7979,0.8448,13.2998,0.8521,0.9072,12.1269,0.7592,0.7365,6.4326,35.7224,17.9623,0.1796,31.3322,LlamaForCausalLM,bfloat16,mit,70.554,0,main,0,False,v1.4.1,v0.6.3.post1
🟦 : RL-tuned (Preference optimization),FINGU-AI/FINGU-2.5-instruct-70B,0.6739,0.4498,0.5817,0.3009,0.759,0.904,0.946,0.859,0.8139,0.7091,0.9105,0.1796,0.8243,0.8669,13.5317,0.9113,0.9598,18.8892,0.8919,0.5817,0.9211,0.6724,0.8958,0.9535,0.6548,0.7275,0.8365,0.8125,0.8522,0.9105,0.8954,0.874,0.8372,0.946,0.4498,0.7108,0.7905,0.648,0.0599,0.3962,0.1062,0.0619,0.8802,0.8613,17.4285,0.8599,0.9166,13.0656,0.7728,0.7365,6.4326,35.7224,17.9623,0.1796,31.3322,LlamaForCausalLM,bfloat16,mit,70.554,0,main,4,False,v1.4.1,v0.6.3.post1
🟦 : RL-tuned (Preference optimization),FINGU-AI/FINGU-2.5-instruct-32B-v1,0.5872,0.6767,0.2073,0.168,0.735,0.8731,0.782,0.8449,0.7363,0.412,0.9088,0.1153,0.3926,0.856,12.4439,0.9078,0.9541,16.0765,0.8851,0.2073,0.8995,0.6178,0.7528,0.9366,0.3981,0.7122,0.7679,0.798,0.7449,0.9088,0.8926,0.8735,0.7831,0.782,0.6767,0.9699,0.7578,0.4452,0,0.0044,0.0531,0.01,0.7723,0.8079,9.1155,0.8344,0.8984,9.6882,0.7521,0.705,3.1296,29.2822,11.5263,0.1153,25.2097,Qwen2ForCausalLM,bfloat16,mit,32.764,0,main,0,False,v1.4.1,v0.6.3.post1
🟦 : RL-tuned (Preference optimization),FINGU-AI/FINGU-2.5-instruct-32B-v1,0.6793,0.6767,0.5971,0.3025,0.7867,0.9015,0.932,0.8508,0.7995,0.5858,0.925,0.1153,0.6033,0.8674,13.4412,0.9107,0.9562,17.4854,0.887,0.5971,0.9111,0.704,0.8181,0.9598,0.5926,0.7679,0.8965,0.8056,0.7733,0.925,0.895,0.8789,0.8336,0.932,0.6767,0.9699,0.8055,0.5615,0.0133,0.4194,0.1416,0.1017,0.8364,0.8383,12.4772,0.8424,0.9083,11.8037,0.7631,0.705,3.1296,29.2822,11.5263,0.1153,25.2097,Qwen2ForCausalLM,bfloat16,mit,32.764,0,main,4,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Saka-7.6B,0.5386,0.0141,0.4783,0.1887,0.6668,0.844,0.798,0.7777,0.7407,0.434,0.8965,0.0853,0.387,0.8174,11.6781,0.8068,0.9504,15.7626,0.8787,0.4783,0.8615,0.6063,0.7472,0.9187,0.4866,0.6317,0.8443,0.7153,0.7905,0.8965,0.865,0.8414,0.7517,0.798,0.0141,0.002,0.7019,0.4283,0.0325,0.3367,0.0619,0.0747,0.4375,0.7708,8.6671,0.7187,0.8828,9.6991,0.7066,0.6815,2.3147,24.1901,8.537,0.0853,20.0686,Qwen2ForCausalLM,float16,,7.616,1,main,4,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Saka-7.6B,0.3798,0.0141,0.2375,0.0871,0.6245,0.5301,0.55,0.6295,0.4362,0.2764,0.7071,0.0853,0.2935,0.6957,7.1507,0.7318,0.8216,12.186,0.5476,0.2375,0.0974,0.5374,0.0569,0.8356,0.2287,0.5851,0.7576,0.0562,0.7727,0.7071,0.8848,0.852,0.6572,0.55,0.0141,0.002,0.6639,0.307,0.0009,0.0246,0.0016,0,0.4083,0.6727,6.7367,0.6801,0.8135,7.4401,0.5584,0.6815,2.3147,24.1901,8.537,0.0853,20.0686,Qwen2ForCausalLM,float16,,7.616,1,main,0,False,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,maldv/Qwenstein2.5-32B-Instruct,0.5619,0.4779,0.2019,0.143,0.6126,0.8716,0.848,0.8449,0.7497,0.4158,0.9081,0.1073,0.3931,0.8511,11.9648,0.9048,0.9544,16.2482,0.8862,0.2019,0.9038,0.6322,0.7681,0.9357,0.4386,0.6046,0.8127,0.803,0.7325,0.9081,0.8885,0.8686,0.7754,0.848,0.4779,0.6446,0.6206,0.4156,0.005,0.0089,0.0088,0.0031,0.6894,0.8052,9.2293,0.8338,0.8994,9.8968,0.7547,0.7006,3.2051,26.9863,10.7256,0.1073,23.6389,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,2,main,0,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,maldv/Qwenstein2.5-32B-Instruct,0.6593,0.4779,0.5861,0.2961,0.7851,0.8999,0.942,0.8512,0.8142,0.5721,0.9202,0.1073,0.5778,0.8658,13.1457,0.9107,0.9558,17.1323,0.8861,0.5861,0.9053,0.7213,0.8208,0.9625,0.6149,0.7636,0.8932,0.8081,0.8279,0.9202,0.8972,0.88,0.832,0.942,0.4779,0.6446,0.8066,0.5235,0.0396,0.4041,0.1239,0.0964,0.8166,0.8379,12.521,0.8436,0.9086,11.5639,0.7646,0.7006,3.2051,26.9863,10.7256,0.1073,23.6389,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,2,main,4,True,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/CodeSaka-7.6B,0.3096,0.0281,0.2383,0.0676,0.3837,0.5037,0.056,0.6614,0.4904,0.2201,0.7221,0.0339,0.109,0.7539,5.4566,0.6992,0.9027,9.1914,0.7601,0.2383,0.6478,0.3592,0.5597,0.5004,0.3887,0.3234,0.5731,0.5896,0.3702,0.7221,0.6777,0.6699,0.3629,0.056,0.0281,0.0763,0.4441,0.1627,0,0.0417,0.0619,0.0059,0.2286,0.6808,4.3093,0.5787,0.852,6.7678,0.6074,0.6133,1.6394,14.3824,3.4062,0.0339,11.6916,Qwen2ForCausalLM,float16,,7.616,0,main,4,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/CodeSaka-7.6B,0.1456,0.0281,0.0022,0.0252,0.054,0.1685,0,0.5284,0.3424,0.0701,0.3493,0.0339,0.0421,0.6734,1.7035,0.5661,0.8377,5.7029,0.5879,0.0022,0.0005,0.2471,0.3278,0.2449,0.1111,0.0172,0.4827,0.0745,0.5801,0.3493,0.1428,0.1411,0.26,0,0.0281,0.0763,0.0908,0.0572,0,0,0,0,0.1258,0.6358,2.4282,0.5083,0.7747,3.0901,0.4511,0.6133,1.6394,14.3824,3.4062,0.0339,11.6916,Qwen2ForCausalLM,float16,,7.616,0,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/MathSaka-7.6B,0.0999,0,0,0.01,0.0169,0.2924,0.032,0.3735,0.0649,0.0609,0.2392,0.0089,0.0302,0.5792,2.222,0.3799,0.7719,3.8787,0.4046,0,0.4384,0.0057,0.2931,0.1984,0.1167,0.0263,0.0078,0.0095,0.0085,0.2392,-0.0611,-0.1105,0.2403,0.032,0,0.3273,0.0075,0.0359,0,0,0,0,0.0498,0.5622,2.2657,0.3735,0.7446,2.7938,0.3361,0.533,1.4017,5.4879,0.887,0.0089,4.4833,Qwen2ForCausalLM,float16,,7.616,0,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/MathSaka-7.6B,0.2101,0,0.1763,0.0102,0.2188,0.3121,0.588,0.3463,0.2481,0.1502,0.2523,0.0089,0.0741,0.646,2.8737,0.442,0.8013,5.2254,0.4344,0.1763,0.4679,0.1695,0.5472,0.2136,0.2584,0.2705,0.145,0.1578,0.2208,0.2523,0.4137,0.3969,0.2549,0.588,0,0.3273,0.167,0.1181,0,0,0.0088,0,0.0421,0.5141,1.4154,0.2495,0.7178,2.015,0.2594,0.533,1.4017,5.4879,0.887,0.0089,4.4833,Qwen2ForCausalLM,float16,,7.616,0,main,4,False,v1.4.1,v0.6.3.post1
🟒 : pretrained,llm-jp/llm-jp-3-150m,0.1802,0,0.0417,0.0437,0.253,0.3166,0.01,0.4561,0.5299,0.1074,0.1985,0.0253,0.0507,0.6319,1.9801,0.478,0.8257,5.3909,0.471,0.0417,0.4992,0.3247,0.4972,0.2029,0.1871,0.2392,0.5534,0.6717,0.6024,0.1985,0,0,0.2476,0.01,0,0.0141,0.2668,0.0843,0.0008,0.0053,0.0088,0,0.2037,0.6048,0.6694,0.4251,0.7946,3.7239,0.4502,0.5956,1.5022,9.6713,2.5257,0.0253,8.0996,LlamaForCausalLM,bfloat16,apache-2.0,0.152,1,main,4,False,v1.4.1,v0.6.3.post1
🟒 : pretrained,llm-jp/llm-jp-3-150m,0.0568,0,0,0.0201,0,0,0,0.4233,0,0.0771,0.0788,0.0253,0.0445,0.6518,0.3972,0.4599,0.7519,0.1284,0.4822,0,0,0,0,0,0.1033,0,0,0,0,0.0788,-0.0425,-0.028,0,0,0,0.0141,0,0.0834,0,0,0,0,0.1003,0.6089,0.3367,0.387,0.749,0.0472,0.3642,0.5956,1.5022,9.6713,2.5257,0.0253,8.0996,LlamaForCausalLM,bfloat16,apache-2.0,0.152,1,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Saka-14B,0.573,0.6446,0.2295,0.1537,0.6993,0.8582,0.816,0.8379,0.7371,0.3623,0.8586,0.1061,0.4027,0.8447,10.3957,0.8985,0.9533,15.9594,0.8836,0.2295,0.8983,0.6293,0.8111,0.9357,0.352,0.6651,0.7769,0.6465,0.8216,0.8586,0.8901,0.8575,0.7407,0.816,0.6446,0.9819,0.7334,0.3322,0.0239,0.0109,0.0442,0,0.6894,0.7959,8.1108,0.8187,0.8966,9.2848,0.7506,0.701,2.8247,28.0076,10.6125,0.1061,24.4518,Qwen2ForCausalLM,float16,,14.766,7,main,0,False,v1.4.1,v0.6.3.post1
πŸ”Ά : fine-tuned,Sakalti/Saka-14B,0.6517,0.6446,0.5705,0.2718,0.7387,0.8825,0.896,0.8466,0.7666,0.5404,0.9047,0.1061,0.5204,0.8618,12.5093,0.9071,0.9535,16.6096,0.8835,0.5705,0.9013,0.6034,0.8014,0.9508,0.5969,0.7103,0.8615,0.7734,0.7932,0.9047,0.8862,0.8618,0.7954,0.896,0.6446,0.9819,0.7671,0.5039,0.0822,0.375,0.0531,0.0766,0.7718,0.8262,10.7134,0.8378,0.904,10.7985,0.758,0.701,2.8247,28.0076,10.6125,0.1061,24.4518,Qwen2ForCausalLM,float16,,14.766,7,main,4,False,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,CultriX/Qwen2.5-14B-HyperSeekv5,0.6287,0.4518,0.5605,0.2581,0.7342,0.8797,0.89,0.8405,0.7591,0.5404,0.9022,0.099,0.5108,0.8628,12.3521,0.9079,0.9439,16.7052,0.8622,0.5605,0.8983,0.6236,0.8236,0.9508,0.5861,0.704,0.8246,0.7639,0.7599,0.9022,0.8861,0.8569,0.7898,0.89,0.4518,0.7129,0.7643,0.5242,0.0964,0.3285,0.0147,0.0806,0.7704,0.8218,9.9688,0.8352,0.9021,10.4806,0.7566,0.6941,2.7383,25.529,9.9026,0.099,22.4373,Qwen2ForCausalLM,bfloat16,,14.766,1,main,4,True,v1.4.1,v0.6.3.post1
🀝 : base merges and moerges,CultriX/Qwen2.5-14B-HyperSeekv5,0.5489,0.4518,0.2294,0.1302,0.688,0.853,0.8,0.8026,0.7465,0.3844,0.8526,0.099,0.4015,0.842,9.3105,0.8956,0.9166,15.5229,0.7982,0.2294,0.8935,0.6322,0.8292,0.9267,0.3261,0.6653,0.772,0.6888,0.8102,0.8526,0.8827,0.8541,0.7387,0.8,0.4518,0.7129,0.7107,0.4255,0.0247,0.0083,0.0088,0.0008,0.6081,0.7922,7.6501,0.8119,0.8749,8.8208,0.7048,0.6941,2.7383,25.529,9.9026,0.099,22.4373,Qwen2ForCausalLM,bfloat16,,14.766,1,main,0,True,v1.4.1,v0.6.3.post1
🟒 : pretrained,llm-jp/llm-jp-3-980m,0.2908,0,0.364,0.1406,0.2544,0.3089,0.02,0.7217,0.5312,0.3909,0.4465,0.0203,0.488,0.808,8.6283,0.823,0.9121,10.2781,0.783,0.364,0.4917,0.3132,0.5,0.1841,0.3412,0.2544,0.5534,0.6717,0.6174,0.4465,0,0,0.2508,0.02,0,0.0904,0.2543,0.3435,0,0.2004,0,0.0084,0.4945,0.724,6.0661,0.6741,0.8546,7.0346,0.6069,0.6101,0.5272,13.8221,2.0424,0.0203,11.5575,LlamaForCausalLM,bfloat16,apache-2.0,0.99,3,main,4,False,v1.4.1,v0.6.3.post1
🟒 : pretrained,llm-jp/llm-jp-3-980m,0.0808,0,0,0.0382,0,0,0,0.4896,0.0002,0.1126,0.2282,0.0203,0.0978,0.6766,0.1335,0.5147,0.7639,0.318,0.5815,0,0,0,0,0,0.1173,0,0,0.0006,0.0002,0.2282,0.0183,0.0251,0.0001,0,0,0.0904,0,0.1227,0,0,0,0,0.191,0.6417,0.2052,0.437,0.761,0.2569,0.4251,0.6101,0.5272,13.8221,2.0424,0.0203,11.5575,LlamaForCausalLM,bfloat16,apache-2.0,0.99,3,main,0,False,v1.4.1,v0.6.3.post1
β­• : instruction-tuned,llm-jp/llm-jp-3-980m-instruct2,0.1737,0,0,0.0461,0.1152,0.2201,0,0.7417,0.2642,0.1182,0.3151,0.09,0.1884,0.8041,7.9959,0.8426,0.9207,10.6235,0.8097,0,0.4602,0.1695,0,0.0054,0.0188,0.0079,0.1039,0.6705,0.3769,0.3151,-0.056,-0.0463,0.1947,0,0,0,0.2225,0.1474,0,0,0,0,0.2306,0.7165,5.1155,0.682,0.8609,7.5758,0.6324,0.6861,2.1841,27.3893,9.0039,0.09,22.9684,LlamaForCausalLM,bfloat16,apache-2.0,0.99,2,main,0,False,v1.4.1,v0.6.3.post1
β­• : instruction-tuned,llm-jp/llm-jp-3-980m-instruct2,0.328,0,0.3911,0.1325,0.2382,0.3282,0.12,0.7618,0.5367,0.3846,0.6254,0.09,0.4424,0.8129,8.0806,0.8536,0.9182,10.6297,0.807,0.3911,0.5333,0.3391,0.5,0.2002,0.344,0.2316,0.5534,0.6736,0.6172,0.6254,0,0,0.251,0.12,0,0,0.2449,0.3673,0.005,0.1274,0,0.0166,0.5134,0.7463,6.1072,0.7269,0.8689,8.4786,0.6596,0.6861,2.1841,27.3893,9.0039,0.09,22.9684,LlamaForCausalLM,bfloat16,apache-2.0,0.99,2,main,4,False,v1.4.1,v0.6.3.post1
🟦 : RL-tuned (Preference optimization),llm-jp/llm-jp-3-980m-instruct3,0.3253,0,0.397,0.1309,0.2348,0.3279,0.124,0.765,0.5331,0.3788,0.6034,0.0835,0.438,0.8149,7.8943,0.8598,0.9183,10.5947,0.808,0.397,0.5321,0.3391,0.4819,0.2011,0.3287,0.2287,0.5534,0.6736,0.6174,0.6034,0,0,0.2505,0.124,0,0,0.2408,0.3696,0.0083,0.1258,0,0.0234,0.4968,0.7477,6.135,0.7303,0.8695,8.3773,0.6618,0.6843,1.6133,25.5995,8.3526,0.0835,21.7954,LlamaForCausalLM,bfloat16,apache-2.0,0.99,3,main,4,False,v1.4.1,v0.6.3.post1
🟦 : RL-tuned (Preference optimization),llm-jp/llm-jp-3-980m-instruct3,0.1688,0,0
0.047
0.1108
0.1784
0
0.7486
0.2465
0.1186
0.3234
0.0835
0.1393
0.8035
8.0845
0.8525
0.9217
10.5228
0.8115
0
0.357
0.1379
0
0
0.0862
0.0003
0.0838
0.6717
0.3389
0.3234
0
0
0.1783
0
0
0
0.2214
0.1304
0
0
0
0
0.2349
0.7153
4.9927
0.6965
0.8635
7.464
0.6338
0.6843
1.6133
25.5995
8.3526
0.0835
21.7954
LlamaForCausalLM
bfloat16
apache-2.0
0.99
3
main
0
False
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
llm-jp/llm-jp-3-3.7b-instruct3
0.4336
0
0.3947
0.2393
0.3357
0.5439
0.482
0.8221
0.4756
0.5287
0.8463
0.1008
0.6744
0.8509
11.2451
0.8952
0.9411
13.7129
0.8612
0.3947
0.7422
0.3477
0.5292
0.521
0.4341
0.3284
0.6064
0.6679
0.2267
0.8463
0.5467
0.5215
0.3685
0.482
0
0
0.343
0.4776
0.0177
0.3338
0.0088
0.0633
0.7726
0.7985
8.8945
0.8001
0.8955
9.5916
0.732
0.6954
1.9457
29.2573
10.074
0.1008
24.6475
LlamaForCausalLM
bfloat16
apache-2.0
3.783
1
main
4
False
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
llm-jp/llm-jp-3-3.7b-instruct3
0.2489
0
0
0.1123
0.1394
0.2551
0
0.7889
0.4601
0.2051
0.6769
0.1008
0.2627
0.8065
9.5494
0.8366
0.9405
13.4278
0.8595
0
0.5321
0.3391
0.5
0.0161
0.0723
0.0011
0.5534
0.2904
0.6174
0.6769
0.1331
0.1561
0.2172
0
0
0
0.2777
0.2801
0
0
0
0
0.5615
0.7542
6.8517
0.7484
0.8856
8.4913
0.7108
0.6954
1.9457
29.2573
10.074
0.1008
24.6475
LlamaForCausalLM
bfloat16
apache-2.0
3.783
1
main
0
False
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
llm-jp/llm-jp-3-3.7b-instruct2
0.2547
0
0
0.1061
0.1491
0.2917
0
0.7942
0.484
0.1894
0.6813
0.1056
0.2263
0.8202
9.9225
0.8554
0.9393
13.1602
0.8578
0
0.5316
0.3391
0.5028
0.0786
0.0674
0.009
0.5534
0.4072
0.6174
0.6813
-0.0006
-0.0375
0.2648
0
0
0
0.2892
0.2746
0
0
0
0
0.5305
0.7635
7.3174
0.7651
0.8801
8.4638
0.6984
0.6978
2.309
32.3451
10.5551
0.1056
26.7879
LlamaForCausalLM
bfloat16
apache-2.0
3.783
0
main
0
False
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
llm-jp/llm-jp-3-3.7b-instruct2
0.4322
0
0.3765
0.2464
0.3343
0.5427
0.49
0.8188
0.477
0.5153
0.8481
0.1056
0.6799
0.8506
11.149
0.8934
0.9414
13.5881
0.8604
0.3765
0.7525
0.3391
0.5264
0.5004
0.3854
0.3262
0.613
0.6824
0.2239
0.8481
0.5527
0.5187
0.3751
0.49
0
0
0.3423
0.4806
0.0223
0.3439
0.0354
0.0575
0.773
0.7944
8.8518
0.7924
0.8952
9.8054
0.729
0.6978
2.309
32.3451
10.5551
0.1056
26.7879
LlamaForCausalLM
bfloat16
apache-2.0
3.783
0
main
4
False
v1.4.1
v0.6.3.post1
🟒 : pretrained
llm-jp/llm-jp-3-440m
0.0651
0
0
0.0211
0
0.0004
0
0.4594
0.0004
0.075
0.1312
0.0291
0.0437
0.67
0.2424
0.5088
0.7659
0.2184
0.5256
0
0
0
0
0
0.1016
0
0
0.0019
0
0.1312
0.007
0.0059
0.0011
0
0
0.3373
0
0.0797
0
0
0
0
0.1057
0.6243
0.2347
0.4193
0.7624
0.1522
0.3839
0.5997
0.7066
9.2107
2.9063
0.0291
7.6895
LlamaForCausalLM
bfloat16
apache-2.0
0.447
0
main
0
False
v1.4.1
v0.6.3.post1
🟒 : pretrained
llm-jp/llm-jp-3-440m
0.2425
0
0.2485
0.0588
0.2527
0.3272
0.022
0.6068
0.5296
0.2726
0.3201
0.0291
0.2893
0.7733
6.9923
0.7668
0.8892
8.3245
0.6989
0.2485
0.5291
0.3563
0.5111
0.202
0.2959
0.2406
0.5534
0.6717
0.5553
0.3201
0
0
0.2506
0.022
0
0.3373
0.2648
0.2326
0.0042
0.052
0.0177
0
0.2202
0.6325
2.856
0.5082
0.8045
4.6172
0.4533
0.5997
0.7066
9.2107
2.9063
0.0291
7.6895
LlamaForCausalLM
bfloat16
apache-2.0
0.447
0
main
4
False
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
llm-jp/llm-jp-3-440m-instruct2
0.1272
0
0
0.0305
0.087
0.2346
0
0.5276
0.1282
0.0807
0.2373
0.0728
0.0621
0.6542
0.6665
0.5064
0.8699
6.5019
0.6797
0
0.5276
0.2011
0
0.0027
0.0816
0
0.1446
0.0745
0.2208
0.2373
0.0361
0.0379
0.1735
0
0
0
0.1739
0.0984
0
0
0
0
0.1526
0.622
0.7157
0.4417
0.8047
3.6554
0.4827
0.677
2.2099
26.8712
7.3172
0.0728
21.4412
LlamaForCausalLM
bfloat16
apache-2.0
0.447
0
main
0
False
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
llm-jp/llm-jp-3-440m-instruct2
0.2527
0
0.2959
0.068
0.2466
0.3314
0.018
0.6396
0.4478
0.2403
0.4193
0.0728
0.2821
0.7715
5.4872
0.7896
0.8848
6.354
0.6896
0.2959
0.5318
0.3391
0.4986
0.21
0.2279
0.2364
0.5534
0.6717
0.1762
0.4193
0
0
0.2524
0.018
0
0
0.2569
0.2109
0.0141
0.039
0.0177
0.0097
0.2595
0.6581
3.4257
0.5511
0.8241
5.3961
0.5281
0.677
2.2099
26.8712
7.3172
0.0728
21.4412
LlamaForCausalLM
bfloat16
apache-2.0
0.447
0
main
4
False
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
llm-jp/llm-jp-3-440m-instruct3
0.2524
0
0.2976
0.0633
0.248
0.3312
0.018
0.6499
0.4463
0.2342
0.4094
0.0786
0.2741
0.7702
5.3106
0.7958
0.886
6.3425
0.7017
0.2976
0.5321
0.3391
0.5014
0.21
0.241
0.2378
0.5534
0.6591
0.1784
0.4094
0
0
0.2516
0.018
0
0
0.2582
0.1875
0.0135
0.0342
0.0177
0.0044
0.2465
0.6642
3.5916
0.5662
0.8266
5.4703
0.536
0.678
1.765
27.4289
7.8603
0.0786
22.3521
LlamaForCausalLM
bfloat16
apache-2.0
0.447
1
main
4
False
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
llm-jp/llm-jp-3-440m-instruct3
0.0955
0
0
0.033
0.0728
0.0137
0
0.4769
0.1133
0.0299
0.2318
0.0786
0.029
0.6145
0.7456
0.4815
0.8261
4.0231
0.5775
0
0.0168
0.1983
0
0
0.0272
0
0.1434
0.0044
0.2202
0.2318
0.0708
0.0613
0.0244
0
0
0
0.1456
0.0335
0
0
0
0
0.165
0.5801
0.668
0.4297
0.7757
2.0127
0.4187
0.678
1.765
27.4289
7.8603
0.0786
22.3521
LlamaForCausalLM
bfloat16
apache-2.0
0.447
1
main
0
False
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
ibm-granite/granite-3.2-8b-instruct-preview
0.4892
0
0.4651
0.1858
0.5483
0.7302
0.662
0.8133
0.6429
0.3211
0.8939
0.1188
0.265
0.8531
11.3175
0.8845
0.9491
16.9101
0.8752
0.4651
0.7896
0.5374
0.6472
0.8204
0.4159
0.4691
0.6397
0.7191
0.6712
0.8939
0.8576
0.8197
0.5806
0.662
0
0
0.6275
0.2825
0.0335
0.2821
0.0619
0.0235
0.5281
0.7745
7.5798
0.7719
0.8851
8.7765
0.7215
0.7113
3.0717
35.6148
11.8813
0.1188
28.9839
GraniteForCausalLM
bfloat16
apache-2.0
8.171
69
main
4
False
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
ibm-granite/granite-3.2-8b-instruct-preview
0.385
0
0
0.0561
0.511
0.6196
0.432
0.8082
0.6214
0.1996
0.8686
0.1188
0.2062
0.8373
10.8899
0.8836
0.9464
14.4985
0.8689
0
0.7866
0.4971
0.6306
0.6309
0.1403
0.429
0.5904
0.7374
0.6517
0.8686
0.8171
0.7867
0.4412
0.432
0
0
0.5929
0.2522
0
0
0.0088
0
0.2717
0.7546
6.529
0.7636
0.8828
8.6463
0.7168
0.7113
3.0717
35.6148
11.8813
0.1188
28.9839
GraniteForCausalLM
bfloat16
apache-2.0
8.171
69
main
0
False
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
cognitivecomputations/Dolphin3.0-R1-Mistral-24B
0.2078
0.0783
0.0052
0.0409
0.0784
0.3782
0.096
0.6772
0.2079
0.091
0.5636
0.0697
0.0722
0.7257
4.488
0.6622
0.918
9.5389
0.8085
0.0052
0.481
0.0517
0.0125
0.4013
0.103
0.009
0.0522
0.6919
0.231
0.5636
0.1772
0.1729
0.2523
0.096
0.0783
0.1305
0.1478
0.0978
0
0
0
0
0.2045
0.6668
3.4801
0.5909
0.8549
5.6017
0.647
0.6726
1.6885
20.6644
6.9802
0.0697
17.8733
MistralForCausalLM
bfloat16
23.572
166
main
0
True
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
cognitivecomputations/Dolphin3.0-R1-Mistral-24B
0.4487
0.0783
0.4018
0.0745
0.4713
0.6075
0.662
0.7579
0.6629
0.2824
0.8674
0.0697
0.1993
0.8131
8.6164
0.8257
0.9346
12.9278
0.8442
0.4018
0.6759
0.5833
0.6194
0.7498
0.4256
0.4072
0.7071
0.7039
0.7006
0.8674
0.7851
0.7485
0.3968
0.662
0.0783
0.1305
0.5353
0.2224
0.0097
0.1388
0.0354
0.0046
0.1838
0.7176
5.4166
0.6717
0.8766
8.2148
0.6898
0.6726
1.6885
20.6644
6.9802
0.0697
17.8733
MistralForCausalLM
bfloat16
23.572
166
main
4
True
v1.4.1
v0.6.3.post1
🀝 : base merges and moerges
suayptalha/Lamarckvergence-14B
0.6511
0.6325
0.57
0.2617
0.743
0.8856
0.908
0.8465
0.7702
0.5239
0.9092
0.1108
0.532
0.8619
12.4966
0.9073
0.953
16.4494
0.8827
0.57
0.9048
0.6264
0.7931
0.9526
0.5663
0.7159
0.8661
0.7828
0.7828
0.9092
0.9011
0.8738
0.7992
0.908
0.6325
0.99
0.7701
0.4735
0.0669
0.365
0.0265
0.0661
0.7841
0.8272
10.9736
0.8378
0.9048
11.0687
0.7582
0.7029
2.9024
29.0601
11.083
0.1108
25.3258
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
19
main
4
True
v1.4.1
v0.6.3.post1
🀝 : base merges and moerges
suayptalha/Lamarckvergence-14B
0.5732
0.6325
0.2551
0.1514
0.6605
0.8572
0.826
0.8393
0.7431
0.3567
0.8723
0.1108
0.4223
0.8476
10.5479
0.8997
0.9536
16.228
0.8844
0.2551
0.895
0.6178
0.8097
0.9339
0.3974
0.6653
0.7732
0.7064
0.8084
0.8723
0.8873
0.8538
0.7426
0.826
0.6325
0.99
0.6557
0.2505
0.0195
0.0149
0.0177
0.0009
0.704
0.8001
8.2596
0.8219
0.8967
9.5121
0.7511
0.7029
2.9024
29.0601
11.083
0.1108
25.3258
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
19
main
0
True
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-7B-v1.1
0.3891
0
0.2552
0.0481
0.4632
0.547
0.742
0.6342
0.5961
0.2265
0.7382
0.0293
0.1146
0.7126
3.8643
0.6048
0.9156
10.4985
0.7903
0.2552
0.6769
0.3908
0.6125
0.6023
0.4227
0.4002
0.5826
0.6951
0.6996
0.7382
0.7675
0.7349
0.3619
0.742
0
0.012
0.5262
0.1422
0
0.0569
0.0088
0.0136
0.1614
0.6399
4.1998
0.5036
0.8649
7.4403
0.6379
0.6071
1.1504
12.6498
2.9175
0.0293
10.3111
Qwen2ForCausalLM
bfloat16
mit
7.616
15
main
4
False
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-7B-v1.1
0.1283
0
0
0.0335
0
0.2608
0
0.5818
0.0842
0.0775
0.3438
0.0293
0.0434
0.681
2.676
0.5277
0.8973
8.3684
0.7409
0
0
0.2328
0
0.521
0.1191
0
0.0785
0
0.1098
0.3438
0.4023
0.3617
0.2615
0
0
0.012
0
0.07
0
0
0
0
0.1677
0.6374
3.3122
0.4868
0.8351
5.7518
0.5717
0.6071
1.1504
12.6498
2.9175
0.0293
10.3111
Qwen2ForCausalLM
bfloat16
mit
7.616
15
main
0
False
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-7B-v1.1
0.1228
0
0
0.0345
0
0.1998
0
0.5955
0.0784
0.0801
0.3314
0.0313
0.0419
0.686
2.5333
0.5398
0.9053
8.8619
0.7632
0
0
0.2184
0
0.454
0.1303
0
0.0711
0
0.1023
0.3314
0.7099
0.6575
0.1456
0
0
0.0161
0
0.0682
0
0
0
0
0.1723
0.6395
3.2519
0.4938
0.8369
5.6573
0.585
0.6104
1.1932
12.9563
3.1117
0.0313
10.4156
Qwen2ForCausalLM
bfloat16
mit
7.616
15
main
0
True
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-7B-v1.1
0.3888
0
0.2334
0.0589
0.4685
0.553
0.73
0.6393
0.5968
0.224
0.7417
0.0313
0.1134
0.7136
3.8763
0.6105
0.9161
10.5389
0.7924
0.2334
0.6748
0.3793
0.625
0.6059
0.4101
0.4058
0.5937
0.7014
0.6844
0.7417
0.7813
0.7434
0.3784
0.73
0
0.0161
0.5312
0.1486
0.0114
0.0587
0.0177
0.0113
0.1952
0.6393
4.1698
0.507
0.8669
7.6702
0.6471
0.6104
1.1932
12.9563
3.1117
0.0313
10.4156
Qwen2ForCausalLM
bfloat16
mit
7.616
15
main
4
True
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1
0.2303
0.004
0.0183
0.0487
0.1793
0.45
0.002
0.7392
0.3379
0.1167
0.5676
0.0699
0.0585
0.7883
6.1011
0.8022
0.9305
10.0258
0.8368
0.0183
0.6961
0.4167
0.5
0.3727
0.2042
0.0887
0.2325
0.262
0.2785
0.5676
0.4946
0.4681
0.2811
0.002
0.004
0.004
0.2698
0.0874
0
0
0
0
0.2437
0.7222
4.7517
0.6794
0.8612
6.0269
0.6385
0.6685
1.5698
20.3578
6.9846
0.0699
17.186
LlamaForCausalLM
bfloat16
mit
8.03
10
main
0
True
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1
0.4346
0.004
0.4061
0.1165
0.4599
0.6344
0.636
0.7867
0.6144
0.2308
0.8216
0.0699
0.2334
0.826
8.691
0.8603
0.9401
13.7073
0.857
0.4061
0.767
0.5086
0.525
0.7399
0.2008
0.3872
0.7293
0.7266
0.5825
0.8216
0.7077
0.7188
0.3964
0.636
0.004
0.004
0.5327
0.2581
0.006
0.1797
0.0531
0.0443
0.2992
0.7508
7.8466
0.7126
0.8888
9.2194
0.717
0.6685
1.5698
20.3578
6.9846
0.0699
17.186
LlamaForCausalLM
bfloat16
mit
8.03
10
main
4
True
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
maldv/Awqward2.5-32B-Instruct
0.6665
0.6426
0.5849
0.2811
0.7768
0.8979
0.94
0.8486
0.8092
0.5474
0.9073
0.0961
0.5587
0.8647
13.2201
0.9083
0.9554
17.6997
0.8859
0.5849
0.9013
0.6724
0.8417
0.9589
0.5786
0.7523
0.8993
0.7803
0.8524
0.9073
0.8925
0.8784
0.8334
0.94
0.6426
0.9297
0.8012
0.505
0.0609
0.3882
0.0177
0.1186
0.82
0.831
11.4098
0.8403
0.9056
11.1965
0.7599
0.6921
2.7545
25.5406
9.601
0.0961
22.2548
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
4
main
4
True
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
maldv/Awqward2.5-32B-Instruct
0.5602
0.6426
0.1147
0.1439
0.6185
0.8761
0.774
0.8402
0.7677
0.3952
0.8936
0.0961
0.4509
0.8499
11.4325
0.9018
0.9521
15.9367
0.8822
0.1147
0.9043
0.6552
0.7986
0.9357
0.2831
0.6264
0.8221
0.798
0.7648
0.8936
0.8952
0.8768
0.7884
0.774
0.6426
0.9297
0.6106
0.4517
0.0281
0.0054
0.0354
0.0038
0.6468
0.8017
8.7585
0.8282
0.8977
9.7368
0.7486
0.6921
2.7545
25.5406
9.601
0.0961
22.2548
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
4
main
0
True
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
sail/Sailor2-1B-Chat
0.1426
0
0
0.0381
0.1272
0.3204
0.02
0.5044
0.0302
0.0596
0.4119
0.0567
0.0532
0.6913
2.854
0.5506
0.787
1.8226
0.5576
0
0.4679
0.0057
0
0.2395
0.0167
0.0955
0.0468
0.0915
0.0067
0.4119
0
0
0.2536
0.02
0
0
0.159
0.109
0
0
0
0
0.1907
0.627
2.0655
0.4419
0.7643
1.2142
0.4674
0.6711
1.4044
20.8129
5.6895
0.0567
17.0481
Qwen2ForCausalLM
bfloat16
apache-2.0
0.988
16
main
0
False
v1.4.1
v0.6.3.post1
πŸ”Ά : fine-tuned
sail/Sailor2-1B-Chat
0.2862
0
0.3086
0.0576
0.2628
0.3542
0.172
0.6467
0.4443
0.1942
0.651
0.0567
0.1217
0.7414
4.8854
0.6622
0.9022
8.7308
0.7654
0.3086
0.4679
0.3822
0.5042
0.3351
0.32
0.2635
0.5119
0.6591
0.164
0.651
-0.2949
-0.3865
0.2595
0.172
0
0
0.2621
0.141
0.0013
0.0182
0.0088
0.0138
0.246
0.661
3.7596
0.544
0.8459
5.7273
0.6151
0.6711
1.4044
20.8129
5.6895
0.0567
17.0481
Qwen2ForCausalLM
bfloat16
apache-2.0
0.988
16
main
4
False
v1.4.1
v0.6.3.post1
🀝 : base merges and moerges
sthenno-com/miscii-14b-0218
0.6489
0.5964
0.5701
0.2502
0.741
0.8816
0.9
0.8478
0.7837
0.5562
0.9144
0.0965
0.5429
0.8631
12.8617
0.9091
0.9538
17.1485
0.8838
0.5701
0.9038
0.6897
0.8014
0.9508
0.5904
0.7131
0.8611
0.779
0.7871
0.9144
0.8985
0.8713
0.7902
0.9
0.5964
0.9438
0.769
0.5352
0.0759
0.3093
0.0088
0.0774
0.7797
0.8275
11.1756
0.8402
0.9045
11.0641
0.7579
0.6923
2.903
24.6089
9.6557
0.0965
21.7174
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
21
main
4
True
v1.4.1
v0.6.3.post1
🀝 : base merges and moerges
sthenno-com/miscii-14b-0218
0.5659
0.5944
0.2035
0.1176
0.6521
0.8542
0.844
0.831
0.7447
0.4092
0.8776
0.0965
0.44
0.8488
10.4554
0.9036
0.9456
15.8815
0.8654
0.2035
0.897
0.6264
0.8208
0.9339
0.3656
0.667
0.7494
0.7203
0.8064
0.8776
0.8863
0.8556
0.7318
0.844
0.5944
0.9438
0.6372
0.4219
0.0206
0.0071
0
0.0018
0.5587
0.8008
8.3942
0.8274
0.8859
9.0839
0.7277
0.6923
2.903
24.6089
9.6557
0.0965
21.7174
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
21
main
0
True
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
Rakuten/RakutenAI-2.0-mini-instruct
0.448
0
0.5467
0.2933
0.3378
0.7231
0.328
0.7857
0.5878
0.4173
0.8482
0.0605
0.4199
0.8302
8.4014
0.865
0.9154
11.2978
0.7683
0.5467
0.6335
0.4943
0.5486
0.8463
0.4198
0.34
0.6323
0.6357
0.628
0.8482
0.7876
0.7536
0.6894
0.328
0
0.0884
0.3356
0.4124
0.0868
0.5616
0.0265
0.116
0.6754
0.7997
11.7606
0.7766
0.9027
11.1908
0.7328
0.6614
1.5224
17.6215
6.0602
0.0605
15.2492
MistralForCausalLM
bfloat16
apache-2.0
1.535
16
main
4
True
v1.4.1
v0.6.3.post1
β­• : instruction-tuned
Rakuten/RakutenAI-2.0-mini-instruct
0.404
0
0.5255
0.3792
0.1065
0.7804
0.012
0.7162
0.7381
0.3299
0.7961
0.0605
0.3422
0.7862
6.4587
0.7776
0.8679
7.9085
0.7264
0.5255
0.6385
0.7874
0.9139
0.8713
0.395
0.0181
0.7827
0.4331
0.7737
0.7961
0.8914
0.8669
0.8313
0.012
0
0.0884
0.195
0.2525
0.136
0.6899
0.0265
0.2339
0.8096
0.7675
10.1966
0.7226
0.8514
6.08
0.6383
0.6614
1.5224
17.6215
6.0602
0.0605
15.2492
MistralForCausalLM
bfloat16
apache-2.0
1.535
16
main
0
True
v1.4.1
v0.6.3.post1
🟒 : pretrained
AIDC-AI/Marco-LLM-GLO
0.2675
0
0
0.0548
0.2713
0.5859
0.108
0.7132
0.1895
0.1609
0.7664
0.0927
0.1724
0.8322
9.1884
0.8768
0.8496
6.384
0.7121
0
0.5326
0.2155
0.4417
0.6971
0.0751
0.0127
0.1253
0.0013
0.164
0.7664
0.7949
0.7247
0.528
0.108
0
0
0.5298
0.2351
0
0.0003
0
0
0.2737
0.733
4.6039
0.7257
0.8006
2.8995
0.5381
0.6941
2.2966
25.2266
9.2781
0.0927
21.4254
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
2
main
0
True
v1.4.1
v0.6.3.post1
🟒 : pretrained
AIDC-AI/Marco-LLM-GLO
0.5247
0
0.4223
0.2366
0.6447
0.8055
0.72
0.8231
0.6891
0.4422
0.8953
0.0927
0.4377
0.8588
11.728
0.9027
0.9514
16.0875
0.8792
0.4223
0.8587
0.5086
0.6861
0.8731
0.4498
0.6001
0.8607
0.7393
0.6509
0.8953
0.8493
0.8156
0.6846
0.72
0
0
0.6894
0.439
0.0205
0.296
0.115
0.053
0.6986
0.8009
10.1615
0.7932
0.8873
9.2224
0.7172
0.6941
2.2966
25.2266
9.2781
0.0927
21.4254
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
2
main
4
True
v1.4.1
v0.6.3.post1
🀝 : base merges and moerges
wanlige/li-14b-v0.4
0.506
0.2329
0.231
0.1092
0.6432
0.8051
0.788
0.6495
0.7472
0.3886
0.8768
0.0948
0.3802
0.7462
9.8565
0.7101
0.8488
14.8133
0.5824
0.231
0.7503
0.6207
0.8153
0.9339
0.4143
0.6673
0.7995
0.7241
0.7763
0.8768
0.8886
0.8617
0.7313
0.788
0.2329
0.4498
0.6191
0.3714
0.0096
0.0088
0
0.0007
0.5269
0.7245
8.1027
0.6944
0.842
8.8589
0.6111
0.6859
2.9512
24.6266
9.4794
0.0948
21.5086
Qwen2ForCausalLM
bfloat16
14.77
15
main
0
True
v1.4.1
v0.6.3.post1
🀝 : base merges and moerges
wanlige/li-14b-v0.4
0.6043
0.2329
0.5709
0.2256
0.7387
0.8791
0.894
0.7989
0.7746
0.5293
0.9086
0.0948
0.5206
0.8178
12.0742
0.8301
0.9539
17.0394
0.8847
0.5709
0.8973
0.6552
0.8444
0.9562
0.5615
0.7114
0.8505
0.7841
0.739
0.9086
0.9024
0.878
0.7839
0.894
0.2329
0.4498
0.766
0.5057
0.0456
0.337
0.0796
0.0711
0.5945
0.7477
10.4914
0.722
0.9044
10.952
0.7587
0.6859
2.9512
24.6266
9.4794
0.0948
21.5086
Qwen2ForCausalLM
bfloat16
14.77
15
main
4
True
v1.4.1
v0.6.3.post1