AWQ quantization script for Qwen2.5-VL 7B model (#8, opened 18 days ago by Berkesule)

Necessary transformers version (#7, 1 reply, opened 26 days ago by chefexperte)

lm head in the trained model is not in AWQ format (#6, opened about 2 months ago by pooya-mohammadi)

Is AWQ quantization applied only to the language model of this model? (#4, 1 reply, opened 2 months ago by sinchir0)

License clarification (#3, opened 2 months ago by ValerianGuillot)

Empty output when using Qwen2.5-VL-7B-Instruct-AWQ example code from README (#2, 7 replies, opened 2 months ago by WpythonW; see the sketch after this list)

Add link to paper and code (#1, opened 4 months ago by nielsr)
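
Threads #2 and #7 both concern running this checkpoint with transformers. For reference, here is a minimal sketch of single-image inference with Qwen/Qwen2.5-VL-7B-Instruct-AWQ, assuming a transformers release that includes Qwen2.5-VL support (roughly 4.49.0 or newer) plus the qwen-vl-utils and autoawq packages; the image URL and prompt are placeholders, and the snippet follows the general pattern of the model card's example rather than reproducing it verbatim.

```python
# Minimal sketch: single-image inference with the AWQ checkpoint.
# Assumes: pip install "transformers>=4.49.0" accelerate qwen-vl-utils autoawq
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-7B-Instruct-AWQ"

# Load the quantized model and its processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One user turn with an image plus a text prompt (placeholder URL).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/demo.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat prompt and collect the vision inputs.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate and strip the prompt tokens from the decoded output.
generated = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

Older transformers releases lack the Qwen2_5_VL classes entirely, so an import or auto-mapping error at load time usually points back to the version question raised in #7.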
