---
license: mit
---

# Ling-Coder-lite-GPTQ-Int8
|
|
|
<p align="center">
    <img src="https://modelscope.cn/api/v1/models/inclusionAI/Ling-lite-base/repo?Revision=master&FilePath=ant-bailing.png&View=true" width="100"/>
</p>

<p align="center">
    🤗 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>
    🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
    🖥️ <a href="https://github.com/codefuse-ai/Ling-Coder-Lite">GitHub</a>
</p>
|
|
|
## Introduction

Ling-Coder-Lite is a MoE LLM provided and open-sourced by InclusionAI, with 16.8B total parameters and 2.75B activated parameters. The model demonstrates state-of-the-art performance on 12 coding benchmarks, while simultaneously offering competitive latency and throughput compared to code LLMs of similar size. In addition to open-sourcing the model itself, we also release a substantial amount of code-related data, including synthetic QA, SFT, and DPO datasets. More details are described in the technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).

This repo contains the GPTQ-quantized 8-bit Ling-Coder-lite model, which can be served with vLLM.

## Model Downloads

The table below lists the available models and their key parameters so that you can choose the one that fits your use case. If you are located in mainland China, we also provide the models on modelscope.cn to speed up the download process. A minimal scripted-download sketch is shown after the table.
|
|
|
<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ling-Coder-lite-base | 16.8B | 2.75B | 16K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite-base) |
| Ling-Coder-lite | 16.8B | 2.75B | 16K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite) |
| Ling-Coder-lite-GPTQ-Int8 | 16.8B | 2.75B | 16K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-Coder-lite-GPTQ-Int8) |

</div>
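
As mentioned above, the weights can also be fetched programmatically with `huggingface_hub`. This is a minimal sketch, not part of the official instructions; the local directory path is an illustrative assumption:

```python
# Minimal sketch: download the GPTQ-Int8 weights with huggingface_hub.
# The local_dir value is an arbitrary example path, not an official layout.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="inclusionAI/Ling-Coder-lite-GPTQ-Int8",
    local_dir="./Ling-Coder-lite-GPTQ-Int8",
)
print(f"Model downloaded to: {local_path}")
```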
|
|
|
## Dataset Downloads

<div align="center">

| **Dataset** | **Samples** | **Download** |
| :------------: | :----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------: |
| Ling-Coder-SyntheticQA | 24M | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-SyntheticQA) |
| Ling-Coder-SFT | 5M | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-SFT) |
| Ling-Coder-DPO | 250K | [🤗 HuggingFace](https://huggingface.co/datasets/inclusionAI/Ling-Coder-DPO) |

</div>
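
The released datasets can be loaded with the `datasets` library. This is a minimal sketch, not official usage; streaming mode and the `train` split name are assumptions:

```python
# Minimal sketch: stream a few samples from Ling-Coder-SyntheticQA.
# Streaming avoids downloading the full 24M-sample corpus up front;
# the "train" split name is assumed here.
from datasets import load_dataset

ds = load_dataset("inclusionAI/Ling-Coder-SyntheticQA", split="train", streaming=True)
for i, sample in enumerate(ds):
    print(sample)
    if i >= 2:
        break
```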
|
|
|
## Evaluation

Detailed evaluation results are reported in our technical report [Ling-Coder-TR](https://huggingface.co/papers/2503.17793).

For the quantized model deployed via vLLM, preliminary evaluation results are presented below:

<div align="center">

| **Benchmark** | **Scores** |
| :------------: | :----------------: |
| HumanEval-Python | 88.41 |
| MBPP-Python | 73.28 |
| EvalPlus-HumanEval+ | 85.37 |
| EvalPlus-MBPP+ | 73.28 |

</div>
|
|
|
## Quickstart

### vLLM

Requirement: `vllm==0.6.3.post1`.

Apply `ling_gptq.patch` to your vLLM installation by executing:

```bash
patch -p1 < ling_gptq.patch -d $(python -c "from importlib.util import find_spec; print(find_spec('vllm').submodule_search_locations[0])")
```
|
|
|
```python
from vllm import LLM
from vllm.sampling_params import SamplingParams
from transformers import AutoTokenizer

model_name = "inclusionAI/Ling-Coder-lite-GPTQ-Int8"

# Load the GPTQ-Int8 weights with vLLM; adjust gpu_memory_utilization and
# max_model_len to fit your GPU.
model = LLM(model_name, trust_remote_code=True, gpu_memory_utilization=0.80, max_model_len=4096)

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True
)

# Build a chat-formatted prompt using the model's chat template.
prompt = "Write a quick sort algorithm in python."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Generate and print the completion.
sample_params = SamplingParams(max_tokens=1024, ignore_eos=False)
outputs = model.generate(text, sampling_params=sample_params)

for output in outputs:
    generated_text = output.outputs[0].text
    print(generated_text)
```
|
|
|
Note: Ling-Coder-lite-GPTQ-Int8 only reuses the DeepSeek MoE inference code path in vLLM; the model itself is not related to DeepSeek.

## Deployment

Please refer to the [GitHub](https://github.com/inclusionAI/Ling/blob/master/README.md) repository for deployment details. A minimal local-serving sketch is shown below.
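
As a minimal local-serving sketch, vLLM's OpenAI-compatible server can be started as follows (assuming the same patched `vllm==0.6.3.post1` environment described in the Quickstart; the flag values are illustrative, not required settings):

```bash
# Start an OpenAI-compatible server for the Int8 model (illustrative values).
vllm serve inclusionAI/Ling-Coder-lite-GPTQ-Int8 \
    --trust-remote-code \
    --gpu-memory-utilization 0.80 \
    --max-model-len 4096
```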
|
|
|
## License

This code repository is licensed under [the MIT License](https://www.modelscope.cn/models/inclusionAI/Ling-Coder-lite/file/view/master?fileName=LICENCE&status=0).

## Citation

```
@misc{codefuse2025samplemattersleveragingmixtureofexperts,
      title={Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM},
      author={Codefuse and Ling Team},
      year={2025},
      eprint={2503.17793},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2503.17793},
}
```