[Model checkpoints will be released soon.]
Model Details
We introduce a new streaming paradigm, built on group position encoding, that bridges the mismatch between batch and streaming processing and enables large language models to achieve strong performance and generalization in streaming settings, without requiring any architectural modifications.
- Batch-processing: the LLM processes the input all at once, after the full sequence has been received.
- Streaming-processing: the LLM processes the input incrementally and in real time, as it arrives (see the sketches after this list).
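To make the contrast concrete, here is a minimal sketch using the Hugging Face `transformers` API. The checkpoint name is a placeholder (the paper's checkpoints are not yet released), and the sketch illustrates only the two input-arrival patterns, not the paper's method itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint, not the released model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# The input arrives as a sequence of chunks.
chunks = ["The quick brown fox", " jumps over", " the lazy dog."]

# Batch-processing: wait for the full input, then run a single forward pass.
full = tokenizer("".join(chunks), return_tensors="pt")
with torch.no_grad():
    batch_out = model(**full)

# Streaming-processing: process each chunk as it arrives, reusing the
# KV cache so earlier tokens are not recomputed.
past = None
for chunk in chunks:
    ids = tokenizer(chunk, return_tensors="pt", add_special_tokens=False).input_ids
    with torch.no_grad():
        out = model(input_ids=ids, past_key_values=past, use_cache=True)
    past = out.past_key_values
```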
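The mechanism named in the paper's title, group position encoding, is not described in this card. The following is a speculative sketch inferred from the title alone: position IDs are assigned per group rather than globally, so an interleaved stream reproduces the positions of the contiguous batch layout. Treat it as an illustration, not the paper's exact formulation.

```python
from itertools import count

def group_position_ids(group_labels):
    # Each token continues its own group's position counter, so interleaved
    # streaming chunks see the same positions as in the batch layout,
    # where each group's tokens are contiguous. (Speculative sketch.)
    counters = {}
    return [next(counters.setdefault(g, count())) for g in group_labels]

# An interleaved stream of source ("src") and target ("tgt") tokens:
print(group_position_ids(["src", "src", "tgt", "src", "src", "tgt"]))
# -> [0, 1, 0, 2, 3, 1]
```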
Model Sources
- Paper: https://arxiv.org/abs/2505.16983
- Repository: https://github.com/EIT-NLP/StreamingLLM
Citation
```bibtex
@misc{tong2025llmeffectivestreamingprocessor,
  title={LLM as Effective Streaming Processor: Bridging Streaming-Batch Mismatches with Group Position Encoding},
  author={Junlong Tong and Jinlan Fu and Zixuan Lin and Yingqi Fan and Anhao Zhao and Hui Su and Xiaoyu Shen},
  year={2025},
  eprint={2505.16983},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.16983},
}
```