---
license: mit
base_model:
- Wan-AI/Wan2.1-T2V-14B-Diffusers
- hunyuanvideo-community/HunyuanVideo
pipeline_tag: text-to-video
library_name: diffusers
---

# VORTA: Efficient Video Diffusion via Routing Sparse Attention

> TL;DR: VORTA accelerates video diffusion transformers with sparse attention and dynamic routing, achieving speedup with negligible quality loss.

## Quick Start

1. Download the checkpoints into the `./results` directory of the VORTA GitHub code repository.

   ```bash
   git lfs install
   git clone git@hf.co:anonymous728/VORTA
   # Move each checkpoint folder (wan-14B, hunyuan) into results/, e.g.:
   mv VORTA/wan-14B results/
   ```

   _Other ways to download the models can be found [here](https://huggingface.co/docs/hub/models-downloading#using-git)._

2. Follow the instructions in `README.md` to run sampling with speedup. 🤗
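If you prefer not to use `git-lfs`, a minimal sketch of the same download using the `huggingface_hub` Python library is shown below. The repo id is taken from the clone URL above; the target directory name `results` follows the instructions in step 1.

```python
# Alternative to git-lfs: fetch the VORTA checkpoints with huggingface_hub.
# Assumes repo id "anonymous728/VORTA" (from the git clone URL above) and
# that checkpoint folders (e.g. wan-14B, hunyuan) should land under ./results.
REPO_ID = "anonymous728/VORTA"
TARGET_DIR = "results"

if __name__ == "__main__":
    from huggingface_hub import snapshot_download

    # Downloads the full repository snapshot into ./results/.
    snapshot_download(repo_id=REPO_ID, local_dir=TARGET_DIR)
```

After the download, the checkpoint folders sit directly under `results/`, so no extra `mv` step is needed.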