---
title: Quantize My Repo
emoji: 🔶
colorFrom: gray
colorTo: pink
sdk: docker
hf_oauth: true
hf_oauth_scopes:
  - read-repos
  - write-repos
  - manage-repos
pinned: false
failure_strategy: rollback
---

# About this Space
[![Website](https://img.shields.io/badge/Website-antigma.ai-orange?logo=google-chrome&logoColor=white)](https://antigma.ai) [![Twitter](https://img.shields.io/twitter/follow/antigma_labs?style=social)](https://twitter.com/antigma_labs) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/svxG9ffc) [![🤗 Hugging Face](https://img.shields.io/badge/HuggingFace-Antigma-yellow?logo=huggingface&logoColor=white)](https://huggingface.co/Antigma) [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/AntigmaLabs)

## Quantize your model

### For now, only the GGUF format is supported

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

To run this Space locally:

1. Log in with the Hugging Face CLI: `huggingface-cli login`
2. Run: `HF_TOKEN=$(cat ~/.cache/huggingface/token) docker compose up`
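GGUF quantization is commonly done with llama.cpp's conversion and quantization tools. A minimal local sketch of that workflow is below — the model ID, file names, and `Q4_K_M` quantization type are illustrative choices, not necessarily what this Space runs internally:

```shell
# Sketch: convert a Hugging Face model to GGUF and quantize it with llama.cpp.
# Model ID and quantization type are examples, not this Space's defaults.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Download the source model (requires `huggingface-cli login` first)
huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct --local-dir ./model

# Convert the checkpoint to a full-precision GGUF file
python convert_hf_to_gguf.py ./model --outfile model-f16.gguf

# Build the quantize tool, then produce a 4-bit quantized GGUF
cmake -B build && cmake --build build --target llama-quantize
./build/bin/llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

The quantized `model-Q4_K_M.gguf` can then be pushed back to the Hub (for example with `huggingface-cli upload`), which is the step this Space automates via the OAuth write scopes declared in the frontmatter.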