# gemma3-270m-leetcode-gguf

**Original model**: [Codingstark/gemma3-270m-leetcode](https://huggingface.co/Codingstark/gemma3-270m-leetcode)

**Format**: GGUF

**Quantization**: bf16

This is a GGUF conversion of the Codingstark/gemma3-270m-leetcode model, suitable for use with LM Studio, Ollama, and other GGUF-compatible inference engines.
## Usage

Load this model in any GGUF-compatible application by referencing the `.gguf` file.
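For example, the model can be run from the command line with llama.cpp's `llama-cli`. This is a minimal sketch; the `.gguf` filename below is an assumed example, so substitute the actual file shipped in this repository:

```shell
# Run a single prompt against the converted model with llama.cpp.
# gemma3-270m-leetcode-bf16.gguf is an assumed filename; use the
# actual .gguf file from this repository.
llama-cli -m gemma3-270m-leetcode-bf16.gguf \
  -p "Write a Python function that reverses a linked list." \
  -n 256
```

Here `-m` selects the model file, `-p` supplies the prompt, and `-n` caps the number of tokens generated.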
## Model Details

- **Original Repository**: Codingstark/gemma3-270m-leetcode
- **Converted Format**: GGUF
- **Quantization Level**: bf16 (16-bit brain floating point; the weights are converted at full 16-bit precision, not quantized to a lower bit width)
- **Compatible With**: LM Studio, Ollama, llama.cpp, and other GGUF inference engines
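To use the file with Ollama, a minimal Modelfile can point at the local `.gguf` file. This is a sketch; the filename and model tag below are assumed examples:

```
# Modelfile — the .gguf filename is an assumed example
FROM ./gemma3-270m-leetcode-bf16.gguf
```

The model can then be registered and run with `ollama create gemma3-270m-leetcode -f Modelfile` followed by `ollama run gemma3-270m-leetcode`.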
## Conversion Process

This model was converted using the llama.cpp conversion scripts with the following settings:

- Input format: Hugging Face Transformers
- Output format: GGUF
- Quantization: bf16
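The settings above correspond roughly to the following invocation of llama.cpp's `convert_hf_to_gguf.py` script. The local paths and output filename are assumptions; adjust them to your own checkout and model directory:

```shell
# Fetch llama.cpp and install the conversion script's Python dependencies.
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the Hugging Face checkpoint to a bf16 GGUF file.
# ./gemma3-270m-leetcode is an assumed local path to the original model.
python llama.cpp/convert_hf_to_gguf.py ./gemma3-270m-leetcode \
  --outtype bf16 \
  --outfile gemma3-270m-leetcode-bf16.gguf
```

`--outtype bf16` keeps the weights at 16-bit precision; lower-bit quantizations (e.g. Q4_K_M) would be produced in a separate step with the `llama-quantize` tool.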
## License

Please refer to the original model's license terms.