I simply converted the original model to MLX format with 8-bit quantization for better performance on Apple Silicon Macs (M1, M2, M3 and M4 chips).
# Alejandroolmedo/DeepScaleR-1.5B-Preview-8bit-mlx
The Model [Alejandroolmedo/DeepScaleR-1.5B-Preview-8bit-mlx](https://huggingface.co/Alejandroolmedo/DeepScaleR-1.5B-Preview-8bit-mlx) was converted to MLX format from [agentica-org/DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) using mlx-lm version **0.20.5**.
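The card does not record the actual conversion command; the sketch below shows how an 8-bit MLX conversion of this kind can be reproduced with mlx-lm's `convert` CLI. The output directory name and exact flag combination are assumptions, not the author's recorded invocation.

```bash
# Hedged sketch: reproduce an 8-bit MLX conversion with mlx-lm's convert CLI.
# The card only states that mlx-lm 0.20.5 and 8-bit quantization were used;
# the flags and output path below are assumptions.
python -m mlx_lm.convert \
    --hf-path agentica-org/DeepScaleR-1.5B-Preview \
    --mlx-path DeepScaleR-1.5B-Preview-8bit-mlx \
    -q --q-bits 8
```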
Install the package with `pip install mlx-lm`, then load and run the model:
```python
from mlx_lm import load, generate

# Download the 8-bit MLX weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("Alejandroolmedo/DeepScaleR-1.5B-Preview-8bit-mlx")

prompt = "hello"
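# The example is cut off at this point in the excerpt. The lines below are a
# hedged sketch of the standard mlx-lm generation call (as in mlx-lm's
# model-card template), not necessarily the author's exact continuation.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)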