john smith (kth8)
AI & ML interests: None yet
Recent Activity
New activity 3 days ago in google/gemma-3n-E4B-it-litert-preview: "gemma 3n, It's good"
New activity 7 days ago in ResembleAI/chatterbox: "latency - how to get sub 200ms ultra low latency as mentioned"
New activity 9 days ago in google/gemma-3n-E4B-it-litert-preview: "Would there ever be a GGUF for the gemma-3n models?"
Organizations: None yet
kth8's activity
gemma 3n, It's good
1 reply · #23 opened 4 days ago by techdef215

latency - how to get sub 200ms ultra low latency as mentioned
1 reply · #14 opened 7 days ago by saketfractal

Would there ever be a GGUF for the gemma-3n models?
24 reactions · 9 replies · #11 opened 12 days ago by bdutta

Optimal Sampling Settings
#1 opened 30 days ago by kth8

Update tokenizer_config.json
2 replies · #1 opened about 1 month ago by kth8

Recommended Sampling Parameters
8 replies · #2 opened about 2 months ago by kth8

Access Rejected
3 replies · #62 opened about 2 months ago by ansenang

Ollama version
2 replies · #53 opened 2 months ago by IrIA-EU

[MODELS] Discussion
38 reactions · 787 replies · #372 opened over 1 year ago by victor

Request: Create distill of Mistral Small 24B
3 replies · #128 opened 4 months ago by Kenshiro-28

Hardware requirements?
8 reactions · 29 replies · #19 opened 5 months ago by JohnnieB

Continuous output
1 reaction · 8 replies · #1 opened 8 months ago by kth8

MMLU-Pro benchmark
5 replies · #13 opened 8 months ago by kth8

Add large-v3-turbo model
2 reactions · 6 replies · #17 opened 8 months ago by kth8

GGUF conversion
4 reactions · 11 replies · #3 opened 9 months ago by compilade

The 8B Version Works Better
2 reactions · 2 replies · #44 opened 9 months ago by gr0010

CORRECTION: THIS SYSTEM MESSAGE IS ***PURE GOLD***!!!
17 reactions · 16 replies · #33 opened 9 months ago by jukofyork