julia-alpha is a fine-tuned version of Qwen3-4B.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("TensorLabsAI/julia-alpha")
tokenizer = AutoTokenizer.from_pretrained("TensorLabsAI/julia-alpha")
```
Chat template
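The tokenizer is expected to ship with the base model's chat template. A minimal sketch using the standard transformers `apply_chat_template` API (the example message is illustrative, not from this card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TensorLabsAI/julia-alpha")

# Illustrative conversation; any list of role/content dicts works.
messages = [
    {"role": "user", "content": "Hello!"},
]

# Render the conversation to a prompt string using the bundled template.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```

With `tokenize=False` this returns the formatted prompt string; drop that flag (or pass `return_tensors="pt"`) to get token IDs ready for `model.generate`.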
Base model: Qwen/Qwen3-4B