---
library_name: transformers
tags: [custom_generate]
---

## Description
Example repository used to document `generate` from the hub.

⚠️ This custom generation method has an impossible requirement and is meant to crash. If you try to run it, you should see something like

```
ValueError: Missing requirements for `transformers-community/custom_generate_bad_requirements`:
foo (installed: None)
bar==0.0.0 (installed: None)
torch>=99.0 (installed: 2.6.0+cu126)
```

## Base model
`Qwen/Qwen2.5-0.5B-Instruct`

## Model compatibility
Most models. More specifically, any `transformers` LLM/VLM trained for causal language modeling.

## Additional Arguments
`left_padding` (`int`, *optional*): number of padding tokens to add before the provided input.

## Output Type changes
(none)

## Example usage

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")

inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)

# Raises a ValueError: the requirements declared by this custom generation
# method can never be satisfied.
gen_out = model.generate(
    **inputs,
    custom_generate="transformers-community/custom_generate_bad_requirements",
    trust_remote_code=True,
)
```
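The failure comes from the requirements the repository declares for the custom method, which `transformers` validates before running any custom code. Judging from the error message above, the repository's `custom_generate/requirements.txt` presumably contains something like the following (inferred from the error output, not a verified listing of the file):

```
foo
bar==0.0.0
torch>=99.0
```

`torch>=99.0` pins a version that doesn't exist, and `foo` and `bar==0.0.0` are not expected to be installed in any environment, so at least one requirement is always reported as missing and the check raises before generation starts.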