Model keeps repeating the prompt – how can I avoid this?

#9
by sunnyanna

Hi, I'm using the naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B model for inference via the transformers library.

I applied the chat template using tokenizer.apply_chat_template() with add_generation_prompt=True, and I can generate outputs successfully using model.generate(). However, I noticed that the model keeps repeating the full prompt—including the system and user messages—in its response, instead of generating only the assistant reply.

Here's a minimal example:

chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is quantum mechanics?"}
]
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_dict=True, return_tensors="pt").to(device)
output_ids = model.generate(**inputs, max_new_tokens=512)
decoded = tokenizer.decode(output_ids[0])
print(decoded)
HyperCLOVA X org

Hi! Great question!
This is expected behavior: for decoder-only models, the sequences returned by model.generate() contain the prompt's input_ids followed by the newly generated tokens, so the prompt is repeated in the output unless you explicitly slice it off. This is a common pattern when working with Hugging Face causal LMs.
Here’s a more complete example that extracts only the newly generated tokens by slicing out the input portion:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)

chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is quantum mechanics?"}
]

inputs = tokenizer.apply_chat_template(chat, return_dict=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
input_ids = inputs["input_ids"]
input_length = input_ids.shape[1]

output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    stop_strings=["<|stop|>", "<|endofturn|>"],  # stop at the model's end-of-turn markers
    tokenizer=tokenizer  # required so generate() can match stop_strings against decoded text
)

# Decode the full output (including prompt)
print("## Full output:")
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))

# Decode only the newly generated part
print("\n## Assistant's reply only:")
print(tokenizer.decode(output_ids[0][input_length:], skip_special_tokens=True))

This way, you’ll get only the assistant’s reply without the repeated prompt.
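By the way, if you'd rather not slice the tokens yourself, the text-generation pipeline can drop the prompt for you via return_full_text=False. Here is a minimal sketch of that alternative (assuming a transformers version where the pipeline and apply_chat_template behave as shown):

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
    device_map="cuda",
)

# Render the chat into a prompt string, then let the pipeline handle tokenization.
prompt = pipe.tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=False)

# return_full_text=False makes the pipeline return only the newly generated text.
result = pipe(prompt, max_new_tokens=512, return_full_text=False)
print(result[0]["generated_text"])  # assistant reply only, no repeated prompt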
Let me know if you have any more questions!
