Unable to run demo.py, please help

#2
by rdhoundiyal - opened

Hi,
I am getting the following error when running demo.py:

Traceback (most recent call last):
  File "/workspace/OSWorld-G/demo.py", line 171, in <module>
    main()
  File "/workspace/OSWorld-G/demo.py", line 101, in main
    llm = LLM(model="xlangai/Jedi-7B-1080p")
TypeError: Qwen2_5_VLProcessor.__init__() got multiple values for argument 'image_processor'

XLang NLP Lab org

Hi, thank you for your message!
We haven’t been able to reproduce the error on our end yet. The issue may be related to the versions of the packages you're using, including transformers.
Could you please try upgrading your transformers package by running the following command:

pip install --upgrade transformers

If the issue still persists after the upgrade, please share the versions of your packages by running the following code:

import transformers
import torch
import vllm

# Print the versions of the packages the demo depends on
print(f"transformers version: {transformers.__version__}")
print(f"torch version: {torch.__version__}")
print(f"vllm version: {vllm.__version__}")

For reference, here is the combination of versions that is working on our side:

  • transformers version: 4.52.4
  • torch version: 2.6.0+cu124
  • vllm version: 0.8.3

Let us know how it goes. We’re happy to help further if the problem continues!
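
If you'd rather match our setup exactly, one option is to pin those versions. This is just a sketch; the right torch wheel (e.g. the cu124 build) depends on your platform and CUDA installation:

pip install "transformers==4.52.4" "torch==2.6.0" "vllm==0.8.3"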
XLang NLP Lab org

By the way, we’ve made some minor updates to the repo, but they’re unrelated to the issue you encountered. Feel free to reach out if you have any further questions!

Hi, thanks, it works now. I also needed to pass llm = LLM(model="path/to/your/model", max_model_len=8192), since I was running on a 12 GB NVIDIA RTX 3060. It's working now.
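
For anyone else on a small GPU, here is a minimal sketch of that setup, using the same Jedi-7B checkpoint from the demo. The gpu_memory_utilization value is an illustrative assumption rather than part of the original demo:

from vllm import LLM

# Reduce the maximum context length so the KV cache fits on a 12 GB card (e.g. an RTX 3060)
llm = LLM(
    model="xlangai/Jedi-7B-1080p",
    max_model_len=8192,
    gpu_memory_utilization=0.9,  # optional: cap vLLM's share of GPU memory
)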

XLang NLP Lab org

Awesome! Happy to hear it's working now. Feel free to reach out if you hit any other issues!
