runtime error
Exit code: 1. Reason:
py3-none-any.whl size=108745 sha256=971641606bb74ed79eb1134b818f9c23f966daf879b4854b0a3bd10dec3d5964
  Stored in directory: /home/user/.cache/pip/wheels/5c/09/7e/7154d95be3598ce30655fa68ff29097a307324e0430625e3c6
Successfully built autoawq
Installing collected packages: zstandard, transformers, autoawq
  Attempting uninstall: transformers
    Found existing installation: transformers 4.51.3
    Uninstalling transformers-4.51.3:
      Successfully uninstalled transformers-4.51.3
Successfully installed autoawq-0.2.8 transformers-4.47.1 zstandard-0.23.0

tokenizer_config.json: 100%|██████████| 1.52k/1.52k [00:00<00:00, 13.1MB/s]
tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 74.1MB/s]
tokenizer.json: 100%|██████████| 1.80M/1.80M [00:00<00:00, 3.52MB/s]
special_tokens_map.json: 100%|██████████| 437/437 [00:00<00:00, 2.24MB/s]
config.json: 100%|██████████| 888/888 [00:00<00:00, 8.46MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 17, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3669, in from_pretrained
    hf_quantizer.validate_environment(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_awq.py", line 71, in validate_environment
    raise RuntimeError(
RuntimeError: GPU is required to run AWQ quantized model. You can use IPEX version AWQ if you have an Intel CPU
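The traceback shows transformers' AWQ quantizer refusing to load the model because no CUDA GPU is visible in the container. Below is a minimal sketch of a guarded load, assuming app.py loads an AWQ-quantized checkpoint through AutoModelForCausalLM; the model id is a placeholder (not taken from the logs) and device_map requires the accelerate package. On CPU-only hardware, the options the error itself points to are the IPEX AWQ build for Intel CPUs, GPU hardware for the container, or an unquantized checkpoint.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "some-org/some-model-AWQ"  # placeholder; substitute the checkpoint app.py actually uses

# Fail early with an actionable message instead of transformers' RuntimeError.
if not torch.cuda.is_available():
    raise SystemExit(
        "This checkpoint is AWQ-quantized and needs a CUDA GPU. "
        "Run on GPU hardware, use the IPEX AWQ build on an Intel CPU, "
        "or point MODEL_ID at an unquantized model for CPU inference."
    )

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # AWQ checkpoints are typically served in fp16
    device_map="cuda",          # needs accelerate; places the quantized weights on the GPU
)

This only changes where the failure is reported; the underlying requirement stands, so the container must either provide a GPU or the app must load a model that can run on CPU.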