runtime error

Exit code: 1. Reason:

model-00004-of-00004.safetensors: 100%|█████████▉| 2.16G/2.16G [00:06<00:00, 355MB/s]
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 273, in <module>
    model = LlavaForConditionalGeneration.from_pretrained(MODEL_PATH, torch_dtype="bfloat16", device_map=0)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4400, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4793, in _load_pretrained_model
    caching_allocator_warmup(model_to_load, expanded_device_map, factor=2 if hf_quantizer is None else 4)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5795, in caching_allocator_warmup
    device_memory = torch.cuda.mem_get_info(index)[0]
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/memory.py", line 836, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 449, in cudart
    _lazy_init()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 372, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
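The traceback shows that `device_map=0` makes `from_pretrained` initialize CUDA device 0 while loading the checkpoint shards, and that initialization fails because the container exposes no NVIDIA driver (typically a Space or machine running on CPU-only hardware). Below is a minimal sketch of guarding the call on GPU availability; it assumes `MODEL_PATH` is defined as in `app.py`, and the checkpoint name shown here is only a hypothetical placeholder, not the one the app actually uses.

import torch
from transformers import LlavaForConditionalGeneration

# Hypothetical placeholder; app.py defines its own MODEL_PATH.
MODEL_PATH = "llava-hf/llava-1.5-7b-hf"

if torch.cuda.is_available():
    # GPU path: same call as app.py line 273, with all weights placed on GPU 0.
    model = LlavaForConditionalGeneration.from_pretrained(
        MODEL_PATH,
        torch_dtype=torch.bfloat16,
        device_map=0,
    )
else:
    # CPU fallback: omit device_map entirely so CUDA is never touched;
    # float32 is the safer dtype on CPU-only builds.
    model = LlavaForConditionalGeneration.from_pretrained(
        MODEL_PATH,
        torch_dtype=torch.float32,
    )

Alternatively, if the model is meant to run on a GPU, the underlying fix is to attach GPU hardware (or install the NVIDIA driver) rather than to change the loading code.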
