Spaces: Runtime error
Loads and samples video frames with accurate timestamps #2
by juvix - opened
- Loads and samples video frames with accurate timestamps (a rough sketch of this step follows below)
- Sends a precise multi-task prompt to LLaVA (frame analysis + transcription)
- Extracts and formats the output cleanly into visual events and speech
- Uses a default prompt to auto-analyze the uploaded video
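A minimal sketch of what the first two steps could look like in Python; `sample_frames`, `build_prompt`, and the OpenCV route are illustrative assumptions, not the PR's actual code:

```python
import cv2  # assumption: OpenCV for decoding; the PR may use a different library

def sample_frames(video_path: str, num_frames: int = 8):
    """Sample frames evenly across the video, returning (timestamp_s, frame) pairs."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total - 1, 1) / max(num_frames - 1, 1)
    samples = []
    for i in range(num_frames):
        idx = int(round(i * step))
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            samples.append((idx / fps, frame))  # timestamp derived from frame index
    cap.release()
    return samples

def build_prompt(timestamps):
    """Hypothetical multi-task prompt: per-frame analysis plus transcription."""
    times = ", ".join(f"{t:.2f}s" for t in timestamps)
    return (
        f"These {len(timestamps)} frames were sampled at {times}. "
        "1) Describe the key visual events at each timestamp. "
        "2) Transcribe any visible or inferable speech. "
        "Answer under the headings 'Visual events:' and 'Speech:'."
    )
```

Splitting the model's reply on those two headings would then yield the "visual events and speech" sections the description mentions.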
juvix changed pull request title from "Loads and samples video frames with accurate timestamps Sends a precise multi-task prompt to LLaVA (frame analysis + transcription) Extracts and formats the output cleanly into visual events and speech Uses a default prompt to auto-analyze the uploaded video" to "Loads and samples video frames with accurate timestamps"
you are a scholar and a gentleman, and you have all my thanks 🙏🏻
Tonic changed pull request status to merged
Container logs:
===== Application Startup at 2025-06-05 14:31:23 =====
Collecting flash-attn
Downloading flash_attn-2.7.4.post1.tar.gz (6.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.0/6.0 MB 76.7 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: torch in /usr/local/lib/python3.10/site-packages (from flash-attn) (2.7.0)
Requirement already satisfied: einops in /usr/local/lib/python3.10/site-packages (from flash-attn) (0.8.1)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (3.18.0)
Requirement already satisfied: typing-extensions>=4.10.0 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (4.13.2)
Requirement already satisfied: sympy>=1.13.3 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (1.14.0)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (3.4.2)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (3.1.6)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (2024.12.0)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.6.77 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (12.6.77)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.6.77 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (12.6.77)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.6.80 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (12.6.80)
Requirement already satisfied: nvidia-cudnn-cu12==9.5.1.17 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (9.5.1.17)
Requirement already satisfied: nvidia-cublas-cu12==12.6.4.1 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (12.6.4.1)
Requirement already satisfied: nvidia-cufft-cu12==11.3.0.4 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (11.3.0.4)
Requirement already satisfied: nvidia-curand-cu12==10.3.7.77 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (10.3.7.77)
Requirement already satisfied: nvidia-cusolver-cu12==11.7.1.2 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (11.7.1.2)
Requirement already satisfied: nvidia-cusparse-cu12==12.5.4.2 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (12.5.4.2)
Requirement already satisfied: nvidia-cusparselt-cu12==0.6.3 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (0.6.3)
Requirement already satisfied: nvidia-nccl-cu12==2.26.2 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (2.26.2)
Requirement already satisfied: nvidia-nvtx-cu12==12.6.77 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (12.6.77)
Requirement already satisfied: nvidia-nvjitlink-cu12==12.6.85 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (12.6.85)
Requirement already satisfied: nvidia-cufile-cu12==1.11.1.6 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (1.11.1.6)
Requirement already satisfied: triton==3.3.0 in /usr/local/lib/python3.10/site-packages (from torch->flash-attn) (3.3.0)
Requirement already satisfied: setuptools>=40.8.0 in /usr/local/lib/python3.10/site-packages (from triton==3.3.0->torch->flash-attn) (65.5.1)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.10/site-packages (from sympy>=1.13.3->torch->flash-attn) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/site-packages (from jinja2->torch->flash-attn) (2.1.5)
Building wheels for collected packages: flash-attn
Building wheel for flash-attn (setup.py): started
Building wheel for flash-attn (setup.py): finished with status 'done'
Created wheel for flash-attn: filename=flash_attn-2.7.4.post1-py3-none-any.whl size=217423 sha256=28e9668ba45815c7e7edbffd64fe959eade529c3fef67ba72bda90b57c48fb69
Stored in directory: /home/user/.cache/pip/wheels/59/ce/d5/08ea07bfc16ba218dc65a3a7ef9b6a270530bcbd2cea2ee1ca
Successfully built flash-attn
Installing collected packages: flash-attn
Successfully installed flash-attn-2.7.4.post1
[notice] A new release of pip is available: 25.0.1 -> 25.1.1
[notice] To update, run: /usr/local/bin/python -m pip install --upgrade pip
Failed to import llava_llama from llava.language_model.llava_llama. Error: Could not import module 'LlamaModel'. Are this object's requirements defined correctly?
Failed to import llava_qwen from llava.language_model.llava_qwen. Error: Could not import module 'LlamaModel'. Are this object's requirements defined correctly?
Failed to import llava_mistral from llava.language_model.llava_mistral. Error: Could not import module 'MistralModel'. Are this object's requirements defined correctly?
Failed to import llava_mixtral from llava.language_model.llava_mixtral. Error: Could not import module 'MixtralModel'. Are this object's requirements defined correctly?
Traceback (most recent call last):
File "/home/user/app/app.py", line 13, in <module>
from llava.model.builder import load_pretrained_model
File "/usr/local/lib/python3.10/site-packages/llava/__init__.py", line 1, in <module>
from .model import LlavaLlamaForCausalLM
ImportError: cannot import name 'LlavaLlamaForCausalLM' from 'llava.model' (/usr/local/lib/python3.10/site-packages/llava/model/__init__.py)
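For anyone hitting the same traceback: the "Failed to import" lines come from guarded imports inside the llava package, so `llava.model` ends up without `LlavaLlamaForCausalLM`, and the app's import at app.py line 13 then dies. A quick, hypothetical diagnostic (not part of this Space) to see what actually got exported, and whether the installed transformers still provides the base classes llava expects:

```python
# Hypothetical diagnostic, not part of the Space's app.py.
import importlib

# See which names survived the guarded imports in llava.model.
llava_model = importlib.import_module("llava.model")
print(sorted(n for n in dir(llava_model) if not n.startswith("_")))

# The failed imports complained about 'LlamaModel'/'MistralModel';
# check whether the installed transformers actually exposes them.
import transformers
print(transformers.__version__)
for cls in ("LlamaModel", "MistralModel", "MixtralModel"):
    print(cls, hasattr(transformers, cls))
```

If those classes are missing, a transformers/llava version mismatch is the likely culprit, and pinning transformers to the version the llava package was built against is one plausible fix.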
would you mind fixing this too? normally it's easy ;-)
hey there, i reverted the commit but the door is open for you to contribute if the errors above are resolved :-) so don't be shy to push and write your good name here :-)