---
library_name: transformers
tags:
  - pruna-ai
---

# Model Card for PrunaAI/test-tiny-random-llama4-smashed

This model was created using the `pruna` library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.

## Usage

First things first, you need to install the `pruna` library:

```bash
pip install pruna
```

You can then load this model using the following code:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_hub("PrunaAI/test-tiny-random-llama4-smashed")
```

After loading the model, you can use the inference methods of the original model.
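
For example, since the stored architecture is `Llama4TextModel` (a bare text backbone without a language-modeling head), a plain forward pass returns hidden states rather than generated text. The snippet below is a minimal sketch, assuming `PrunaModel` delegates calls to the wrapped transformers model; the input IDs are arbitrary placeholders.

```python
import torch

from pruna import PrunaModel

loaded_model = PrunaModel.from_hub("PrunaAI/test-tiny-random-llama4-smashed")

# Arbitrary token IDs within the model's vocabulary (vocab_size = 202048).
input_ids = torch.randint(0, 202048, (1, 8))

# A bare Llama4TextModel returns hidden states, not generated text.
with torch.no_grad():
    outputs = loaded_model(input_ids)

print(outputs.last_hidden_state.shape)  # (1, 8, 16), since hidden_size = 16
```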

## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file.

```json
{
    "batcher": null,
    "cacher": null,
    "compiler": null,
    "pruner": null,
    "quantizer": null,
    "max_batch_size": 1,
    "device": "cpu",
    "save_fns": [],
    "load_fns": [
        "transformers"
    ],
    "reapply_after_load": {
        "pruner": null,
        "quantizer": null,
        "cacher": null,
        "compiler": null,
        "batcher": null
    }
}
```
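
All algorithm slots (`batcher`, `cacher`, `compiler`, `pruner`, `quantizer`) are `null`, so no compression was actually applied to this test model. For reference, the sketch below shows how such a configuration is typically assembled with pruna's `smash`/`SmashConfig` API; the commented-out `quantizer` value is a hypothetical example, not part of this repository.

```python
# Minimal sketch, assuming pruna's documented smash/SmashConfig API.
from pruna import SmashConfig, smash
from transformers import AutoModel

base_model = AutoModel.from_pretrained("PrunaAI/test-tiny-random-llama4-smashed")

smash_config = SmashConfig()
# smash_config["quantizer"] = "hqq"  # hypothetical: enable a quantizer slot

# With every slot left unset, smashing is effectively a no-op wrapper,
# matching the all-null smash_config.json above.
smashed_model = smash(model=base_model, smash_config=smash_config)
smashed_model.save_pretrained("test-tiny-random-llama4-smashed")
```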

## Model Configuration

The configuration of the model is stored in the `config.json` file.

```json
{
    "config": {
        "architectures": [
            "Llama4TextModel"
        ],
        "attention_bias": false,
        "attention_chunk_size": 8192,
        "attention_dropout": 0.0,
        "attn_scale": 0.1,
        "attn_temperature_tuning": 4,
        "bos_token_id": 200000,
        "cache_implementation": "hybrid",
        "eos_token_id": [
            200001,
            200007,
            200008
        ],
        "floor_scale": 8192,
        "for_llm_compressor": false,
        "head_dim": 8,
        "hidden_act": "silu",
        "hidden_size": 16,
        "initializer_range": 0.02,
        "interleave_moe_layer_step": 1,
        "intermediate_size": 32,
        "intermediate_size_mlp": 64,
        "max_position_embeddings": 10485760,
        "model_type": "llama4_text",
        "moe_layers": [
            0,
            1,
            2,
            3,
            4
        ],
        "no_rope_layers": [
            1,
            1,
            1,
            0,
            1
        ],
        "num_attention_heads": 10,
        "num_experts_per_tok": 1,
        "num_hidden_layers": 5,
        "num_key_value_heads": 2,
        "num_local_experts": 4,
        "output_router_logits": false,
        "pad_token_id": 200018,
        "rms_norm_eps": 1e-05,
        "rope_scaling": {
            "factor": 8.0,
            "high_freq_factor": 4.0,
            "low_freq_factor": 1.0,
            "original_max_position_embeddings": 8192,
            "rope_type": "llama3"
        },
        "rope_theta": 500000.0,
        "router_aux_loss_coef": 0.001,
        "router_jitter_noise": 0.0,
        "tie_word_embeddings": false,
        "torch_dtype": "float32",
        "transformers_version": "4.51.3",
        "use_cache": true,
        "use_qk_norm": true,
        "vocab_size": 202048
    }
}
```
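
The tiny dimensions (`hidden_size` of 16, five layers, `head_dim` of 8) mark this as a random test checkpoint rather than a usable language model. Below is a minimal sketch for inspecting these values with the standard transformers API, assuming the repository's `config.json` is readable by `AutoConfig`.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("PrunaAI/test-tiny-random-llama4-smashed")

print(config.model_type)         # "llama4_text"
print(config.hidden_size)        # 16
print(config.num_hidden_layers)  # 5
# MoE layout: 4 local experts, 1 routed per token, in layers 0-4.
print(config.num_local_experts, config.num_experts_per_tok, config.moe_layers)
```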

## 🌍 Join the Pruna AI community!

You can find Pruna AI on Twitter, GitHub, LinkedIn, Discord, and Reddit.