ReplaceMe: Training-Free Transformer Pruning via Layer Removal & Linear Transformations

Model Description

ReplaceMe is a novel method for transformer model compression that enables training-free block/layer pruning while maintaining model performance through linear transformations (LTs). The approach:

  • Identifies and removes contiguous blocks of layers
  • Applies mathematically derived linear transformations to preserve the information flow of the removed layers
  • Requires no fine-tuning or retraining
  • Works with standard transformer architectures (the LTs are merged into the original model weights, so no extra modules are added)
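The core estimation step can be sketched in a few lines: collect the hidden states X entering the pruned block and the states Y it would have emitted on calibration data, then solve for a linear map T in closed form. This is an illustrative NumPy sketch; the variable names, shapes, and synthetic data are assumptions, not the repository's actual API.

```python
import numpy as np

# Illustrative sketch of the idea: given hidden states X entering the
# pruned block and hidden states Y that the block would have produced,
# estimate a linear map T with least squares so that X @ T ~= Y.
rng = np.random.default_rng(0)
hidden = 64          # hidden size (illustrative)
tokens = 1024        # number of calibration tokens (illustrative)

X = rng.normal(size=(tokens, hidden))                    # inputs to the removed block
W_true = np.eye(hidden) + 0.01 * rng.normal(size=(hidden, hidden))
Y = X @ W_true                                           # outputs of the removed block

# Closed-form least-squares solution: T = argmin_T ||X T - Y||_F
T, *_ = np.linalg.lstsq(X, Y, rcond=None)

# T can then be merged into an adjacent weight matrix, so the pruned
# model keeps the standard transformer architecture.
err = np.linalg.norm(X @ T - Y) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.2e}")
```

Because T is just a matrix, merging it into a neighboring projection leaves the architecture unchanged, which is why no retraining or custom modules are needed.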

Key Features

  • 🚀 Zero-Training Pruning: Remove layers without any fine-tuning
  • 🧠 Performance Preservation: <8% accuracy drop in most cases
  • ⚡ Instant Speedup: fewer blocks → faster inference and lower memory use
  • 🔌 Plug-and-Play: Works with existing HuggingFace models

🔥 Performance Comparison of Pruning Methods (Llama 3 70B, 25% Compression)

| Method | Approach | num_pruned_layers | Dataset | State | race 🏁 acc | winogrande 🎲 acc | piqa 🧠 acc_norm | boolq ❓ acc | openbookqa 📖 acc_norm | sciq 🔬 acc_norm | lambada_openai 🦙 acc | ppl | Avg-acc 📊 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama 3 70B (baseline) | - | - | - | - | 0.470 | 0.834 | 0.822 | 0.875 | 0.426 | 0.942 | 0.759 | 2.731 | 0.750 |
| ReplaceMe* (LS) ✅ | LS | 20 | slim_orca | no training | 0.455 | 0.792 | 0.777 | 0.874 🏆 | 0.404 🏆 | 0.894 | 0.535 | 9.277 | 0.724 |
| ReplaceMe (Ours) ✅ | Cosine | 20 | slim_orca | no training | 0.467 🏆 | 0.792 🏆 | 0.779 🏆 | 0.872 | 0.394 | 0.918 🏆 | 0.634 🏆 | 5.232 🏆 | 0.727 🏆 |

Key:

  • 🏆 Best training-free result in each column
  • ✅ Training-free (our methods)
  • Task columns report accuracy (acc or acc_norm); ppl is lambada_openai perplexity (lower is better)

🔥 Our training-free methods retain 96.6% of baseline performance, while competing pruning approaches require expensive retraining!

Installation

```bash
pip install replaceme
# or
git clone https://github.com/mts-ai/ReplaceMe
cd ReplaceMe
pip install -e .
```

Basic Usage

```bash
# LSTSQ method (recommended)
run_replaceme --config ./reproduce/Replace_Me_pipeline_lstsq.yaml

# Cosine similarity method
run_replaceme --config ./reproduce/Replace_Me_pipeline_cosine.yaml
```

There are many parameters you can play with; visit our repo and discover 🔥🔥
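The two pipelines differ in how the linear transform is estimated: LSTSQ solves a closed-form least-squares problem, while the cosine variant optimizes the transform numerically against a cosine-similarity objective. Here is a toy NumPy sketch of the latter idea under assumed, synthetic data; it is not the repository's actual implementation.

```python
import numpy as np

# Toy sketch (assumptions, not the repo's code): estimate a linear map T
# by gradient ascent on the mean cosine similarity between X @ T and Y.
rng = np.random.default_rng(1)
d, n = 16, 256
X = rng.normal(size=(n, d))                            # block inputs
Y = X @ (np.eye(d) + 0.05 * rng.normal(size=(d, d)))   # block outputs

def mean_cosine(P, Y):
    num = (P * Y).sum(axis=1)
    den = np.linalg.norm(P, axis=1) * np.linalg.norm(Y, axis=1)
    return (num / den).mean()

T = np.eye(d)                                          # start from identity
cos_before = mean_cosine(X @ T, Y)
lr = 0.05
for _ in range(300):
    P = X @ T
    pn = np.linalg.norm(P, axis=1, keepdims=True)
    yn = np.linalg.norm(Y, axis=1, keepdims=True)
    cos = (P * Y).sum(axis=1, keepdims=True) / (pn * yn)
    # gradient of the mean cosine similarity with respect to P
    dP = (Y / (pn * yn) - cos * P / pn**2) / n
    T += lr * (X.T @ dP)                               # ascend the objective
cos_after = mean_cosine(X @ T, Y)
print(f"mean cosine: {cos_before:.4f} -> {cos_after:.4f}")
```

Unlike the least-squares objective, cosine similarity is scale-invariant per token, which is why this variant needs an iterative solver rather than a closed-form solution.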

Load Model

As noted above, the LTs are merged into the original transformer weights, so you load the pruned model exactly as usual:

```python
# Example
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MTSAIR/Llama3-53B-ReplaceMe"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is the ReplaceMe pruning method?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

output = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the model's reply is decoded
output = output[:, model_inputs.input_ids.shape[1]:]
response = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
```

Citation

If you use ReplaceMe in your research, please cite our paper:

@article{shopkhoev2025replaceme0,
  title   = {ReplaceMe: Network Simplification via Layer Pruning and Linear Transformations},
  author  = {Dmitriy Shopkhoev and Ammar Ali and Magauiya Zhussip and Valentin Malykh and Stamatios Lefkimmiatis and Nikos Komodakis and Sergey Zagoruyko},
  year    = {2025},
  journal = {arXiv preprint arXiv:2505.02819}
}