Words in Motion: Extracting Interpretable Control Vectors for Motion Transformers
Abstract
Motion forecasting models can be controlled at inference by mapping motion features to human-interpretable natural language classes; this mapping also reveals how hidden states are arranged with respect to these features.
Transformer-based models generate hidden states that are difficult to interpret. In this work, we analyze hidden states and modify them at inference, with a focus on motion forecasting. We use linear probing to analyze whether interpretable features are embedded in hidden states. Our experiments reveal high probing accuracy, indicating latent space regularities with functionally important directions. Building on this, we use the directions between hidden states with opposing features to fit control vectors. At inference, we add our control vectors to hidden states and evaluate their impact on predictions. Remarkably, such modifications preserve the feasibility of predictions. We further refine our control vectors using sparse autoencoders (SAEs). This leads to more linear changes in predictions when scaling control vectors. Our approach enables mechanistic interpretation as well as zero-shot generalization to unseen dataset characteristics with negligible computational overhead.
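The control-vector idea described in the abstract can be illustrated in a few lines. The sketch below is not the authors' implementation: it assumes hidden states from a motion transformer have already been collected for samples with opposing features (e.g. "fast" vs. "slow"), fits a control vector as the normalized difference of class means, and adds a scaled copy of it to hidden states at inference. All names (`hidden_fast`, `hidden_slow`, `apply_control`, `alpha`) are hypothetical.

```python
import torch


def fit_control_vector(hidden_pos: torch.Tensor, hidden_neg: torch.Tensor) -> torch.Tensor:
    """Fit a control vector from hidden states of two opposing feature classes.

    hidden_pos / hidden_neg: (num_samples, hidden_dim) hidden states collected
    for e.g. "fast" vs. "slow" motion. The control vector is the normalized
    direction between the two class means.
    """
    direction = hidden_pos.mean(dim=0) - hidden_neg.mean(dim=0)
    return direction / direction.norm()


def apply_control(hidden: torch.Tensor, control: torch.Tensor, alpha: float) -> torch.Tensor:
    """Shift hidden states along the control direction; alpha scales the effect."""
    return hidden + alpha * control


# Hypothetical usage: hidden states of shape (N, D) from a motion transformer.
hidden_fast = torch.randn(128, 256) + 1.0   # stand-ins for real activations
hidden_slow = torch.randn(128, 256) - 1.0
speed_vector = fit_control_vector(hidden_fast, hidden_slow)

# At inference, nudge new hidden states toward "slower" predictions.
hidden_test = torch.randn(32, 256)
hidden_controlled = apply_control(hidden_test, speed_vector, alpha=-2.0)
```

As a sanity check before fitting such a vector, a linear probe on the same hidden states (e.g. a logistic regression classifier trained to separate the two classes) indicates whether the feature is linearly encoded at all, mirroring the probing step described in the abstract.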
Published as a conference paper at ICLR 2025.
Paper: https://arxiv.org/abs/2406.11624
Video: https://youtu.be/SO8DXN8ocdg
Repo: https://github.com/KIT-MRT/future-motion
OpenReview: https://openreview.net/forum?id=J9eKm7j6KD