HiPPO-Prophecy: State-Space Models can Provably Learn Dynamical Systems in Context
Abstract
A novel weight construction for State Space Models (SSMs) enables them to predict the next state without parameter fine-tuning by extending the HiPPO framework to approximate derivatives of input signals.
This work explores the in-context learning capabilities of State Space Models (SSMs) and presents, to the best of our knowledge, the first theoretical explanation of a possible underlying mechanism. We introduce a novel weight construction for SSMs, enabling them to predict the next state of any dynamical system after observing previous states, without parameter fine-tuning. This is accomplished by extending the HiPPO framework to demonstrate that continuous SSMs can approximate the derivative of any input signal. Specifically, we derive an explicit weight construction for continuous SSMs and provide an asymptotic error bound on the derivative approximation. Discretizing this continuous SSM then yields a discrete SSM that predicts the next state. Finally, we demonstrate the effectiveness of our parameterization empirically. We view this work as an initial step toward understanding how SSM-based sequence models learn in context.
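The abstract describes a pipeline: a continuous HiPPO-style SSM compresses the observed history into coefficients from which the signal's derivative can be approximated, and discretizing that SSM gives a recurrence that predicts the next state. The sketch below illustrates this idea only; it uses the standard HiPPO-LegS matrices, a bilinear discretization, and value/derivative readouts at the right edge of the window that are assumptions made for illustration, not the paper's explicit weight construction or error bound.

```python
# Minimal illustrative sketch (assumed construction, not the paper's): a
# HiPPO-LegS state holds Legendre coefficients of the history seen so far;
# linear readouts at the right edge of the window recover the current value
# and an approximation of its derivative, and a one-step extrapolation gives
# a next-sample prediction.
import numpy as np

def hippo_legs(N):
    """Standard HiPPO-LegS transition matrix A and input vector B."""
    A = np.zeros((N, N))
    B = np.sqrt(2 * np.arange(N) + 1.0)
    for n in range(N):
        for k in range(N):
            if n > k:
                A[n, k] = np.sqrt((2 * n + 1) * (2 * k + 1))
            elif n == k:
                A[n, k] = n + 1
    return A, B

def predict_next(u, N=32):
    """After each sample u[k], predict u[k+1]; returns the prediction array."""
    A, B = hippo_legs(N)
    I = np.eye(N)
    n = np.arange(N)
    # Readouts of the Legendre reconstruction at the right edge x = t:
    # value uses P_n(1) = 1; derivative uses 2 * P_n'(1) = n(n+1),
    # with the remaining 1/t chain-rule factor applied inside the loop.
    w_val = np.sqrt(2 * n + 1.0)
    w_der = np.sqrt(2 * n + 1.0) * n * (n + 1)

    c = np.zeros(N)
    preds = np.zeros(len(u))
    for k, uk in enumerate(u, start=1):
        # Bilinear (Tustin) discretization of  dc/dt = -(A/t) c + (B/t) u
        # with unit timestep and t = k.
        lhs = I + A / (2 * k)
        rhs = (I - A / (2 * k)) @ c + (B / k) * uk
        c = np.linalg.solve(lhs, rhs)
        value = w_val @ c              # approx. u(t) at the current time
        deriv = (w_der @ c) / k        # approx. u'(t) from the expansion
        preds[k - 1] = value + deriv   # one-step extrapolation of the next sample
    return preds

# Toy dynamical system: a lightly damped oscillation sampled at unit intervals.
t = np.arange(256)
u = np.exp(-0.002 * t) * np.sin(0.05 * t)
preds = predict_next(u, N=32)
print("mean one-step error after warm-up:",
      np.abs(preds[64:-1] - u[65:]).mean())
```

Note that with these readouts the next-sample prediction is itself a fixed linear function of the SSM state, which is the kind of discrete recurrence the abstract refers to; the paper's contribution is an explicit construction of such weights together with an asymptotic error bound.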
Community
The following papers, recommended by the Semantic Scholar API via Librarian Bot, are similar to this paper:
- DISCO: learning to DISCover an evolution Operator for multi-physics-agnostic prediction (2025)
- Free Parametrization of L2-bounded State Space Models (2025)
- Fast Training of Recurrent Neural Networks with Stationary State Feedbacks (2025)
- A Low-complexity Structured Neural Network to Realize States of Dynamical Systems (2025)
- Learning with Imperfect Models: When Multi-step Prediction Mitigates Compounding Error (2025)
- A Survey on Structured State Space Sequence (S4) Models (2025)
- Dynamically Learning to Integrate in Recurrent Neural Networks (2025)