Research Note: Markov-Switching State-Space Models for Neural Decoding
This note documents a direction explored in my weekly reports: using Markov-switching state-space models (SSMs) as a controllable alternative to heavyweight end-to-end deep learning pipelines for neural decoding tasks.
Motivation
Three practical constraints motivate this direction:
- training cost can be high in large neural decoders,
- throughput can be limited for rapid experimentation,
- cross-dataset benchmarking needs models with transparent assumptions.
Base SSM
In continuous time, a standard linear SSM reads:
\[\dot{x}(t)=A(t)x(t)+B(t)u(t),\qquad y(t)=C(t)x(t)+D(t)u(t).\]

In discrete form (VAR-style latent dynamics):
\[y_t=Cx_t+w_t,\qquad x_t=\sum_{i=1}^p A_i x_{t-i}+v_t.\]

Regime-Switching Extension
With latent regime \(S_t\), both transition and observation structure can switch:
\[y_t=C_{S_t}x_{t,S_t}+w_t,\qquad x_{t,j}=\sum_{i=1}^p A_{i,j}x_{t-i,j}+v_{t,j},\;1\le j\le M.\]

Interpretation:
- each regime has its own linear dynamics,
- regime transitions are Markovian,
- observations are generated by the currently active regime.
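Under these definitions, the generative model can be simulated directly. The sketch below (plain NumPy) draws a Markovian regime path, propagates one latent VAR chain per regime, and emits observations from the currently active regime. All sizes, noise scales, and the damped-rotation dynamics are illustrative assumptions, with VAR order \(p=1\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical, chosen for the sketch only):
# M regimes, latent dim dx, observation dim dy, T steps, VAR order p = 1.
M, dx, dy, T = 2, 2, 3, 200

# Markov chain over regimes: row-stochastic transition matrix.
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])

# Regime-specific stable dynamics A_j: damped rotations at different speeds.
def damped_rotation(theta, decay=0.98):
    c, s = np.cos(theta), np.sin(theta)
    return decay * np.array([[c, -s], [s, c]])

A = np.stack([damped_rotation(0.1), damped_rotation(0.6)])  # (M, dx, dx)
C = rng.standard_normal((M, dy, dx))                        # regime-specific C_{S_t}
sig_v, sig_w = 0.05, 0.10                                   # state / observation noise std

S = np.zeros(T, dtype=int)   # regime path S_t
X = np.zeros((T, M, dx))     # one latent chain x_{t,j} per regime
y = np.zeros((T, dy))
X[0] = rng.standard_normal((M, dx))
y[0] = C[S[0]] @ X[0, S[0]] + sig_w * rng.standard_normal(dy)

for t in range(1, T):
    S[t] = rng.choice(M, p=P[S[t - 1]])                     # Markovian regime switch
    for j in range(M):                                      # each regime's own VAR(1)
        X[t, j] = A[j] @ X[t - 1, j] + sig_v * rng.standard_normal(dx)
    y[t] = C[S[t]] @ X[t, S[t]] + sig_w * rng.standard_normal(dy)
```

Note that, matching the equations above, every regime's latent chain evolves at every step; only the observation reads out the chain of the active regime.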
Why This Is Useful for EEG/MEG
Neural recordings are often non-stationary across time, subject condition, and device context. Regime switching gives a principled way to model piecewise-stable dynamics without forcing a single global linear model.
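One thing this structure buys you: given per-regime observation log-likelihoods at each time step, the filtered posterior over the active regime follows from a standard HMM forward recursion. A minimal sketch (plain NumPy; the likelihood inputs and toy values are assumptions, e.g. they could come from regime-specific Kalman filters in a collapsed switching-SSM approximation):

```python
import numpy as np

def forward_regime_probs(log_lik, P, pi):
    """Filtered regime posteriors p(S_t = j | y_{1:t}).

    log_lik : (T, M) array, log p(y_t | S_t = j) per step and regime
    P       : (M, M) row-stochastic regime transition matrix
    pi      : (M,) initial regime distribution
    """
    T, M = log_lik.shape
    alpha = np.zeros((T, M))
    a = pi * np.exp(log_lik[0] - log_lik[0].max())        # scaled for stability
    alpha[0] = a / a.sum()
    for t in range(1, T):
        pred = alpha[t - 1] @ P                           # predict p(S_t | y_{1:t-1})
        a = pred * np.exp(log_lik[t] - log_lik[t].max())  # fold in new evidence
        alpha[t] = a / a.sum()
    return alpha

# Toy check: evidence favors regime 0 early, regime 1 late.
P = np.array([[0.9, 0.1], [0.1, 0.9]])
ll = np.full((6, 2), -5.0)
ll[:3, 0] = -1.0
ll[3:, 1] = -1.0
probs = forward_regime_probs(ll, P, np.array([0.5, 0.5]))
```

The filtered regime probabilities are exactly the piecewise-stability diagnostic the text describes: they localize where the recording switches between locally linear dynamical modes.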
Current Practical Goals
- Reduce computational cost compared with heavier baselines.
- Maintain competitive decoding quality on open EEG/MEG benchmarks.
- Improve interpretability by exposing regime-specific dynamics.
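On the interpretability goal: a regime's fitted dynamics matrix can be read off via its eigenvalues, whose complex angle and magnitude give an implied oscillation frequency and a per-step damping factor. A small sketch under assumed values (the matrix and the 250 Hz sampling rate are hypothetical, not fitted results):

```python
import numpy as np

# Hypothetical VAR(1) dynamics matrix for one regime (not a fitted result).
A_j = np.array([[0.92, -0.35],
                [0.35,  0.92]])
fs = 250.0  # assumed sampling rate in Hz

lam = np.linalg.eigvals(A_j)[0]                  # one of the conjugate pair
freq_hz = abs(np.angle(lam)) * fs / (2 * np.pi)  # implied oscillation frequency
decay = abs(lam)                                 # per-step damping; < 1 means stable
```

Eigenvalues near the unit circle with nonzero angle correspond to sustained rhythms at a specific frequency, which is one concrete way regime-specific dynamics stay human-readable.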
This is still a working direction, but it has strong potential as a bridge between statistical rigor and deployment-friendly decoding.