Author Correction: Foundation Model of Neural Activity Predicts Response to New Stimulus Types

Nature – Health Policy
Apr 8, 2026

Why It Matters

Accurate methodological disclosure is essential for reproducibility and for other labs to build on the neural‑activity foundation model, reinforcing trust in AI‑driven neuroscience research.

Key Takeaways

  • Perspective MLP hidden size corrected to 16 dimensions.
  • Model runs as four‑head ensemble averaging predictions.
  • Modulation now uses only treadmill velocity and pupil radius.
  • Conv‑LSTM hidden states 6‑dim; CvT‑LSTM hidden states 16‑dim.
  • Core feedforward uses ELU in Conv‑LSTM, GELU in CvT‑LSTM.

Pulse Analysis

The emergence of foundation models that predict neural responses marks a turning point for computational neuroscience, mirroring the impact of large language models in AI. By training on massive datasets of visual stimuli and behavioral metrics, these models promise to decode how the brain integrates sensory input with internal states. However, the utility of such models hinges on transparent reporting of architecture, training regimes, and preprocessing steps—details that enable other researchers to validate findings and extend the work.

The recent correction in Nature addresses several oversights in the original methods description. It reveals that the perspective module’s hidden representation is 16 dimensions rather than the reported eight, and that the system operates as a four‑head ensemble, averaging standardized log‑responses across parallel networks. Moreover, the modulation module now ingests only treadmill velocity and pupil radius, simplifying the behavioral input space. Adjustments to hidden‑state sizes—six dimensions for Conv‑LSTM and sixteen for CvT‑LSTM—along with clarified non‑linearities (ELU vs. GELU) and the inclusion of a spatial grid in certain Conv‑LSTM variants, provide a more accurate blueprint for replication. Correcting equation formatting further eliminates ambiguity for developers implementing the model.
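The ensemble mechanic described above can be made concrete with a small sketch. This is not the authors' implementation: the weight shapes, the placeholder stimulus dimension, and the calibration step are illustrative assumptions; only the corrected hyperparameters (16-dimensional hidden layer, four heads, two behavioral inputs, ELU non-linearity) come from the correction itself. Each head z-scores its own log-responses using statistics fitted on a calibration batch, and the ensemble averages those standardized values.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 16      # corrected perspective-MLP hidden size (was reported as eight)
N_HEADS = 4      # the system operates as a four-head ensemble
N_BEHAVIOR = 2   # modulation inputs: treadmill velocity and pupil radius
N_STIM = 10      # placeholder stimulus-feature dimension (illustrative)

def elu(x):
    """ELU non-linearity, as clarified for the Conv-LSTM core."""
    return np.where(x > 0, x, np.expm1(x))

def make_head():
    """One ensemble head: a small MLP over concatenated stimulus features
    and the two behavioral inputs. Random weights; purely illustrative."""
    return {
        "w1": rng.normal(0, 0.1, (N_STIM + N_BEHAVIOR, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "w2": rng.normal(0, 0.1, (HIDDEN, 1)),
        "b2": np.zeros(1),
    }

def head_log_response(head, stim, behavior):
    x = np.concatenate([stim, behavior])
    h = elu(x @ head["w1"] + head["b1"])
    return (h @ head["w2"] + head["b2"]).item()

def calibrate(head, stim_batch, beh_batch):
    """Fit per-head standardization stats on a calibration batch so each
    head's log-responses can be z-scored before ensemble averaging."""
    r = np.array([head_log_response(head, s, b)
                  for s, b in zip(stim_batch, beh_batch)])
    head["mu"], head["sigma"] = r.mean(), r.std() + 1e-8

def ensemble_predict(heads, stim, behavior):
    """Average the standardized log-responses across the parallel heads."""
    z = [(head_log_response(h, stim, behavior) - h["mu"]) / h["sigma"]
         for h in heads]
    return float(np.mean(z))

heads = [make_head() for _ in range(N_HEADS)]
stim_batch = rng.normal(size=(50, N_STIM))
beh_batch = rng.uniform(0, 1, size=(50, N_BEHAVIOR))
for h in heads:
    calibrate(h, stim_batch, beh_batch)

stim = rng.normal(size=N_STIM)
behavior = np.array([0.3, 0.8])  # treadmill velocity, pupil radius
prediction = ensemble_predict(heads, stim, behavior)
```

Standardizing per head before averaging keeps any one head's output scale from dominating the ensemble, which is the practical point of reporting the averaging scheme precisely.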

Beyond fixing documentation, this amendment underscores a broader cultural shift toward rigorous reproducibility in AI‑enhanced neuroscience. Precise architectural disclosures allow cross‑lab benchmarking, facilitate integration with emerging datasets, and accelerate the translation of model insights into clinical contexts such as visual prosthetics or neuropsychiatric diagnostics. As foundation models become more prevalent, the community’s commitment to meticulous reporting will be a decisive factor in turning theoretical breakthroughs into reliable, real‑world applications.
