We Can Now Simulate a Human Brain, Scientists Show
Why It Matters
A scalable human‑brain simulation could accelerate fundamental neuroscience and AI breakthroughs, but without accurate wiring data and ethical safeguards, its scientific value and societal impact remain uncertain.
Key Takeaways
- New method enables billions of neurons on exascale supercomputers.
- Parallel GPU allocation dramatically reduces the data-shuffling bottleneck.
- Simulating a full human brain still lacks accurate connectome mapping.
- The earlier Human Brain Project faltered due to unclear scientific goals.
- Ethical concerns arise if a simulated brain attains consciousness.
Summary
The video discusses a breakthrough paper claiming that the next generation of exascale supercomputers will be capable of simulating a full human brain. By leveraging a novel parallel‑GPU architecture, researchers say they can allocate hundreds of thousands of neurons to each GPU and then interconnect the units, sidestepping the massive data‑movement bottleneck that limited earlier attempts.
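The allocation scheme described above can be sketched conceptually: neurons are partitioned across devices, each device updates its own neurons locally, and only small spike messages cross device boundaries instead of full neuron state. The following toy sketch illustrates that idea only; the names, the partitioning function, and the dynamics are illustrative assumptions, not the paper's actual code.

```python
# Conceptual sketch of neuron partitioning across GPUs: local updates per
# device, with only spike IDs (small messages) exchanged between devices.
# Everything here is an illustrative toy model, not the paper's method.
import random

random.seed(0)

N_NEURONS = 1_000
N_DEVICES = 4

def device_of(neuron_id: int) -> int:
    """Block-partition neuron IDs evenly across devices."""
    return neuron_id * N_DEVICES // N_NEURONS

def step(potentials, threshold=1.0):
    """One toy update sweep; returns spike IDs routed to each device."""
    outgoing = {d: [] for d in range(N_DEVICES)}
    for nid, v in enumerate(potentials):
        v += random.uniform(0.0, 0.2)                   # toy input current
        if v >= threshold:                              # neuron fires
            v = 0.0                                     # reset potential
            target = random.randrange(N_NEURONS)        # toy random synapse
            outgoing[device_of(target)].append(target)  # only the ID travels
        potentials[nid] = v
    return outgoing

potentials = [0.0] * N_NEURONS
spikes = step(potentials)
# Each device receives a short list of spike IDs, not full neuron state.
print(f"spike messages this step: {sum(len(v) for v in spikes.values())}")
```

The point of the design is that per-step traffic scales with the number of spikes, not with the number of neurons, which is what sidesteps the data-movement bottleneck.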
The authors estimate that a single NVIDIA A100 can handle roughly 225,000 neurons, while the upcoming JUPITER system in Germany could support about 800,000 neurons per GPU, reaching close to 20 billion neurons—approximately a quarter of the human brain’s 80 billion neurons. This follows a progression from worm (≈300 neurons) to fruit-fly (≈140,000 neurons) and mouse (≈70 million neurons) simulations, illustrating a rapid scaling curve enabled by hardware advances.
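The scaling figures quoted above can be checked with back-of-the-envelope arithmetic. The per-GPU capacities below are the article's estimates; the implied GPU count is derived here for illustration and is not a number from the paper.

```python
# Back-of-the-envelope check of the article's scaling figures.
A100_NEURONS_PER_GPU = 225_000      # estimated capacity of one NVIDIA A100
JUPITER_NEURONS_PER_GPU = 800_000   # estimated capacity per JUPITER GPU

TARGET_NEURONS = 2 * 10**10         # 20 billion neurons, the stated reach
HUMAN_BRAIN_NEURONS = 8 * 10**10    # ~80 billion neurons in a human brain

# How many JUPITER GPUs would the stated target imply? (illustrative)
gpus_needed = TARGET_NEURONS // JUPITER_NEURONS_PER_GPU
fraction_of_brain = TARGET_NEURONS / HUMAN_BRAIN_NEURONS

print(f"GPUs implied at 800k neurons each: {gpus_needed:,}")   # 25,000
print(f"Fraction of human brain covered: {fraction_of_brain:.0%}")  # 25%
```

At 800,000 neurons per GPU, the 20-billion-neuron target implies on the order of 25,000 GPUs, consistent with the article's "quarter of the brain" framing.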
The paper explicitly states, “network sizes of 2×10^10 neurons can be reached with our approach on the upcoming exascale supercomputer JUPITER.” Researchers who previously contributed to the EU’s Human Brain Project note that the earlier effort foundered because it lacked clear scientific objectives and a detailed connectome. Without a complete map of human neural wiring, any full-brain model would remain a speculative approximation.
If realized, such simulations could transform neuroscience, offering a testbed for theories of cognition and a new substrate for AI research. However, the absence of a true connectome, challenges in training the model, and profound ethical questions about consciousness and rights underscore that technical feasibility alone does not guarantee practical or moral viability.