Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors

Title: Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors
Publication Type: Journal Article
Year of Publication: 2008
Authors: Hines, M. L., Eichner, H., and Schürmann, F.
Journal: Journal of Computational Neuroscience
Volume: 25
Pagination: 203–210
Keywords: Computer modeling, Computer simulation, Load balance, Neuronal networks, Parallel simulation
Abstract

Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double-precision values by each subtree at each time step. Splitting cells is useful for attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations, load balance results in almost ideal runtime scaling. Applying the cell-splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.

Full Text

Preprint available as splitcell.pdf 
Load balance is important for maximizing speedup when simulating neural networks on parallel hardware. With NEURON, load balance can be achieved by splitting cells into subtrees that are solved on different processors with no change in accuracy, stability, or computational effort; interprocessor communication costs are minimal.
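To illustrate why splitting helps, here is a minimal sketch of the load-balancing idea in Python. It is not NEURON's actual API: the `balance` function and its greedy longest-processing-time (LPT) strategy are illustrative assumptions. The sketch splits any cell whose computational cost exceeds the ideal per-processor load into two roughly equal subtree pieces, mirroring the paper's two-subtree split, then assigns pieces greedily to the least-loaded processor.

```python
# Hypothetical sketch, not NEURON's API: LPT assignment of per-cell
# complexities to processors, splitting oversized cells into two
# subtree halves as in the two-subtree split described above.
import heapq

def balance(cell_costs, nhost):
    """Assign cells (or subtree halves) to nhost processors.

    Returns the sorted per-processor loads."""
    ideal = sum(cell_costs) / nhost
    # Split any cell larger than the ideal load into two halves,
    # standing in for a split at a suitable point in the neuron tree.
    pieces = []
    for cost in cell_costs:
        if cost > ideal:
            pieces.extend([cost / 2.0, cost / 2.0])
        else:
            pieces.append(cost)
    # LPT: place the largest remaining piece on the least-loaded host.
    loads = [(0.0, rank) for rank in range(nhost)]
    heapq.heapify(loads)
    for cost in sorted(pieces, reverse=True):
        load, rank = heapq.heappop(loads)
        heapq.heappush(loads, (load + cost, rank))
    return sorted(load for load, _ in loads)

# Example: four cells of very different sizes on two processors.
# Without splitting, the 10-unit cell alone caps the balance; with the
# split, both processors carry an equal load of 8.
print(balance([10.0, 3.0, 2.0, 1.0], 2))
```

In a real NEURON simulation the cost of each cell or subtree would come from the simulator's complexity measures rather than a hand-supplied list, and the split point would be chosen inside the dendritic tree; the sketch only conveys why splitting large cells lets runtime scale to more processors than whole-cell balancing allows.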