NEURON on GPUs?

General issues of interest both for network and
individual cell parallelization.

Moderator: hines

catubc
Posts: 21
Joined: Mon Apr 07, 2014 6:46 pm

NEURON on GPUs?

Post by catubc »

Hi everyone

Is this project MIA? The last link I found dates back to 2014:

http://bitbucket.org/nrnhines/nrngpu

Just getting a bit impatient with single-node jobs sitting in cluster queues for many hours... Fantasizing about taking matters into my own hands :)

Thanks!
catubc
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: NEURON on GPUs?

Post by hines »

The original attempt has been abandoned because of the difficulty of managing two very different code bases. The present attempt is in the context of CoreNEURON, which will become a plugin to NEURON but presently requires NEURON to write a model data file that is then read by CoreNEURON and simulated (a 7-fold memory savings). All computations, including the tree solver and NET_RECEIVE, are now on the GPU; only spike exchange is handled by the CPU. This works very nicely for fixed-step, spike-coupled networks, and gap junctions are coming very soon. At that point it will be available as open source, hopefully by the end of the summer.
For large, memory-bandwidth-limited models, the speedup over a single core is about a factor of 10.
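For readers who want a concrete picture of the two-phase workflow described above, here is a minimal sketch in Python. The cache_efficient and nrnbbcore_write calls follow documented ParallelContext usage, but the CoreNEURON executable name and command-line flags shown in the final comment are assumptions and may differ in the released version.

Code: Select all

    # Sketch of the two-phase NEURON -> CoreNEURON workflow (not the released API).

    # ---- Phase 1: run inside NEURON to build the model and dump its data ----
    from neuron import h
    h.load_file("stdrun.hoc")

    pc = h.ParallelContext()

    # ... build cells, synapses, and NetCons here exactly as for an ordinary
    #     parallel network simulation ...

    h.cvode.cache_efficient(1)      # cache-efficient layout is required before the dump
    pc.set_maxstep(10)              # minimum NetCon delay, as in any parallel run
    h.stdinit()
    pc.nrnbbcore_write("coredat")   # write the CoreNEURON model data files to ./coredat

    # ---- Phase 2: run CoreNEURON on the dumped data (from the shell) ----
    # Executable name and option spellings below are assumptions; check the
    # CoreNEURON build you have:
    #   ./special-core --datpath coredat --tstop 1000 --gpu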
catubc
Posts: 21
Joined: Mon Apr 07, 2014 6:46 pm

Re: NEURON on GPUs?

Post by catubc »

Thanks Michael.

It sounds like this builds partly on the Blue Brain work you've done already. The speedup is a bit unclear, though: will an arbitrarily sized network speed up 10x on a GPU with ~500-1000 cores?

More practically, higher-end GPUs now have 3000-3500 cores, so will it be possible to assign each cell of a 3000-cell network to one of the ~3000 cores? Even if that only scales as sqrt(#cores), it would be a speedup of 10-50 times, and we could all run small networks on our desktops. Would be amazing!

Thanks for the amazing work (also to Ted!).

catubc
sgratiy
Posts: 41
Joined: Tue Mar 30, 2010 5:33 pm

Re: NEURON on GPUs?

Post by sgratiy »

@Michael: could you clarify what you mean by "large, memory-bandwidth-limited models"?