How to build network model for coreNeuron 

General issues of interest both for network and
individual cell parallelization.

Moderator: hines

zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

Re: How to build network model for coreNeuron 

Post by zyc »

I built a stimulus model in NMODL and I want to run the simulation with CoreNEURON. It runs well on the CPU but cannot run on the GPU. I found the reason: I had defined all the variables in the PARAMETER block. After I moved two variables into the ASSIGNED block, it ran on the GPU. Could you please tell me why this happens, and what the difference is between variables in the PARAMETER block and the ASSIGNED block? Thank you so much!
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

If PARAMETER names are not explicitly declared RANGE in the NEURON block they are treated as global variables. This generates race conditions if you
modify those PARAMETERs in the mod file. ASSIGNED names are by default treated as RANGE variables.
I would expect that the mod2c translator generated some messages warning that the mod file was not thread safe.
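A minimal illustration of the distinction (the mechanism and variable names here are hypothetical, not from the model in question): a PARAMETER that will be modified per instance must be declared RANGE in the NEURON block, while ASSIGNED names are RANGE by default:

```
NEURON {
    SUFFIX mystim        : hypothetical mechanism name
    RANGE amp, dur       : explicitly RANGE, so each instance has its own copy
}

PARAMETER {
    amp = 0.1 (nA)       : without the RANGE declaration above, these would be
    dur = 5   (ms)       : globals, and per-instance writes would race on the GPU
}

ASSIGNED {
    onset (ms)           : ASSIGNED names are RANGE by default
}
```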
bremen
Posts: 45
Joined: Mon Apr 24, 2017 8:15 am
Location: Italy

Re: How to build network model for coreNeuron 

Post by bremen »

Hello.

I installed CoreNEURON following the tutorial on the official GitHub page, and when I run ringtest everything is fine.
Then I modified ringtest to use one of my models and tested it in parallel.
With NEURON and MPI on 4 cores, it runs correctly and the results are valid.

Then I exported it with "pc.nrnbbcore_write", compiled the new mods, recompiled CoreNEURON, and tried to run it with "mpirun -n 4 ./coreneuron_x86/bin/coreneuron_exec -e 400 -d test/ -mpi"

This is the error I obtain: nrn_setup.cpp:1150: Assertion '(ix >= 0) && (ix < nt.end)' failed.

The line "pc.psolve()" is commented out and "pc.nrnbbcore_write" is called after "h.stdinit()".
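For context, the export sequence described in this post looks roughly like the following sketch (it assumes a NEURON installation, a network already built on each rank, and an output directory named test/; everything other than the calls quoted above is my assumption):

```python
from neuron import h

pc = h.ParallelContext()

# ... build the network here: create cells, register gids, connect ...

h.stdinit()                    # initialize states at t = 0 first
pc.nrnbbcore_write("test")     # then write the CoreNEURON model data into test/
# pc.psolve(tstop) stays commented out: CoreNEURON does the time integration
```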
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

I'd like to try to reproduce that problem. Can you send me (michael.hines@yale.edu) a zip file with all the files needed to generate the dat directory?
I will also need the launch command for nrniv. Finally, what version of NEURON are you using?
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

The problem was that cdp5.mod was making use of the "diam" variable
and nrnbbcore_write and coreneuron did not know how to handle that special case.
The fix has been pushed to NEURON and CoreNEURON.
They can be obtained from
http://github.com/nrnhines/nrn.git (master branch)
and
http://github.com/nrnhines/coreneuron.git (diam branch)
zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

Re: How to build network model for coreNeuron 

Post by zyc »

Hello, is CoreNEURON able to use two or more GPUs? I tried using two threads to run ringtest, and it generated two network files after nrnbbcore_write(). However, when I use two threads to run this model with CoreNEURON, only one GPU is used. Could you please tell me whether CoreNEURON can use multiple GPUs? Thank you.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

Yes, in the sense that if there are one or more GPUs on a compute node and one MPI rank per compute node, then each rank will naturally use a different GPU.
However, at present multiple ranks and threads on a compute node will use only one GPU. I believe it will be straightforward to extend this behavior so that
multiple ranks/threads are associated with the available GPUs on a compute node in round-robin fashion, i.e. all we need is the quantity
ngpus = acc_get_num_devices( acc_device_nvidia ); // number of gpus on a node
and then, for each thread/rank, an appropriate
acc_set_device_num( igpu, acc_device_nvidia );
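The round-robin mapping described above is just a modulo over the device count; a minimal language-neutral sketch (the function and variable names are mine, not CoreNEURON's):

```python
def assign_gpu(rank, ngpus):
    """Map an MPI rank (or thread id) to a GPU id in round-robin fashion.

    ngpus plays the role of acc_get_num_devices(acc_device_nvidia);
    the returned id would be passed to acc_set_device_num().
    """
    if ngpus <= 0:
        raise ValueError("no GPUs available on this node")
    return rank % ngpus

# Example: 4 ranks on a node with 2 GPUs alternate between devices 0 and 1.
print([assign_gpu(r, 2) for r in range(4)])  # → [0, 1, 0, 1]
```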
bremen
Posts: 45
Joined: Mon Apr 24, 2017 8:15 am
Location: Italy

Re: How to build network model for coreNeuron 

Post by bremen »

Hi.

I have expanded my test network with three different cell types (Python/NEURON implementation), linked to specific parts of the gidlist.

In pseudo code:
for gid in gidlist:
    if gid < x:
        cells.append(Celltype1(gid))
    elif x <= gid < y:
        cells.append(Celltype2(gid))
    else:
        cells.append(Celltype3(gid))

I have no issues recording spike times with MPI and NEURON 7.5, but when I run the simulation with MPI and CoreNEURON, it saves the spike times only for Celltype3 and writes empty files for all the others.
If there are only two conditions, it saves the last one, as if the last condition overwrote all the previous ones.

Is there a better way to assign celltypes to specific GIDs?
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

I can't diagnose the cause of the problem from the information you have given me. Please send a zip file of all the needed NEURON code, with instructions on how to run it, and I will take a look.
bremen
Posts: 45
Joined: Mon Apr 24, 2017 8:15 am
Location: Italy

Re: How to build network model for coreNeuron 

Post by bremen »

I have done more testing. The code in my previous post is fine.
The problem was that the other two models generated no spikes in CoreNEURON.

Both models have TABLE statements in their mod files, and I just discovered that, if not specified otherwise in the code, the tables are active by default.
This explains the absence of spontaneous activity and the empty spike-time files, since the models do not work the same way as in NEURON.
Well... this ends my interest in CoreNEURON.
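For readers hitting the same issue: an NMODL TABLE statement generates a global variable named usetable_<suffix>, which defaults to 1 (tables on); setting it to 0 makes the mechanism compute values directly instead of interpolating. A sketch with a hypothetical suffix:

```
: in a mod file with SUFFIX mymech, a table-backed rate procedure looks like
PROCEDURE rates(v (mV)) {
    TABLE minf, mtau FROM -100 TO 100 WITH 200
    : ... assignments to minf and mtau ...
}
```

From the interpreter, h.usetable_mymech = 0 (Python) or usetable_mymech = 0 (hoc) then disables the table for that mechanism, which can help when comparing NEURON and CoreNEURON results.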
Post Reply