How to build network model for coreNeuron 

General issues of interest both for network and
individual cell parallelization.

Moderator: hines

zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

How to build network model for coreNeuron 

Post by zyc »

I am a beginner with CoreNEURON. I have tested the "ring network" example of CoreNEURON. Now I want to modify the size and the structure of the network, but I don't know how these network files (.dat files) are built by NEURON, so I am not able to modify the network. How can I use NEURON to build the network for CoreNEURON, namely, how can I use NEURON to generate the .dat files read by CoreNEURON?
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

The present usage mode of CoreNEURON is to develop the model as a normal NEURON network model in which all real cells are associated with global identifiers. That model is run in the usual NEURON fashion, at least through setup and initialization, and then
ParallelContext.nrnbbcore_write("folder") is called (a previous cvode.cache_efficient(1) invocation is required), which writes all the data describing the model; those files are read by CoreNEURON to do the simulation.
If you have the latest ringtest github repository, be sure to look at the README.md file for complete instructions.

nrniv -python ringtest.py --help
will display the parameters that define the size and structure of the model.
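
In outline, the NEURON-side sequence looks roughly like the sketch below. This is a minimal illustration, not the ringtest code itself; the single-compartment cell and the "coredat" output folder name are hypothetical placeholders.

Code: Select all

# Minimal sketch of the workflow described above (placeholders, not ringtest.py).
import os
from neuron import h
h.load_file("stdrun.hoc")

pc = h.ParallelContext()

# Build the network as usual and give every real cell a global identifier.
soma = h.Section(name="soma")
soma.insert("hh")
gid = int(pc.id())                      # one toy cell per rank
pc.set_gid2node(gid, pc.id())
nc = h.NetCon(soma(0.5)._ref_v, None, sec=soma)
pc.cell(gid, nc)                        # associate the gid with this cell's spike source

os.makedirs("coredat", exist_ok=True)   # make sure the output folder exists
h.cvode.cache_efficient(1)              # required before nrnbbcore_write
h.stdinit()                             # run through setup and initialization
pc.nrnbbcore_write("coredat")           # writes the .dat files CoreNEURON reads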
zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

Re: How to build network model for coreNeuron 

Post by zyc »

Thank you so much! I have found the ringtest on github, but when I use ringtest.py to build the model, it says that 'hoc.HocObject' has no attribute 'nrnbbcore_write'. Could you please tell me what version of NEURON you use? Thank you.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

It is best to use the git or hg repository version, i.e.
git clone http://github.org/nrnhines/nrn
or
git clone http://bitbucket.org/nrnhines/nrn
(for interviews, replace the last nrn with iv)

If those are inconvenient because you don't have autotools to allow ./build.sh to run successfully, then you can use
http://www.neuron.yale.edu/ftp/neuron/v ... 510.tar.gz
which is ready to run ./configure
zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

Re: How to build network model for coreNeuron 

Post by zyc »

Thank you. I have tried to install the NEURON from github, but an error was raised while compiling: "./.libs/liboc.so: undefined reference to `__pgdbg_stub'". How can I deal with this?
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

That seems like a PGI compiler problem. I would try building with gcc.

I do not experience your error, but I have so far been unable to build using
PGI Community Edition Version 16.10 (released November 14, 2016)
and cannot seem to work around errors like
/bin/sed: can't read /usr/lib64/librdmacm.la: No such file or directory
libtool: error: '/usr/lib64/librdmacm.la' is not a valid libtool archive
on my Ubuntu 16.04 machine.
zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

Re: How to build network model for coreNeuron 

Post by zyc »

Thank you. I installed NEURON successfully. But there is another error when I run CoreNEURON with GPU support. The error is "call to cudaGetSymbolAddress returned error 13: Other". I use the following commands to compile CoreNEURON:

module purge
module load /home/zyc/pgi/modulefiles/pgi64/16.10 /home/zyc/pgi/modulefiles/openmpi/1.10.2/2016
export CC=mpicc
export CXX=mpicxx
cmake .. \
  -DCUDA_CUDART_LIBRARY=/home/zyc/pgi/linux86-64/2016/cuda/7.5/lib64 \
  -DCMAKE_C_FLAGS:STRING="-acc -Minfo=acc -Minline=size:200,levels:10 -O3 -DSWAP_ENDIAN_DISABLE_ASM -DDISABLE_HOC_EXP -Mcuda=7.5" \
  -DCMAKE_CXX_FLAGS:STRING="-acc -Minfo=acc -Minline=size:200,levels:10 -O3 -DSWAP_ENDIAN_DISABLE_ASM -DDISABLE_HOC_EXP -Mcuda=7.5" \
  -DCOMPILE_LIBRARY_TYPE=STATIC \
  -DCMAKE_INSTALL_PREFIX=/home/zyc/GPU-Neuron \
  -DCUDA_HOST_COMPILER=`which gcc` \
  -DCUDA_PROPAGATE_HOST_FLAGS=OFF \
  -DENABLE_SELECTIVE_GPU_PROFILING=ON \
  -DENABLE_OPENACC=ON
make -j 12
make install

There is already a CUDA installation on the machine (the path is /usr/local/cuda), and its version is 8.0. I can install CoreNEURON successfully, but I can't run it. Could you please tell me what's wrong?

I have also experienced the error "can't read /usr/lib64/librdmacm.la: No such file or directory". I solved it by installing librdmacm and adding it to the environment path. I hope this helps.
pkumbhar
Posts: 13
Joined: Fri Mar 11, 2016 5:57 am

Re: How to build network model for coreNeuron 

Post by pkumbhar »

Code: Select all

The error is "call to cudaGetSymbolAddress returned error 13:"
The above CUDA error code means "One or more of the parameters passed to the API call is not within an acceptable range of values."

I am wondering if there is some mix of CUDA versions and the PGI compiler (if you have multiple CUDA versions installed).

On our cluster I ran CoreNEURON on GPU. The complete script to build NEURON and CoreNEURON and run the ringtest on GPU is located at https://gist.github.com/pramodk/921be20 ... 09ba18af88. We will update the ringtest instructions for GPU.

The relevant CoreNEURON compilation command on our cluster is:

Code: Select all

module purge
module load pgi/pgi64/16.5 pgi/mpich/16.5
module load cuda/7.0
export CC=mpicc
export CXX=mpicxx

cd $SOURCE_DIR/ringtest
mkdir -p coreneuron_x86 && cd coreneuron_x86
cmake $BASE_DIR/sources/CoreNeuron \
  -DADDITIONAL_MECHPATH=`pwd`/mod \
  -DCMAKE_C_FLAGS:STRING="-O2" \
  -DCMAKE_CXX_FLAGS:STRING="-O2" \
  -DCOMPILE_LIBRARY_TYPE=STATIC \
  -DCUDA_HOST_COMPILER=`which gcc` \
  -DCUDA_PROPAGATE_HOST_FLAGS=OFF \
  -DENABLE_SELECTIVE_GPU_PROFILING=ON \
  -DENABLE_OPENACC=ON
make VERBOSE=1
(Note that some extra C/C++ flags in your cmake command are not necessary; we will update the instructions.)

We have the PGI 16.5 version of the compiler, which is configured to use CUDA 7.0:

Code: Select all


$ pgcc -show
...........
USECUDAROOT         =/gpfs/bbp.cscs.ch/apps/viz/tools/pgi/16.5/linux86-64/2016/cuda/7.0
...........
Also, the pgcc compiler shows CUDA 7.0 as the default target (and hence I don't have to specify any additional compiler flags):

Code: Select all

$ man pgcc
.......
 -ta=target
           cuda7.0 (default) cuda7.5
                Use the CUDA 7.0 (default) or 7.5 toolkit to build the GPU code.
But if I load the newer CUDA 7.5 module, then I need to add the following C/C++ flags to the CMake configure command:

Code: Select all

-DCMAKE_C_FLAGS:STRING="-O2 -ta=tesla:cuda7.5" -DCMAKE_CXX_FLAGS:STRING="-O2 -ta=tesla:cuda7.5"
If you still see the issue, let us know. We will check how the build process could be simplified to detect inconsistencies.
pkumbhar
Posts: 13
Joined: Fri Mar 11, 2016 5:57 am

Re: How to build network model for coreNeuron 

Post by pkumbhar »

Note that a step-by-step tutorial (draft version) on using CoreNEURON with NEURON can be found here: https://github.com/nrnhines/ringtest
zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

Re: How to build network model for coreNeuron 

Post by zyc »

Thank you. There are still some errors. Must I use the CUDA bundled with PGI? I don't have a modulefile for CUDA, so I added CUDA to the environment path. When I try to compile with the CUDA in PGI, some errors are raised, like "undefined reference to `cudaMemGetInfo'". I set the environment variables for CUDA as follows:

Code: Select all

export CUDA_ROOT_DIR=/home/zyc/pgi/linux86-64/2016/cuda/7.5
export PATH=$CUDA_ROOT_DIR/bin:$PATH
export PGI_OPTL_INCLUDE_DIRS="$CUDA_ROOT_DIR/include"
export PGI_OPTL_LIB_DIRS="$CUDA_ROOT_DIR/lib64"
export LD_LIBRARY_PATH=$CUDA_ROOT_DIR/lib64:$LD_LIBRARY_PATH
Then I try to compile CoreNEURON:

Code: Select all

module purge
module load /home/zyc/pgi/modulefiles/pgi64/16.10 /home/zyc/pgi/modulefiles/openmpi/1.10.2/2016
export CC=mpicc
export CXX=mpicxx
cmake .. \
  -DCUDA_CUDART_LIBRARY=/home/zyc/pgi/linux86-64/2016/cuda/7.5/lib64 \
  -DCMAKE_C_FLAGS:STRING="-O2 -ta=tesla:cuda7.5" \
  -DCMAKE_CXX_FLAGS:STRING="-O2 -ta=tesla:cuda7.5" \
  -DCOMPILE_LIBRARY_TYPE=STATIC \
  -DCMAKE_INSTALL_PREFIX=/home/zyc/GPU-Neuron \
  -DCUDA_HOST_COMPILER=`which gcc` \
  -DCUDA_PROPAGATE_HOST_FLAGS=OFF \
  -DENABLE_SELECTIVE_GPU_PROFILING=ON \
  -DENABLE_OPENACC=ON
make
Could you please tell me what's wrong with it? Thank you!
pkumbhar
Posts: 13
Joined: Fri Mar 11, 2016 5:57 am

Re: How to build network model for coreNeuron 

Post by pkumbhar »

> Must I use the cuda in pgi? I don't have the modulefile of cuda, so I add the cuda to environment path.

I mean the same CUDA Toolkit version. Note that the PGI compiler provides the CUDA runtime but not the CUDA Development Toolkit. So after setting all those environment variables, the CoreNEURON build might be using another CUDA Toolkit installation on your system (if nvcc is in PATH). Sorry for all this confusion.

In order to avoid these issues, we have added a new flag to disable the use of CUDA in CoreNEURON (CUDA is only required if you are using nrnRandom123 streams). Please pull the latest changes from the github repository and then try building as:

Code: Select all

cmake .. -DCMAKE_C_FLAGS:STRING="-O2" -DCMAKE_CXX_FLAGS:STRING="-O2" -DCOMPILE_LIBRARY_TYPE=STATIC -DENABLE_OPENACC=ON -DENABLE_CUDA_MODULES=OFF
I have updated the GPU build section of the tutorial.
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

With regard to building NEURON with the PGI compiler, I was successful with the following. Python had several problems that I worked around with the configure options
--with-nrnpython=dynamic --disable-rx3d --disable-pysetup
Another issue had to do with some PGI-distributed libtool *.la files that had incorrect paths; that could be worked around simply by removing them. For convenience I installed the PGI compilers in a place owned by me.

https://www.pgroup.com/products/community.htm
tar xf pgilinux-2016-1610-x86_64.tar.gz
./install
1 Single system install
Installation directory? [/opt/pgi] /home/hines/soft/pgi
Do you wish to update/create links in the 2016 directory? (y/n) y
Do you want to install Open MPI onto your system? (y/n) y
Do you want to enable NVIDIA GPU support in Open MPI? (y/n) y
Do you wish to generate license keys or configure license service? (y/n) n
Do you want the files in the install directory to be read-only? (y/n) y

export PGI=~/soft/pgi/linux86-64/2016
export PATH=$PGI/bin:$PGI/mpi/openmpi/bin:$PATH

sudo apt-get install librdmacm-dev
rm $PGI/mpi/openmpi/lib/*.la

mkdir ~/neuron/nrnpgi
cd ~/neuron/nrnpgi
../nrn/configure --prefix=`pwd` --with-paranrn \
--with-nrnpython=dynamic --disable-rx3d --disable-pysetup \
CC=pgcc CXX=pgc++

Note: --disable-pysetup avoids a failure in setup.py due to a bunch of
"pgcc-Error-Unknown switch" errors.
Note: --with-nrnpython=dynamic avoids the problem
/usr/bin/install: cannot stat '.libs/libnrnpython.lai': No such file or directo$
zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

Re: How to build network model for coreNeuron 

Post by zyc »

Thank you so much, I can successfully use CoreNEURON on my machine with your patient help. I have another question: could you please tell me how I can set the stimulus for a cell when I run a simulation with CoreNEURON?
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: How to build network model for coreNeuron 

Post by hines »

Presently, the entire model, including stimuli, is declared in the files written by NEURON. As there is no interpreter in CoreNEURON, any changes to model parameters can only be accomplished with C/C++ code added to the executable, or else by having NEURON write another set of data files. A future goal is to treat CoreNEURON as a NEURON library, which will then support changes to the model in the way NEURON has always done.
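
For example, a stimulus change is made in the NEURON script and the data files are rewritten. Below is a minimal sketch of that second approach; the cell, the IClamp parameters, and the "coredat" folder are hypothetical placeholders, not the ringtest code.

Code: Select all

# Sketch: define the stimulus in NEURON, then (re)write the CoreNEURON data files.
import os
from neuron import h
h.load_file("stdrun.hoc")

pc = h.ParallelContext()

soma = h.Section(name="soma")           # placeholder cell
soma.insert("hh")
gid = int(pc.id())
pc.set_gid2node(gid, pc.id())
nc = h.NetCon(soma(0.5)._ref_v, None, sec=soma)
pc.cell(gid, nc)

stim = h.IClamp(soma(0.5))              # the stimulus lives in the NEURON model
stim.delay = 1.0                        # ms
stim.dur = 0.5                          # ms
stim.amp = 0.3                          # nA

os.makedirs("coredat", exist_ok=True)
h.cvode.cache_efficient(1)
h.stdinit()
pc.nrnbbcore_write("coredat")           # rerun this after any parameter change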
zyc
Posts: 20
Joined: Sun Feb 19, 2017 9:15 pm

Re: How to build network model for coreNeuron 

Post by zyc »

Thank you!