
Segmentation fault on set_maxstep()

Posted: Mon Mar 21, 2011 12:20 pm
by grnavigator
Hi all.
We are trying to convert an older model to work in parallel. The network consists of 4 neurons connected to each other. The connections from the source cells to the target cells are established by
1) assigning a gid to each cell via ParallelContext.set_gid2node(gid, ParallelContext.id)
2) connecting the source cell to a NetCon whose target is nil
3) calling ParallelContext.cell(source-cell-gid, netcon, 1)
4) on the machine containing the target neuron, calling ParallelContext.gid_connect(source-cell-gid, ampa), where ampa is a POINT_PROCESS
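The four steps above can be sketched in hoc roughly as follows (the round-robin gid assignment, the `cells` list, the `soma` section, and the `ampa` synapse object are placeholder assumptions, not names from the actual model):

```hoc
objref pc, nc, nil, ampa
pc = new ParallelContext()

// 1) round-robin gid assignment: this host owns gid when gid % nhost == id
for gid = 0, 3 {
    if (gid % pc.nhost == pc.id) {
        pc.set_gid2node(gid, pc.id)
    }
}

// 2) + 3) on the host that owns the source cell, register it as the
// spike source for its gid via a NetCon whose target is nil
if (pc.gid_exists(srcgid)) {
    cells.o(srcgid).soma nc = new NetCon(&v(0.5), nil)
    pc.cell(srcgid, nc, 1)
}

// 4) on the host that owns the target synapse, connect by gid;
// gid_connect() returns the NetCon, whose delay and weight are then set
if (pc.gid_exists(tgtgid)) {
    nc = pc.gid_connect(srcgid, ampa)
    nc.delay = 1      // must not be smaller than dt for the fixed step method
    nc.weight = 0.001
}
```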

I would like to ask a few questions:

1) stdinit() does not exist in NEURON version 7.1, which we use. What is the function to initialize the simulation? (We are currently calling finitialize().)

2) The model crashes with a segmentation fault at ParallelContext[0].set_maxstep(10) when run in parallel. I have figured out that it can work if I remove all the gid_connect() calls, but nothing beyond that. Also, if I don't initialize the model with finitialize(), the call to set_maxstep() hangs forever. Could this have to do with the fact that the gid_connect() targets are POINT_PROCESSes? I have checked that the point processes actually exist as targets on the machine when they are gid_connect()ed.

I would appreciate any insight about why this is happening.
Thanks

Re: Segmentation fault on set_maxstep()

Posted: Mon Mar 21, 2011 9:40 pm
by ted
grnavigator wrote:We are trying to convert an older model to work in parallel. The network consists of 4 neurons connected to each other.
Doesn't sound like enough of a network to require or benefit from distributing over multiple processors with MPI, without also requiring multisplit to achieve balance. If you have multicore machines and the model cells are sufficiently complex, multithreaded multisplit simulation might be helpful, and could be done with much less effort on your part (unless something in your model is inherently not threadsafe, in which case you're stuck). Another alternative, if you need to execute many runs, is bulletin-board-style parallelization, which requires relatively minor changes to serial source code.

But in case none of these alternatives is possible . . .
stdinit() does not exist in neuron version 7.1 that we use
Really? There is no stdrun.hoc in nrn/lib/hoc? Or there is, but it doesn't define a proc stdinit()? Here's what it looks like in the most recent 7.2:

Code:

proc stdinit() {
        cvode_simgraph()
        realtime = 0
        setdt()
        init()
        initPlot()
}
and here it is from 5.6, which was current at the time of publication of The NEURON Book:

Code:

proc stdinit() {
        realtime = 0
        startsw()
        setdt()
        init()
        initPlot()
}
Of all these calls, the one that matters in the absence of a GUI is the call to init(), which was

Code:

proc init() {
  finitialize(v_init)
  fcurrent()
}
until recent drafts of v. 7.2, which omit the call to fcurrent().
The model crashes with a segmentation fault at ParallelContext[0].set_maxstep(10) when run in parallel. I have figured out that it can work if I remove all the gid_connect() calls
It can work without any connections between spike sources and spike targets?
if I don't initialize the model with finitialize(), the call to set_maxstep() hangs forever
Good, since the model would not have been initialized.
Could it have to do with the fact that the gid_connect() targets are POINT_PROCESSes?
Not if the source code for the targets contains NET_RECEIVE blocks.
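For reference, a minimal NMODL point process that can serve as a gid_connect() target looks something like the following (essentially a bare-bones exponential synapse; the name `ExpSynSketch` and all parameter values are illustrative, not from the model under discussion):

```
NEURON {
    POINT_PROCESS ExpSynSketch
    RANGE tau, e, i
    NONSPECIFIC_CURRENT i
}

PARAMETER {
    tau = 2 (ms)
    e = 0 (mV)
}

ASSIGNED {
    v (mV)
    i (nA)
}

STATE { g (uS) }

INITIAL { g = 0 }

BREAKPOINT {
    SOLVE state METHOD cnexp
    i = g*(v - e)
}

DERIVATIVE state { g' = -g/tau }

NET_RECEIVE(weight (uS)) {
    : each delivered spike event increments the conductance;
    : without a NET_RECEIVE block the point process cannot be
    : the target of a NetCon
    g = g + weight
}
```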
I have checked that the point processes actually exist as targets on the machine when they are gid_connect()ed
That's good.

On the off chance that you're running into a bug in 7.1, is there any way you could give the most recent alpha version of 7.2 a try? Or, better yet, the latest development code from the Mercurial repository?

Re: Segmentation fault on set_maxstep()

Posted: Tue Mar 22, 2011 12:55 pm
by grnavigator
Ted,
Thanks for your prompt response. It helped a lot to fix our problems. It turns out the problem was that some NetCon delays were smaller than dt, and set_maxstep() would hang. The model is indeed small now, but the plan is to expand it to hundreds of neurons on a large cluster. Thanks again for your help
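A small diagnostic along these lines can catch the problem before set_maxstep() is reached. This sketch assumes the model keeps its interprocessor NetCons in a hoc List named `ncs` (an assumption; the actual model may organize them differently):

```hoc
// warn about any NetCon whose delay is below dt, which would
// make the fixed step method's mindelay constraint unsatisfiable
proc check_delays() { local i
    for i = 0, ncs.count() - 1 {
        if (ncs.o(i).delay < dt) {
            printf("NetCon %d: delay %g < dt %g\n", i, ncs.o(i).delay, dt)
        }
    }
}
```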
- George

Re: Segmentation fault on set_maxstep()

Posted: Tue Mar 22, 2011 3:35 pm
by hines
stdinit() from {load_file("nrngui.hoc")} wraps the call to finitialize(v_init)
along with some housekeeping for plotting. You can get by with a direct
call to finitialize().
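Put together, a bare-bones GUI-less parallel run sequence of that era looks roughly like this (tstop value and v_init are placeholders, and network construction is elided):

```hoc
objref pc
pc = new ParallelContext()
// ... build cells, pc.set_gid2node(), pc.cell(), pc.gid_connect() calls ...

pc.set_maxstep(10)  // each host's max step is capped by the global min NetCon delay
finitialize(-65)    // direct initialization, in place of stdinit()
pc.psolve(500)      // integrate all hosts to tstop = 500 ms
pc.barrier()        // wait for every rank before tearing down
pc.done()
quit()
```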

If an interprocessor NetCon.delay is < dt, then you should get an error message like:
0 nrniv: mindelay is 0 (or less than dt for fixed step method)
when pc.psolve(tstop) is called. pc.set_maxstep(10) should not hang. Since it does hang with your model, can you send all the hoc, ses, and mod files to me in a zip file so I can reproduce the error (how many processors are you using, and what is your launch command?) and either fix the bug or provide a suitable error message? Send to michael dot hines at yale dot edu