Parallel network & parameter space exploration

General issues of interest both for network and
individual cell parallelization.

Moderator: hines

duboismathieu

Parallel network & parameter space exploration

Post by duboismathieu »

Dear neuron forum,

I would like to reproduce an experiment described in "Signal Propagation and Logic Gating in Networks of Integrate-and-Fire Neurons", Tim P. Vogels and L. F. Abbott, The Journal of Neuroscience, 2005, 25(46) (see http://www.jneurosci.org/cgi/content/ab ... 5/46/10786), which consists of observing self-sustained activity in a neural network for different values of the excitatory and inhibitory weights (AMPA_GMAX and GABA_GMAX). During an initial phase, the network is driven by excitatory spike trains (with weight = AMPA_GMAX).

Here is what I would like to do:

Code: Select all

create_net()
create_stim()
for (AMPA_GMAX = AMPA_GMAX_MIN; AMPA_GMAX < AMPA_GMAX_MAX; AMPA_GMAX += AMPA_GMAX_STEP) {
  set_stimulation_weight(AMPA_GMAX)
  for (GABA_GMAX = GABA_GMAX_MIN; GABA_GMAX < GABA_GMAX_MAX; GABA_GMAX += GABA_GMAX_STEP) {
    set_net_weights(AMPA_GMAX, GABA_GMAX)
    init()
    run()
    save_spiketrains()
  }
}
As one can see, this involves a large number of simulations, hence the idea of using a parallel network.

The same network was used in "Simulation of networks of spiking neurons: A review of tools and strategies", Brette et al. (see http://arxiv.org/abs/q-bio.NC/0611089 and http://senselab.med.yale.edu/ModelDb/Sh ... odel=83319), to compare different simulators and to show how to run parallel simulations with NEURON. Unfortunately, it uses only one value for each weight.

So I started to modify this code, but a few things are unclear to me. My code is very messy and complex, so I don't think it would be useful to post it.

Does anyone know how to do this properly? I realize it is a pretty general question, but I need a better understanding of how to parallelize networks.

Thanks in advance.
Mathieu

P.S.: I don't have MPI-related problems, since I can run the original version in parallel.
ted
Site Admin

Post by ted »

duboismathieu wrote: "there are a few things unclear to me"

What?
duboismathieu

Post by duboismathieu »

Hello Ted,

My problem is that after each run (one value of excitatory and inhibitory conductances) I want to record the spike trains of each neuron on the master.

I have tested it with the following simple code (only one loop, over the excitatory conductance; remember that it is adapted from NEURON_benchmark):

Code: Select all

// Create the cells, then connect them.
create_net()  // in common/net.hoc
// Randomized spike trains driving excitatory synapses.
create_stim(run_random_low_start_)   // in common/netstim.hoc

// A few last items for performance reports, e.g. set up spike time recording
finish_setup()  // in common/init.hoc

for (AMPA_GMAX=AMPA_GMAX_MIN; AMPA_GMAX<AMPA_GMAX_MAX; AMPA_GMAX+=AMPA_GMAX_STEP) {
	if (LOOP_DEBUG) printf("Host %d: entering main loop; AMPA_GMAX=%g\n", pnm.myid, AMPA_GMAX)

	set_net_weights(AMPA_GMAX, GABA_GMAX)
	set_stim_strength(AMPA_GMAX)

	// Parallel run to tstop.
	prun()  // in common/perfrun.hoc

	// Only the "master" cpu does this.
	if (pc.id == 0) {
		print "RunTime: ", runtime
		// Send requests to the workers and handle the results they send back.
		collect_results()  // in common/init.hoc
	}

	if (LOOP_DEBUG) printf("Host %d: end of main loop\n", pnm.myid)
}
// Up to this point, all CPUs have executed the same code,
// except for taking different branches depending on their value of pc.id,
// which ranges from 0 to pc.nhost-1.

// Only the master (pc.id == 0) returns from pc.runworker().
// All other CPUs ("workers") now wait for messages.
{pc.runworker()}

// Send all workers a QUIT message; those NEURON processes exit.
// The master waits until all worker output has been transferred to it.
{pc.done()}
The collect_results() procedure uses pnm.gatherspikes() to collect the spikes, but the master node never returns from this procedure. For instance, with two hosts (mpirun -np 2 ...) I get the following output:

Code: Select all

Host 1: end of main loop
Host 1: entering main loop; AMPA_GMAX=0.003
Host 1: set_net_weights(0.003, 0.067)
Host 1: 50 excitory synapses
Host 1: 0 inhibitory synapses
Host 1: end of set_net_weights(0.003, 0.067)
Host 1: set_stim_strength(0.003)
Host 1: end of set_stim_strength(0.003)
Host 0: collect_results()
The problem, as I see it, is that host 1 finishes the loop and jumps to the next iteration before being asked to gatherspikes().

So I think I should synchronize all hosts at the end of prun() and then, on the master, call gatherspikes().

If that's right, how do I do it?
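
(For the waiting itself, ParallelContext has pc.barrier(), which blocks each host until every host has reached the same point. Below is a minimal sketch of where such a call might go, assuming the loop above; as the reply below explains, though, synchronization alone does not make the bulletin-board-based collect_results() callable inside the loop.)

Code: Select all

// Sketch only: pc.barrier() blocks until all hosts arrive here.
prun()           // in common/perfrun.hoc
{pc.barrier()}   // synchronize all hosts at the end of the run
if (pc.id == 0) {
  // master-only bookkeeping could go here
}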

Thanks again,
Mathieu

P.S.: I don't think it is related, but I am using MPI on a single-processor machine for testing purposes.

P.S.2: I have cleaned up "my code" so it is closer to the original NEURON_benchmark.

P.S.3: In case it is not evident, I am a beginner in parallel programming (though I have read NEURON's manual and several articles...).
hines
Site Admin

Post by hines »

collect_results() cannot be called before pc.runworker(), because it makes use
of the bulletin board style of communication. Unfortunately, one cannot switch between
the bulletin board style and the "MPI communication between co-equal processes" style. In your case, the {run; save to file} loop is best carried out by staying in the MPI communication style and serializing the file-writing part. See the bottom of page 16 of
http://www.neuron.yale.edu/ftp/ted/neur ... _press.pdf
If you have a recent version of NEURON, you can simplify by using the ParallelNetManager
serialize idiom, as in

Code: Select all

proc spikeout() { local i  localobj f
  f = new File($s1)
  // each host executes the loop body once, in rank order,
  // so the file is written serially
  for pnm.serialize() {
    // the master creates the file; every other host appends to it
    if (pc.id == 0) { f.wopen() } else { f.aopen() }
    for i = 0, pnm.spikevec.size() - 1 {
      f.printf("%g %d\n", pnm.spikevec.x[i], pnm.idvec.x[i])
    }
    f.close()  // close (and flush) before the next host appends
  }
  // empty the recording vectors so the next run starts fresh
  pnm.spikevec.resize(0)  pnm.idvec.resize(0)
}
You will need to give a different file name arg for each call to spikeout() in your loop, so as not to overwrite the file created by the previous iteration.
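
For example, the file name can encode the current parameter values; a minimal sketch using hoc's sprint() (the name pattern and loop bounds are illustrative):

Code: Select all

strdef fname
for (AMPA_GMAX = AMPA_GMAX_MIN; AMPA_GMAX < AMPA_GMAX_MAX; AMPA_GMAX += AMPA_GMAX_STEP) {
  set_net_weights(AMPA_GMAX, GABA_GMAX)
  set_stim_strength(AMPA_GMAX)
  prun()  // in common/perfrun.hoc
  // build a per-iteration file name, e.g. "spikes_a0.003_g0.067.dat"
  sprint(fname, "spikes_a%g_g%g.dat", AMPA_GMAX, GABA_GMAX)
  spikeout(fname)
}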
duboismathieu

Post by duboismathieu »

Hi,

As usual, the forum is a great help! And I understand the topic of parallel networks a bit better.

Thanks a lot.

Maybe I could ask another question. I would also like to record the statistics from each host's CVode and ParallelContext (more or less as in the original print_spike_stat_info()):

Code: Select all

objref spstat
spstat = new Vector()

// CVode stats
cvode.spike_stat(spstat)

// append pc stats
i = spstat.size()
spstat.resize(i + 4)
spstat.x[i] = pnm.pc.spike_statistics(&spstat.x[i+1], &spstat.x[i+2], &spstat.x[i+3])
I have made the following, loosely modelled after the procedure you provided

Code: Select all

objref f, spstat, spstat_

// filename is assumed to be a strdef set earlier
for pnm.serialize() {
  spstat = new Vector()

  // CVode stats
  cvode.spike_stat(spstat)

  // append pc stats
  i = spstat.size()
  spstat.resize(i + 4)
  spstat.x[i] = pnm.pc.spike_statistics(&spstat.x[i+1], &spstat.x[i+2], &spstat.x[i+3])

  f = new File(filename)
  if (pnm.pc.id == 0) {
    // the master writes its own stats to a fresh file
    f.wopen()
    spstat.printf(f)
  } else {
    // each worker re-reads the file and adds its own stats to the totals
    f.ropen()
    spstat_ = new Vector()
    spstat_.scanf(f)
    spstat.add(spstat_)
    f.close()

    // then rewrites the file with the updated totals
    f.wopen()
    spstat.printf(f)
  }
  f.close()
}
While this solution seems to work, it is ugly.

So I was wondering whether there is a better way to collect the spike statistics from each host and sum them in a vector on the master.

Thanks in advance,
Mathieu
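
One possible alternative, sketched below under the assumption that the NEURON version in use provides ParallelContext.allreduce() (with type 1 it sums a Vector element-wise across all hosts, staying entirely in the MPI communication style); the procedure name and file handling are illustrative:

Code: Select all

objref spstat
proc gather_spike_stat() { local i  localobj f
  spstat = new Vector()
  cvode.spike_stat(spstat)  // this host's CVode statistics
  // append this host's pc statistics
  i = spstat.size()
  spstat.resize(i + 4)
  spstat.x[i] = pnm.pc.spike_statistics(&spstat.x[i+1], &spstat.x[i+2], &spstat.x[i+3])
  // element-wise sum across all hosts (type 1 = sum); every host,
  // including the master, ends up holding the summed vector
  pnm.pc.allreduce(spstat, 1)
  if (pnm.pc.id == 0) {  // only the master writes the totals
    f = new File($s1)
    f.wopen()
    spstat.printf(f)
    f.close()
  }
}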