ParallelPython and Neuron

When Python is the interpreter, what is a good
design for the interface to the basic NEURON
concepts?

Moderator: hines

arb
Posts: 19
Joined: Mon Jul 02, 2007 6:18 am
Location: Humboldt-University, Berlin

ParallelPython and Neuron

Post by arb »

Does anyone have experience with ParallelPython and NEURON?

I'm using the Python interface to NEURON heavily, and now I want to start some huge calculations for which I need parallel computing ...
For Python there is a wonderful package that provides parallel computing in a very simple way (http://www.parallelpython.com) ...

Inside the parallel job function one needs to import neuron on each processor.
The script looks like this:

Code: Select all

import pp

def test():
    # each worker process must import neuron itself
    import neuron
    h = neuron.h
    
    h.load_file("stdrun.hoc")
    
    h("create soma")
    h("access soma")
    
    h.init()
    h.run()
    
    # copy the final membrane potential into a hoc variable
    h("voltage = v")
    return h.voltage

ppservers = ("192.168.1.2",)
job_server = pp.Server(ppservers=ppservers)

results = []
for i in range(4):
    results.append(job_server.submit(test))

for f in results:
    print f()

job_server.print_stats()
And the complete output is:

Code: Select all

-65.0
-65.0
-65.0
-65.0
Job execution statistics:
 job count | % of all jobs | job time sum | time per job | job server
         4 |        100.00 |       0.3083 |     0.077083 | local
Time elapsed since server creation 0.176651000977
The parallel simulation seems to work.

The problem is that when neuron is loaded, its printed output goes somewhere else: none of the usual NEURON output appears (e.g. the version number) ...

I want to see error messages.

When I make my simulation more complex, e.g. load a cell morphology and mechanisms, something does not work, but I cannot find the mistake, because NEURON does not show any error messages.

I would like to redirect the output to a file. Is this possible when using import neuron?
Or any better ideas?
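
Would something like this work: redirecting the file descriptors before importing neuron? (Untested sketch; as far as I understand, NEURON prints to the C-level stdout, so redirecting sys.stdout in Python alone would not catch it.)

Code: Select all

def test():
    import os
    # send this worker's C-level stdout and stderr to a log
    # file before neuron is imported
    fd = os.open("neuron%d.log" % os.getpid(),
                 os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    os.dup2(fd, 1)   # fd 1 = stdout
    os.dup2(fd, 2)   # fd 2 = stderr

    import neuron
    # ... rest of the job as above ...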

Thank you very much,
Armin
arb
Posts: 19
Joined: Mon Jul 02, 2007 6:18 am
Location: Humboldt-University, Berlin

Re: ParallelPython and Neuron

Post by arb »

On the Parallel Python webpage they suggest doing something like this for external programs, to solve the problem with output from different processes:

Code: Select all

def test():
    from subprocess import Popen, PIPE
    p = Popen(['ls', '-l'], stdout=PIPE, stderr=PIPE)
    return p.stdout.read()

This indeed gives me the list of files in my current directory...

Can one somehow tell NEURON to do the same?

(see http://www.parallelpython.com/component ... opic,186.0)
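
I.e. could one wrap the whole NEURON run in the same way? Something like this, maybe (untested; neuron_job.py would be a separate script that does the import neuron and the simulation):

Code: Select all

def test():
    from subprocess import Popen, PIPE, STDOUT
    # run the simulation in a child python process and collect
    # everything it prints, including NEURON's own output
    p = Popen(['python', 'neuron_job.py'], stdout=PIPE, stderr=STDOUT)
    return p.stdout.read()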

Thank you,
Armin
emuller
Posts: 15
Joined: Thu Mar 02, 2006 5:26 am
Location: Lausanne

Re: ParallelPython and Neuron

Post by emuller »

You might also try the interactive distributed features of IPython (formerly IPython1, http://ipython.scipy.org/moin/).

I have looked at Parallel Python before, but I find IPython1 more powerful.

Also, it might be interesting to know that IPython supports mpiexec to execute Python on each slave node, so that an MPI-enabled neuron module can communicate with the other nodes at simulation time via MPI. For this to work, importing the neuron module in Python (as opposed to nrniv -python) must be MPI-aware. A patch was required to enable this feature, and it will be available in the svn trunk shortly.
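
Once that patch is in, a minimal MPI-aware session could look like this (just a sketch, started with e.g. mpiexec -n 4 python test_mpi.py):

Code: Select all

import neuron
h = neuron.h

# ParallelContext exposes the MPI rank and size of this process
pc = h.ParallelContext()
print "I am node %d of %d" % (int(pc.id()), int(pc.nhost()))

pc.done()   # clean MPI shutdown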
arb
Posts: 19
Joined: Mon Jul 02, 2007 6:18 am
Location: Humboldt-University, Berlin

Re: ParallelPython and Neuron

Post by arb »

Thank you -- a system based on MPI seems to be the better way to go in the future. I will give IPython a try.

However, I have been able to use Parallel Python with NEURON. The nice thing is that you can use it on every computer where you can install normal NEURON and Python (even over the internet); no MPI is required.
The trick is to use the Parallel Python job function to execute another Python script: all parameters are saved to a job file, and the sub-Python script loads neuron and does the actual work. All NEURON output is saved to a temporary file, and so on.
This works very nicely and is very simple to install and to implement.

The job function would look something like this:

Code: Select all

def fitsweep(dir, job, arguments):

    from subprocess import Popen
    import os
    import pickle
    
    os.chdir(dir)
       
    datafile = "./tmp/job%d.dat"%job
    outfile = "./tmp/out%d.dat"%job
    
    f = open(datafile, mode = 'wb')
    pickle.dump( [job, arguments], f, -1)
    f.close()
    
    #################################
    print "Starting job %d"%job

    fout = open(outfile, mode = 'w')
    p = Popen(['python', 'fitsweep.py', datafile], stdout=fout, stderr=fout)
    p.wait()
    fout.close()
    
    fout = open(outfile, mode = 'r')
    print fout.read()
    fout.close() 
    
    # fitsweep.py has written its results back into the job
    # file; read them in again
    f = open(datafile, mode = 'rb')
    data = pickle.load(f)
    f.close()
    
    # delete the communication files
    os.remove(datafile)
    os.remove(outfile)

    return data
And for the sub-Python script:

Code: Select all

# load the parameters
import sys
import pickle

datafile = sys.argv[1]

f = open(datafile, 'rb')
[job, arguments] = pickle.load(f)
[sweep, sampint, tstop, current, voltage, jobtrials] = arguments
f.close()

import numpy as n
import neuron
h = neuron.h
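
# ... build the cell, insert mechanisms, run the simulation ...

# finally, write the results back into the job file, so the
# fitsweep() job function above can read them with pickle.load
# (`data` here is just a placeholder for whatever the fit produces)
data = [job, arguments]
f = open(datafile, 'wb')
pickle.dump(data, f, -1)
f.close()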
Works fine.
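
Submitting these jobs then looks just like in the first example, e.g. (a sketch; nsweeps and arguments are placeholders):

Code: Select all

job_server = pp.Server(ppservers=ppservers)
jobs = [job_server.submit(fitsweep, (os.getcwd(), i, arguments))
        for i in range(nsweeps)]
results = [f() for f in jobs]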
Armin