This exercise could be done at several levels of complexity. Most challenging and rewarding, but also most time consuming, would be for you to develop all code from scratch. But learning by example also has its virtues, so we provide complete serial and parallel implementations that you can examine and work with.
The following instructions assume that you are using a Mac or PC with at least NEURON 7.1 under UNIX/Linux, or NEURON 7.2 under macOS or MSWin. Under UNIX, Linux, or macOS, be sure MPICH2 or Open MPI is installed; under MSWin, be sure Microsoft MPI is installed. If you are using a workstation cluster or parallel supercomputer, some details will differ, so ask the system administrator how to get your NEURON source code (.py, .ses, and .mod files) to where the hosts can use it, how to compile .mod files, and what commands are used to manage simulations.
The next step is to create a program that performs serial execution of multiple simulations, i.e. executes them one after another. In addition to generating simulation results, it is useful for this program to report a measure of computational performance. For this example the measure will be the total time required to run all simulations and save results. The simulation results will be needed to verify that the parallel implementation is working properly. The performance measure will help us gauge the success of our efforts, and indicate whether we should look for additional ways to shorten run times.
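If you want to try writing the serial batch yourself before looking at the provided code, here is a minimal sketch of the idea. It is not the provided initbatser.py: it assumes a one-compartment HH cell driven by a current clamp, and a helper fi() (a name chosen here only for illustration) that runs one simulation and returns the firing frequency.

# Minimal sketch of a timed serial batch (NOT the provided initbatser.py).
# Assumptions: one-compartment HH cell, IClamp stimulus, and a helper fi()
# that runs one simulation and returns the spike frequency.
import time
from neuron import h
h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
soma.L = soma.diam = 20
soma.insert("hh")

stim = h.IClamp(soma(0.5))
stim.delay = 5
stim.dur = 1000

nc = h.NetCon(soma(0.5)._ref_v, None, sec=soma)  # spike detector
nc.threshold = -20
spike_times = h.Vector()
nc.record(spike_times)

def fi(amp):
    """Run one simulation; return firing frequency (spikes/s) for stimulus amp (nA)."""
    stim.amp = amp
    spike_times.resize(0)
    h.finitialize(-65)
    h.continuerun(stim.delay + stim.dur)
    return spike_times.size() / (stim.dur / 1000.0)

amps = [0.1 + 0.05 * i for i in range(8)]   # stimulus amplitudes to sweep
t0 = time.time()
results = [(amp, fi(amp)) for amp in amps]  # one run after another
with open("fi.dat", "w") as f:
    for amp, freq in results:
        f.write("%g %g\n" % (amp, freq))
print("total run time %g s" % (time.time() - t0))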
The final step is to create a parallel implementation of the batch program. This should be tested by comparing its simulation results and performance against those of the serial batch program in order to detect errors or performance deficiencies that should be corrected.
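One standard NEURON idiom for parallelizing this kind of "embarrassingly parallel" batch is the ParallelContext bulletin board: the master posts one job per simulation, and worker processes pull jobs off the board and execute them. The sketch below illustrates that idiom, reusing the model setup and fi() helper from the serial sketch above; the provided initbatpar.py may be organized differently.

# Minimal sketch of a bulletin-board parallel batch (NOT the provided initbatpar.py).
# Assumes the model setup and fi() from the serial sketch above are executed on
# every rank before pc.runworker() is called.
import time
from neuron import h

pc = h.ParallelContext()

def fi_job(amp):
    # return the amplitude along with the result, since jobs may finish out of order
    return (amp, fi(amp))

amps = [0.1 + 0.05 * i for i in range(8)]

pc.runworker()          # ranks > 0 become workers here; only the master continues

t0 = time.time()
for amp in amps:
    pc.submit(fi_job, amp)          # post one job per amplitude to the bulletin board

results = []
while pc.working():                 # blocks until some submitted job has finished
    results.append(pc.pyret())      # the (amp, freq) tuple returned by fi_job

results.sort()                      # restore ascending order of amplitude
with open("fi.dat", "w") as f:
    for amp, freq in results:
        f.write("%g %g\n" % (amp, freq))
print("total run time %g s" % (time.time() - t0))

pc.done()                           # release the workers
h.quit()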
In accordance with this development strategy, we provide the following three programs. For each program there is a brief description, plus one or more examples of usage. There are also links to each program's source code and code walkthroughs, which may be helpful in completing one of this exercise's assignments.
Finally, there is a fourth program for plotting results that have been saved to a file, but more about that later.
To run a single simulation with initonerun.py:
python -i initonerun.py
then, at the interpreter prompt, call
onerun(x)
where x specifies the stimulus, e.g.
onerun(0.3)
To run the serial batch with initbatser.py:
python initbatser.py
or
python -i initbatser.py
to see the graph.
To run initbatpar.py without MPI:
python initbatpar.py
To run it under MPI with N processes:
mpiexec -n N python initbatpar.py
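A quick way to confirm how many processes a given launch actually provides is to ask the ParallelContext for its rank and host count with a tiny test script, sketched below (the name check_ranks.py is just illustrative):

# check_ranks.py (illustrative name) -- prints this process's rank and the total
# number of processes. Without MPI (plain "python ..."), nhost is 1 and id is 0.
# Depending on NEURON version and platform, MPI may have to be enabled explicitly
# (e.g. by launching through nrniv -mpi -python) for nhost to reflect mpiexec's -n.
from neuron import h

pc = h.ParallelContext()
print("I am %d of %d" % (int(pc.id()), int(pc.nhost())))
pc.runworker()   # master falls through; workers wait here until done() is called
pc.done()
h.quit()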
2. Compare results produced by serial and parallel simulations, to verify that parallelization hasn't broken anything. For example:
python initbatser.py
mv fi.dat fiser.dat
python initbatpar.py
mv fi.dat finompi.dat
mpiexec -n 4 python initbatpar.py
mv fi.dat fimpi4.dat
cmp fiser.dat finompi.dat
cmp fiser.dat fimpi4.dat
Instead of cmp, MSWin users will have to use fc in a "Command Prompt" (cmd) window.
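As a cross-platform alternative to cmp/fc, a few lines of Python can do the same byte-by-byte comparison (the script name compare_dat.py is just illustrative):

# compare_dat.py (illustrative name) -- byte-by-byte comparison of two files
# usage: python compare_dat.py fiser.dat fimpi4.dat
import filecmp
import sys

a, b = sys.argv[1], sys.argv[2]
print("%s and %s %s" % (a, b, "match" if filecmp.cmp(a, b, shallow=False) else "differ"))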
3. Evaluate and compare performance of the serial and parallel programs.
Here are results of some tests I ran:
NEURON 7.5 (266b5a0) 2017-05-22 under Windows Subsystem for Linux on a quad core desktop.

initbatser               6.16 s
initbatpar without MPI   6.03 s
initbatpar with MPI:
    n    run time (s)    speedup
    1    6.03            1 (performance baseline)
    2    3.41            1.77
    3    2.62            2.30
    4    2.05            2.94
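The speedup column is simply the n = 1 run time divided by the run time at each n, which a few lines of Python reproduce from the timings above:

# reproduce the speedup column from the timings listed above
baseline = 6.03      # initbatpar with MPI, n = 1
for n, t in [(1, 6.03), (2, 3.41), (3, 2.62), (4, 2.05)]:
    print("n = %d   %.2f s   speedup %.2f" % (n, t, baseline / t))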
4. Make a copy of initbatpar.py and edit it, inserting print calls that reveal the sequence of execution, i.e. which processor is doing what. These statements should report whatever you think would help you understand program flow (one possible form of such print calls is sketched at the end of this item).
Here are some suggestions for things you might want to report: which host is executing, which run it is working on, and the result it computes (e.g. fi).
After inserting the print calls, change NRUNS to 3 or 4, then run a serial simulation and see what happens.
Next run parallel simulations with -n 1, 2, 3 or 4 and see what happens. Do the monitor reports make sense?
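For example, print calls along the following lines could be added to the job function and the master's collection loop. All of the names here refer to the sketches earlier on this page and are only illustrative; the actual names in initbatpar.py may differ.

# illustrative instrumentation of the bulletin-board sketch shown earlier
def fi_job(amp):
    print("rank %d starting amp = %g" % (int(pc.id()), amp))
    freq = fi(amp)
    print("rank %d finished amp = %g, freq = %g" % (int(pc.id()), amp, freq))
    return (amp, freq)

# ... and in the master's result-collection loop:
while pc.working():
    amp, freq = pc.pyret()
    print("master received amp = %g, freq = %g" % (amp, freq))
    results.append((amp, freq))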
5. Examine an f-i curve from data saved to one of the dat files.
python -i initplotfi.py
then use its file browser to select one of the dat files.
Examine initplotfi.py to see how it takes advantage of procs that are built into NEURON's standard run library (UNIX/Linux users see nrn/share/nrn/lib/hoc/stdlib.hoc, MSWin users see c:\nrn\lib\hoc\stdlib.hoc).
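If you would rather plot the data yourself, a short script along these lines draws an f-i curve in a NEURON Graph. It assumes each line of the dat file holds one current/frequency pair, which may not match the exact format that initplotfi.py expects.

# plot_fi.py (illustrative name) -- plot an f-i curve from "current frequency" pairs
from neuron import h, gui

xvec = h.Vector()   # stimulus amplitudes
yvec = h.Vector()   # firing frequencies
with open("fiser.dat") as f:
    for line in f:
        fields = line.split()
        if len(fields) == 2:
            xvec.append(float(fields[0]))
            yvec.append(float(fields[1]))

g = h.Graph()
yvec.line(g, xvec)              # plot frequency as a function of current
g.exec_menu("View = plot")      # rescale axes to the data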