Memory problem

When Python is the interpreter, what is a good design for the interface to the basic NEURON concepts?

Moderator: hines

lb5999
Posts: 56
Joined: Mon Oct 11, 2010 9:12 am

Memory problem

Post by lb5999 »

Hi NEURON Forum,

I am simulating a model of ~1500 cells (each with 20-30 variables) for 90 s, recording the membrane potential and calcium of each cell into Vectors at a time step of 0.1 ms.

The simulations run for ~1 hour and then the Python process gets killed. In top, %MEM climbs to about 99% before the process is terminated. I am using the following code to set up the recording and convert the recorded Vectors for each cell into arrays:

Code:

from neuron import h
import numpy as np

vecvm = []  # membrane potential Vectors, one per cell
vecca = []  # calcium Vectors, one per cell
for i in range(ncells):
    vecvm.append(h.Vector())
    vecca.append(h.Vector())
    vecvm[i].record(cell[i].soma(0.5)._ref_v)
    vecca[i].record(cell[i].soma(0.5)._ref_ca)
...
h.run()
...
for i in range(ncells):
    vecvm[i] = np.array(vecvm[i])
where each cell[i] is an instance of a cell template.

All we are interested in (for the time being) are the calcium and voltage time series for each cell. Any idea why this is happening, and how we could work around it? We were hoping to at least double our simulation time, but it looks like we are already at our memory limit.

Cheers,
Linford
ramcdougal
Posts: 267
Joined: Fri Nov 28, 2008 3:38 pm
Location: Yale School of Public Health

Re: Memory problem

Post by ramcdougal »

At

(90 s) * (1000 ms/s) * (10 timesteps/ms) * (2 values/timestep/cell) * (1500 cells) * (8 bytes/value),

you're looking at 21.6 GB of data. If you're not preallocating with Vector.buffer_size, the story gets worse: every time a Vector fills up, it allocates a new buffer (twice the size of the previous one) and copies the data over, so during each resize event you briefly need roughly 3x the memory (the old buffer plus the new one that is twice its size).
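
For what it's worth, here is a minimal sketch of that preallocation for the recording setup in the first post (the 90 s duration and 0.1 ms sample interval are taken from there; vecvm and vecca are the lists built above):

Code:

# Reserve each recording Vector's final size up front so it never has to
# double-and-copy while the simulation runs.
nsamples = int(90 * 1000.0 / 0.1) + 1   # 90 s at 0.1 ms per sample, plus t = 0
for vec in vecvm + vecca:
    vec.buffer_size(nsamples)

Note that this only avoids the transient 3x spike during resizes; the 21.6 GB of recorded data still has to fit somewhere.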

What can we do then?

Simple: do your integration in multiple phases. Every 10 simulated seconds (or so), dump the recorded values to disk, discard the in-memory copies (by resizing the Vectors to 0), call h.continuerun(next_time_point), and repeat. This way you can run for as long as you like with no memory issues.
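
A rough sketch of that phased loop, reusing vecvm, vecca, and ncells from the first post (the 10 s segment length, the .npy file names, and np.save are placeholder choices, not anything NEURON requires):

Code:

import numpy as np
from neuron import h

segment_ms = 10 * 1000.0   # dump to disk every 10 simulated seconds
tstop_ms = 90 * 1000.0     # total simulated time

h.stdinit()                # standard-run initialization (instead of h.run())
t_target, seg = 0.0, 0
while t_target < tstop_ms:
    t_target = min(t_target + segment_ms, tstop_ms)
    h.continuerun(t_target)    # integrate up to the next dump point
    for i in range(ncells):
        np.save('vm_cell%d_seg%d.npy' % (i, seg), np.array(vecvm[i]))
        np.save('ca_cell%d_seg%d.npy' % (i, seg), np.array(vecca[i]))
        vecvm[i].resize(0)     # discard the in-memory copy before the next phase
        vecca[i].resize(0)
    seg += 1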

If you want unified vectors for analysis, load the data later on a per-vector basis. Each one of them will be small enough to easily fit into memory.
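
For example, to rebuild one cell's full voltage trace from the per-segment files written by the sketch above (assuming the same placeholder file names):

Code:

import numpy as np

nseg = 9   # 90 s of simulation written as 10 s segments
vm_cell0 = np.concatenate([np.load('vm_cell0_seg%d.npy' % s) for s in range(nseg)])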
lb5999
Posts: 56
Joined: Mon Oct 11, 2010 9:12 am

Re: Memory problem

Post by lb5999 »

Great, thanks ramcdougal!
ted
Site Admin
Posts: 6289
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine

Re: Memory problem

Post by ted »

Do yourself a big favor by testing whatever you implement "on the small" before you try it on your big problem. A simulation of 2 or 3 ms total model time, consisting of 2 or 3 shorter segments, should be sufficient.

Watch out for duplication between the last datum saved in segment i and the first datum saved in segment i+1 (most easily detected by checking the vector that captures time). If it happens, simply discard each vector's final element before saving to the output file(s).

Resizing the vectors to 0 at the end of each simulation segment is unnecessary, and wastes time (forces resizing each vector multiple times in each segment to accommodate accumulating data). Also, it may not even work properly because NEURON's standard run system maintains vector recording counters that keep track of where new data are being written. Instead, just reset the counters by calling h.frecord_init() before starting each new segment, and the new data will overwrite previous data in situ without requiring any memory reallocation.
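
Putting the two replies together, here is a sketch of the segmented loop that reuses vecvm, vecca, and ncells from the first post, uses h.frecord_init() instead of resizing, and records a time vector to make the boundary duplication easy to check (the 10 s segment length and .npy file names are the same placeholder choices as above):

Code:

import numpy as np
from neuron import h

tvec = h.Vector()
tvec.record(h._ref_t)       # record time so segment boundaries are easy to inspect

segment_ms = 10 * 1000.0
tstop_ms = 90 * 1000.0

h.stdinit()                 # standard-run initialization
t_target, seg = 0.0, 0
while t_target < tstop_ms:
    t_target = min(t_target + segment_ms, tstop_ms)
    h.continuerun(t_target)
    # Drop the final sample of every segment except the last one, since the
    # same time point reappears as the first sample of the next segment.
    last = None if t_target >= tstop_ms else -1
    np.save('t_seg%d.npy' % seg, np.array(tvec)[:last])
    for i in range(ncells):
        np.save('vm_cell%d_seg%d.npy' % (i, seg), np.array(vecvm[i])[:last])
        np.save('ca_cell%d_seg%d.npy' % (i, seg), np.array(vecca[i])[:last])
    h.frecord_init()        # reset the recording counters; no resizing needed
    seg += 1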