Making a timer dummy cell for benchmarking


Post by bcumming »

I would like to make a special "artificial cell" for benchmarking the spike/event delivery subsystems in simulators.

The artificial cell takes two parameters:
1. spike frequency: used to generate a Poisson sequence of spikes (a rough plain-Python sketch follows this list)
2. integration speed: describes how quickly a cell is updated relative to real time, i.e. a value of 1 means one cell operates in real time, and a value of 10 means that one cell can be integrated 10 times faster than real time.
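
For reference, the first parameter just amounts to drawing exponentially distributed inter-spike intervals. Something like this is what I have in mind (only a sketch; the function name and seed are arbitrary, not tied to any simulator API):

Code: Select all

import random

def poisson_spike_times(freq_hz, tstop_ms, seed=42):
    """Poisson spike train: independent, exponentially distributed ISIs."""
    rng = random.Random(seed)
    mean_isi_ms = 1000.0 / freq_hz   # mean inter-spike interval in ms
    t, times = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_isi_ms)
        if t >= tstop_ms:
            return times
        times.append(t)

# e.g. roughly 100 spikes over one second at 100 Hz
print(len(poisson_spike_times(100.0, 1000.0)))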

This type of cell would make it easy to write benchmarks that investigate strong/weak scaling of
models with user-supplied spiking rates and cell overheads.

How would you go about implementing the second feature, i.e. setting the integration speed?

I can imagine putting something like the following in a VERBATIM block in NMODL:

Code: Select all

// get the starting time (needs #include <time.h> in a top-level VERBATIM block)
clock_t start = clock();

// generate the spikes in here, so that the time taken to generate
// them is counted in the elapsed time interval

// convert dt from ms to clock ticks, scaled by the integration speed
clock_t ticks = CLOCKS_PER_SEC * dt * 1e-3 / rate;

// busy-wait until that number of ticks has elapsed
while (clock() - start < ticks) ;
This approach would work best if dt is a full min_delay, to reduce the timer overheads.

Re: Making a timer dummy cell for benchmarking

Post by hines »

ARTIFICIAL_CELL does not fit well with the concept of a ratio between integration time and real time, since there is no integration per (fixed) time step, only computation on each incoming event. At the no-incoming-event extreme there is no computation at all. However, if you replace ARTIFICIAL_CELL with POINT_PROCESS and insert them all into a single compartment, e.g. acell_home, then you can put your timer into any of a number of blocks that are called on every time step, e.g. BEFORE STEP. Note that with no events, the total time per POINT_PROCESS would be governed by your timer plus the overhead of acell_home; the latter should not be much.
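
On the Python side, the arrangement would look roughly like this (only a sketch; h.BenchTimer stands in for whatever name you give the POINT_PROCESS, and 100 is an arbitrary instance count):

Code: Select all

from neuron import h

# a single dummy compartment hosts every benchmark point process
acell_home = h.Section(name='acell_home')

# keep the references alive; h.BenchTimer is a placeholder for the
# compiled name of the POINT_PROCESS
timers = [h.BenchTimer(acell_home(0.5)) for _ in range(100)]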

Re: Making a timer dummy cell for benchmarking

Post by bcumming »

Thanks for the feedback.

Because there are no numeric stability concerns, could we force the cell to take large dt steps, which would minimise the impact of overheads that are outside the wait call?
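
Something like this is what I have in mind on the Python side (only a sketch; it assumes fixed-step integration and that every connection delay is at least 10 ms, so events are still delivered on time):

Code: Select all

from neuron import h
h.load_file('stdrun.hoc')

# assumption: all NetCon delays are >= 10 ms, so one 10 ms fixed step
# per min_delay still delivers every event on time
h.dt = 10.0
h.steps_per_ms = 1.0 / h.dt   # keep the stdrun machinery consistent with dt

h.finitialize(-65)
h.continuerun(1000)           # tstop in ms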

Re: Making a timer dummy cell for benchmarking

Post by bcumming »

Hi Michael. Following your advice, I made the following point process:

Code: Select all

NEURON {
    POINT_PROCESS bench
    RANGE first, frequency, rate
}

PARAMETER {
    frequency = 100 (Hz)
    rate = 1      : 1-> realtime, 0.1 -> 10x faster than realtime
    first = 0 (ms)
}

VERBATIM
#include <time.h>
ENDVERBATIM

ASSIGNED {
    spike_interval (ms)   : time between generated spikes
}

STATE {}

INITIAL {
    spike_interval = 1000/frequency
    net_send(first, 42)
}

BREAKPOINT {
    VERBATIM
        struct timespec s__, e__;
        clock_gettime(CLOCK_MONOTONIC_RAW, &s__);

        /* number of nanoseconds to wait */
        /* factor of 1e6 converts ms to ns */
        long long interval_ns__ = dt*rate*1e6;

        clock_gettime(CLOCK_MONOTONIC_RAW, &e__);
        long long elapsed_ns__ = (e__.tv_sec - s__.tv_sec) * 1000000000 + (e__.tv_nsec - s__.tv_nsec);

        /* busy wait until elapsed time in ns has passed */
        while (elapsed_ns__<interval_ns__) {
            clock_gettime(CLOCK_MONOTONIC_RAW, &e__);
            elapsed_ns__ = (e__.tv_sec - s__.tv_sec) * 1000000000 + (e__.tv_nsec - s__.tv_nsec);
        }
    ENDVERBATIM
}

NET_RECEIVE(w) {
    : flag==42 implies a self-event, so go ahead and generate a spike along
    : with the next wake up call.
    if (flag==42) {
        net_send(spike_interval, 42)
        net_event(t)
    }
}
Then I build a small network, with cells constructed on the Python side as follows:

Code: Select all

self.soma = h.Section(name='soma', cell=self)
self.source = h.bench(self.soma(0.5))   # keep a reference so it is not garbage collected
self.source.rate = 0.1
self.source.first = 0
self.source.frequency = 20
Small networks of cells like this have less than 1% deviation in runtime from the expected time to solution of num_cells * rate * tstop.
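
The check itself is just a wall-clock timer around the run, compared against that product; roughly like this (num_cells, rate and tstop are the values used to build and run the network):

Code: Select all

import time
from neuron import h
h.load_file('stdrun.hoc')

num_cells = 100     # number of bench point processes in the network
rate = 0.1          # the value assigned to source.rate above
tstop = 1000.0      # ms

# each cell busy-waits dt*rate of wall-clock time per step, so a serial run
# should take about num_cells * rate * tstop (converted here from ms to s)
expected_s = num_cells * rate * tstop * 1e-3

h.finitialize(-65)
wall0 = time.perf_counter()
h.continuerun(tstop)
elapsed_s = time.perf_counter() - wall0

print('elapsed {:.2f} s, expected {:.2f} s, deviation {:.2%}'.format(
    elapsed_s, expected_s, abs(elapsed_s - expected_s) / expected_s))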

Now to test it for larger networks.