This note is primarily for others who may read this thread.
Using NMODL for "data processing on the fly" raises several potential issues.
1. In this particular application, the sequence of execution of NMODL-specified mechanisms is important. Clearly all mechanisms that WRITE ica must be executed before undercai is executed (otherwise some currents may have values from the current time step while others will have values from the previous time step). The default execution sequence follows alphabetical order by mechanism names. Since this particular mechanism's name is "undercai", the programmer must make sure that all mechanisms that WRITE ica have names that appear before "undercai" in an alphabetized list.
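To make the ordering concern concrete, here is a minimal sketch (the suffix names are hypothetical, invented for illustration only): a calcium current mechanism whose SUFFIX sorts alphabetically after "undercai" would be executed too late, and renaming it fixes the order.

```
: hypothetical calcium current mechanism -- names are illustrative only
NEURON {
    SUFFIX zca    : "zca" sorts after "undercai", so this mechanism would be
                  : executed too late; renaming it (e.g. to "cazchan") moves
                  : it ahead of undercai in the default execution sequence
    USEION ca READ eca WRITE ica
}
```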
2. undercai will produce correct results only if (1) fixed time step integration is used, and (2) dt is 0.1 ms. It is guaranteed to produce incorrect results if dt has some other value or if adaptive integration is used. This will be a problem for those models that require small dt or adaptive integration to achieve stability or sufficient accuracy. Minor revisions to the NMODL code (e.g. replacing embedded magic numbers with PARAMETERs) would allow users to use undercai with other values of dt and calculate averages over different intervals without having to re-edit and recompile the NMODL code, but reconciling dt with the number of samples to be averaged would be a task left to the end-user.
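As a hedged illustration of the PARAMETER idea (the names below are invented for this sketch, not taken from the actual undercai.mod):

```
: sketch only -- mechanism and variable names are illustrative
NEURON {
    SUFFIX undercai
    USEION ca READ cai
    RANGE nsamp, caavg
}
PARAMETER {
    nsamp = 100   : samples per average; with dt = 0.1 ms this spans 10 ms
}
```

Even then, the user would still have to verify that nsamp*dt equals the intended averaging interval, which is exactly the reconciliation task mentioned above.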
3. Instead of using a conditional statement to check the current value of t, it would be simpler (and automatically avoid roundoff problems) to just keep a count of how many cai values have been added; then when the count reaches the desired value, calculate the average and reset the count to 0.
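In NMODL-flavored pseudocode, the counting idea might look like this (a sketch only, glossing over the fact that BREAKPOINT may be evaluated more than once per time step; a real implementation would do the accumulation at a safer point):

```
: counter-based averaging sketch -- fixed time step only
ASSIGNED { count total caavg }
INITIAL {
    count = 0
    total = 0
}
BREAKPOINT {
    total = total + cai
    count = count + 1
    if (count == nsamp) {      : nsamp is the desired samples per average
        caavg = total/nsamp
        total = 0
        count = 0
    }
}
```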
An alternative to "data processing on the fly" would be to recast the problem as a function fitting task: compare the values of a function f() with the experimentally observed values, and use that error measure to adjust the model parameters. This would make the problem tractable for a Function Fitness generator controlled by the MRF (NEURON's Multiple Run Fitter). Properly done, it would handle simulations that use any fixed time step, or even adaptive integration, and it would not require any NMODL code with VERBATIM blocks.
All the magic would be in the function f(), which in pseudocode would look like this:
// expects one argument: a time value that is equal to
// one of the times at which the experimental data were sampled
// assumes the existence of two Vectors: tvec and caivec
// which are used to hold the desired sample times
// and the model-generated corresponding values of cai
// on first entry, caivec may contain all zeroes
func f() { localobj tmpcaivec, tmptvec
    if the model's parameters have been changed {
        run a simulation that captures the time course of cai and t
            to tmpcaivec and tmptvec
        use tmpcaivec and tmptvec to fill caivec with values
            that are averaged and sampled at the times specified by tvec
    }
    return the value in caivec that corresponds to argument $1
}
One could even do the cai averaging with an NMODL-based mechanism that (1) calculates a proper integral of cai between sampling times and (2) works even with adaptive integration.
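Such a mechanism might be sketched like this (suffix and variable names are hypothetical; units are omitted for brevity):

```
: running integral of cai -- valid with fixed or adaptive integration
NEURON {
    SUFFIX caintgl
    USEION ca READ cai
}
STATE { integral }
INITIAL { integral = 0 }
BREAKPOINT { SOLVE states METHOD cnexp }
DERIVATIVE states {
    integral' = cai    : d(integral)/dt = cai
}
```

The average of cai over any interval [t1, t2] is then (integral(t2) - integral(t1))/(t2 - t1), which can be computed in hoc from values of integral recorded to a Vector.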
While this would be quite elegant and robust, it would require extra programming. That might be justified depending on the project; for the present example, the "processing on the fly" approach does the job well and is safe in the hands of its developer.