After successfully playing with simple white noise, I now want to model an Ornstein-Uhlenbeck process, which in practice is low-pass filtered white noise (isn't it?). The idea is to have a variable n defined by:
Code: Select all
dn=(-n+µ)*dt/tc+s*dWt
Assuming µ=0 and s=1/tc, and writing zt for the white noise dWt/dt, this can be written as
Code: Select all
dn/dt=(-n+zt)/tc
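Discretizing the first equation with a fixed step dt (Euler-Maruyama), the update I think this corresponds to is (with N(0,1) a fresh standard normal draw at each step):
Code: Select all
n(t+dt) = n(t) + (µ - n(t))*dt/tc + s*sqrt(dt)*N(0,1)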
I put that into a .mod file that looks like
Code: Select all
TITLE Ornstein-Uhlenbeck process

NEURON {
    SUFFIX OU
    RANGE i, D, tau, bias
    NONSPECIFIC_CURRENT i
}

UNITS { (mA) = (milliamp) }

PARAMETER {
    bias = 0 (mA/cm2)
    D = 0 (/ms)      : standard deviation passed to normrand
    tau = 1 (ms)     : correlation time
}

ASSIGNED {
    i (mA/cm2)
    noise (mA/cm2)
}

STATE { n (mA/cm2) }

BREAKPOINT {
    SOLVE kin
    i = bias + n
}

DERIVATIVE kin {
    : fresh Gaussian draw (mean 0, stdev D) at every evaluation;
    : the 1(mA/cm2) factor just fixes the units
    noise = 1(mA/cm2) * normrand(0, D)
    n' = (-n + noise)/tau
}
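In case it helps to see the scheme outside NMODL, here is a minimal standalone C sketch of the same Euler-Maruyama update (Box-Muller for the Gaussian draws; the values of tc, s and dt are just examples). For the process dn = (µ - n)*dt/tc + s*dWt, the stationary variance should be s^2*tc/2, which the sketch prints for comparison:
Code: Select all
/* Standalone check of the fixed-step Euler-Maruyama update for
   dn = (mu - n)*dt/tc + s*dWt, independent of NEURON.
   Build with e.g.: cc -O2 ou_check.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* one standard normal sample via Box-Muller */
static double gauss01(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0); /* in (0,1) */
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

int main(void)
{
    double mu = 0.0, tc = 1.0;   /* ms */
    double s = 1.0 / tc;         /* as assumed above: s = 1/tc */
    double dt = 0.025;           /* ms, a typical fixed step */
    long nsteps = 4000000;       /* 100 s of model time */
    double n = 0.0, sum = 0.0, sumsq = 0.0;
    long k;

    for (k = 0; k < nsteps; k++) {
        /* Euler-Maruyama: the Wiener increment scales as sqrt(dt) */
        n += (mu - n) * dt / tc + s * sqrt(dt) * gauss01();
        sum += n;
        sumsq += n * n;
    }
    printf("sample mean %g, sample var %g (theory: %g, %g)\n",
           sum / nsteps, sumsq / nsteps - (sum / nsteps) * (sum / nsteps),
           mu, s * s * tc / 2.0);
    return 0;
}
Note that in this scheme the Gaussian increment is scaled by sqrt(dt), whereas the .mod file above draws normrand(0,D) once per derivative evaluation, with no explicit dt dependence.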
a) It only works with the variable time step; with a fixed time step I always get a "division by zero" error or a "machine round-off" error.
b) Sometimes, when running a long simulation (500-1000 seconds), the program exits without any warning or message; the only thing left behind is an nrniv.exe.stackdump file. This tends to happen more often when D > 0.01, which actually doesn't produce much noise in the voltage trace. Moreover, the simulation can proceed without problems for 200+ seconds (yes, 2e5 ms) and then exit without warning.
What could possibly be wrong here? I'd appreciate any comments on my approach.
By the way, I'm using Windows XP on a Pentium 4 machine.
Regards.