replicate Waxman's model

The basics of how to develop, test, and use models.
Sun Xiaoqing
Posts: 35
Joined: Wed Aug 28, 2019 1:18 am

replicate Waxman's model

Post by Sun Xiaoqing » Thu Dec 26, 2019 10:24 am

Merry Christmas!

I'm working on replicating Waxman's model from Waxman, S. G., and Brill, M. H. (1978). Conduction through demyelinated plaques in multiple sclerosis: computer simulations of facilitation by short internodes. J. Neurol. Neurosurg. Psychiatry 41:408-416.

In Waxman's paper, it is stated that they used the Crank-Nicholson method with an integration time increment Δt = 5 us and a length increment Δx = 200 um, which refers to the model of Moore (1978). I constructed a model according to their method and the Moore (1978) entry in NEURON's ModelDB, but I got results that differ from the paper's. Since I'd like to study the delay time at the node for different short internode lengths, my results have to match Waxman's paper.

The requirements of the model:
  • Crank-Nicholson method with an integration time increment Δt=5 us and a length increment Δx=200 um
  • fibres were 11 nodes in length
  • stimulating with a twice-threshold current for 200 us at the beginning of the
    internode before node 1.
  • Rate constants were adjusted to 20°C.
In order to use the Crank-Nicholson method, I set secondorder = 2:

Code:

load_file("nrngui.hoc")
load_file("cable.hoc")  // model topology and biophysics
load_file("init.ses")   // GUI session

secondorder = 2  // select NEURON's Crank-Nicholson integration method

len = 0
But the figures didn't match. I've checked several times and I really couldn't find the problem. I've zipped my files (for the first two figures) and sent them to your email. I know it's Christmas and the New Year is coming, and I'm sorry to bother you, but I'd really appreciate any suggestions. Thank you!

Sun Xiaoqing
Posts: 35
Joined: Wed Aug 28, 2019 1:18 am

Re: replicate Waxman's model

Post by Sun Xiaoqing » Mon Dec 30, 2019 8:58 am

I wonder: if I use a finer scale (Δx = 10 um, nseg = 200, compared with Waxman's Δx = 200 um, nseg = 10) and apply the Crank-Nicholson method with an integration time increment Δt = 5 us, should I get a more accurate result?

Thank you.

ted
Site Admin
Posts: 5690
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: replicate Waxman's model

Post by ted » Tue Dec 31, 2019 2:51 pm

First let me apologize for this delayed reply.
"I've zipped my file (for the first two figures) and sent to your email"
I received your email message, but it had no attachment. Try again?

The model used by Waxman et al. 1978 is based on what they used for
Brill, Waxman, Moore, and Joyner (1977)
Conduction velocity and spike configuration in myelinated fibres: computed dependence on internode distance.
J. Neurol. Neurosurg. Psychiatry 40:769-774.

ModelDB has a re-implementation, by Michael Hines for NEURON, of the Brill et al. model; its accession number is 9848. I downloaded that and found that, while it reproduced the qualitative results in Figure 1 of the Brill paper, the reproduction was not quantitatively exact.

The same is true for Figure 2. The README file of ModelDB entry 9848 mentions reasons why Figure 2 might be difficult to reproduce exactly with NEURON, but what accounts for the inability to exactly reproduce the results in Figure 1?

The source code for ModelDB entry 9848 is rather complex and very clever, since it was designed to demonstrate qualitative reproduction of the results in Figures 1 and 2. To avoid that complexity and make it easier to examine the differences between Figure 1 and the NEURON model's results, I decided to re-implement the model myself.

Immediately I discovered that the model specification in Brill et al. was incomplete. For one thing, the authors changed the actual structure of the model depending on internode length L. Quoting from the bottom of page 770 to the top of page 771:

"For an internodal length greater than 1000 um, a steady velocity of propagation obtained within two nodes of either end, and for these computations we used only 12 nodes. As L was decreased from this value, the number of nodes was increased in each case (up to 50 for an L of 25 um) so as to obtain a sufficient region of uniform velocity of propagation."

But they don't say how many nodes (and internodes) were actually used for models with L of 50, 100, or 200.

Brill et al. don't describe the stimulus they used. Waxman and Brill 1978 used "a twice-threshold current for 200 us" so maybe Brill et al. did the same. But internode length affects the amount of current required to trigger a spike. Did Brill et al. readjust the stimulus current amplitude every time they changed internode length? They don't say.

Brill et al. used a 5 us integration time step, so the precision of their measurements of internodal conduction time (ICT) was limited to 5 us. Internodal conduction time decreases with decreasing internode length, and for the model they used, ICT is only about 5-6 us for a 25 um internode and about 20-25 us for a 200 um internode. There is no way they could have obtained the results shown in Fig. 1B unless they estimated threshold crossing times by applying linear interpolation to plots of v vs. t. That would be OK from a technical standpoint: using the Crank-Nicholson method to compute v produces results that have second order accuracy in time, which means that values of v at times that lie between multiples of dt can be estimated by linear interpolation, and such interpolated values themselves have second order accuracy in time. BUT Brill et al. say nothing about using interpolation!
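The kind of interpolation described above can be sketched in a few lines of Python. This is a generic illustration, not code from either implementation: given v sampled at multiples of dt, the crossing time is estimated from the two samples that bracket the threshold.

```python
def crossing_time(t, v, vth):
    """Estimate when v first crosses vth, by linear interpolation
    between the two samples that bracket the threshold."""
    for i in range(1, len(t)):
        if v[i - 1] < vth <= v[i]:
            frac = (vth - v[i - 1]) / (v[i] - v[i - 1])
            return t[i - 1] + frac * (t[i] - t[i - 1])
    return None  # no crossing found

# v sampled every 5 us; the true crossing lies between samples
t_us = [0, 5, 10, 15]
v_mv = [-65.0, -30.0, 20.0, 40.0]
print(crossing_time(t_us, v_mv, 0.0))  # -> 8.0, not a multiple of 5
```

Because v computed by Crank-Nicholson is second order accurate in time, the interpolated crossing time inherits that accuracy, which is why this refinement is legitimate.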

I should also mention the parameter that Brill et al. (and Waxman and Brill 1978) call "axoplasmic resistance per unit axon length" and represent as ra (see the Table on page 770). In a footnote on page 770, Brill et al. say it was "Calculated from specific axoplasmic resistance of 100 ohm-cm." You might think that this meant you should set Ra in your NEURON model to 100. But no, that would be incorrect. Why?
ra = 4*Ra / (PI * diam^2)
where ra is "resistance per unit length of axon" and Ra is "cytoplasmic resistivity." Solving for Ra:
Ra = ra * PI * diam^2 / 4
For ra = 1.26e8 ohm/cm and diam = 10 um, the equivalent value of Ra to five significant figures is 98.960 ohm cm. If your own NEURON implementation set Ra to 100 ohm cm, that might contribute to differences between your model's results and the results published by Waxman and Brill. It wouldn't account for differences between Fig. 1 of Brill et al. and the results produced by Michael Hines's re-implementation of that work, because Hines set Ra to 98.96... ohm cm.
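To double-check that arithmetic, a few lines of plain Python (the values come from the Table on page 770; the variable names are mine):

```python
import math

ra = 1.26e8      # axoplasmic resistance per unit length, ohm/cm
diam_cm = 10e-4  # axon diameter: 10 um expressed in cm

# invert ra = 4*Ra/(PI*diam^2) to get cytoplasmic resistivity Ra
Ra = ra * math.pi * diam_cm**2 / 4

print(f"{Ra:.3f} ohm cm")  # -> 98.960 ohm cm, not 100
```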

What about the fact that Brill et al. used a "modified" Crank-Nicholson integration method? It seems unlikely that the algorithm was identical to what NEURON uses when secondorder == 2. And did it make the same assumptions about boundary conditions between adjacent nodes and internodes that NEURON does?

I see that in Michael Hines's re-implementation for NEURON, the parameters used to reproduce Fig. 1 qualitatively are a 10 nA stimulus pulse for 0.1 ms, dt == 0.01 (not 0.005), and ICT calculated from interpolated threshold crossing times. Also, his code uses 50 nodes and 49 internodes, regardless of internode length. Even so, the results are qualitatively very similar to the published Fig. 1. My own re-implementation, which also uses 50 nodes and 49 internodes, automatically adjusts stimulus amplitude to 2 x threshold every time internode length is changed. Also, I measured the ICT for the internode halfway down the length of the model axon, but didn't bother to interpolate the exact times at which the spike arrived at the two ends of that internode (I just used a couple of APCounts to detect it). For internode lengths > 200 um, the results I got for ICT and conduction velocity were very close to those produced by Michael's implementation.

So there are a lot of questions about what Brill et al. actually did, and neither Michael Hines nor I was able to reproduce their results exactly. But does it matter? Remember what Richard Hamming wrote: "The purpose of computing is insight, not numbers." Frankly, I think it's pretty amazing that the results generated by the code that Brill et al. used are so close to those that are generated by the NEURON re-implementations of their model. In particular, the NEURON re-implementations produce results that verify the qualitative features that underlie the conclusions reported by Brill et al.

If I were launching a new study based on what Waxman and Brill did, here's what I'd do. I'd keep the basic outline of the Brill et al. and Waxman and Brill models--axon diameter, node lengths, internode lengths, membrane properties of the nodes and of the myelinated and demyelinated regions. I'd definitely set Ra to 100 ohm cm, because the footnote on page 770 proves that's the value they intended to use. And instead of setting internode nseg to 10, I'd use an odd value--because you're using NEURON, not whatever FORTRAN code that Brill et al. were using--and base nseg on the d_lambda rule.

Also, I'd abandon Crank-Nicholson and use adaptive integration (cvode) instead. Why? Because adaptive integration makes it much easier to get high quality estimates of internode conduction time. In my own re-implementation, which used Crank-Nicholson with dt = 5 us, I used APCounts to detect the times of "threshold crossings" at the 0 and 1 ends of an internode. That was easy, but precision was limited by dt. Michael Hines's implementation also used Crank-Nicholson, but used Vector recording to capture the actual time course of v at the 0 and 1 ends, then used linear interpolation to refine the threshold crossing times. That produced much more accurate internode conduction times, but complicated his program.

The smart way to detect threshold crossing times is to use a pair of NetCons--one at the 0 end of the internode, and the other at the 1 end--and then use the NetCon class's record() method to detect the threshold crossing times. Read about NetCon.record() here
https://www.neuron.yale.edu/neuron/stat ... Con.record

"How does that help? Cvode doesn't force the simulator to find the exact time at which v crosses the 'spike time' threshold."

True. By default, detection of threshold crossings happens at time step boundaries. However, if you call the Cvode class's
condition_order() method with an argument of 2, threshold crossing times will be found by linear interpolation within the integration step in which the crossing happened. So just call

cvode.condition_order(2)

before you run the first simulation, and every run after that will use linear interpolation to produce refined estimates of threshold crossing times--assuming that you are using adaptive integration, of course. condition_order is documented under the CVode class in the Programmer's Reference.
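Putting those pieces together, here is a minimal sketch using NEURON's Python interface. It requires the neuron package to be installed; the stand-in section and the 0 mV threshold are placeholders, not values from the papers:

```python
from neuron import h

# stand-in for one internode of the real model (placeholder geometry)
internode = h.Section(name="internode")

tcross0, tcross1 = h.Vector(), h.Vector()

# source-only NetCons (target = None): we only want their spike detectors
nc0 = h.NetCon(internode(0)._ref_v, None, sec=internode)
nc1 = h.NetCon(internode(1)._ref_v, None, sec=internode)
for nc, vec in ((nc0, tcross0), (nc1, tcross1)):
    nc.threshold = 0   # mV; crossing this level counts as "the spike"
    nc.record(vec)     # crossing times accumulate here during a run

cv = h.CVode()
cv.active(1)           # adaptive integration
cv.condition_order(2)  # interpolate crossing times within a step

# after a run, tcross1[0] - tcross0[0] would give the internode
# conduction time
```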

ted
Site Admin
Posts: 5690
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: replicate Waxman's model

Post by ted » Wed Jan 01, 2020 11:00 am

On re-reading Brill et al. 1977 I saw that my interpretation of "internodal conduction time" was incorrect. They define it as "the difference in times at which the rising phase crossed 50 mV at two consecutive nodes." This means that spike time measurement must be made at the centers of two consecutive nodes. I revised my own program accordingly, and found that this made its results closer to those produced by Michael Hines's implementation; the remaining differences are smaller than 1% and probably reflect the fact that my code reports times to the nearest multiple of 5 us, whereas Hines's code uses interpolation to find threshold crossings.

ted
Site Admin
Posts: 5690
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine
Contact:

Re: replicate Waxman's model

Post by ted » Wed Jan 01, 2020 11:43 am

"I wonder: if I use a finer scale (Δx = 10 um, nseg = 200, compared with Waxman's Δx = 200 um, nseg = 10) and apply the Crank-Nicholson method with an integration time increment Δt = 5 us, should I get a more accurate result?"
That's not going to resolve the differences between their results and the results you're getting.

The design and implementation of the models used by Brill and Waxman were constrained by the limitations of the computer hardware and software that were available to them. Also, they don't provide critical details of their methods--in particular,
1. how they determined spike threshold
2. whether they adjusted stimulus intensity every time they changed a model parameter that might affect spike threshold
and, perhaps most important,
3. the fact that their calculations of conduction velocity are based on time measurements that MUST have used interpolation. If you want immediate, incontrovertible evidence of that: Waxman and Brill state on p. 409 that "We used an integration time increment dt= 5 us", and then on p. 410 they say that "Internodal conduction time proximal to the site of demyelination was 102 us". If dt is 5 us, all noninterpolated measurements fall at integer multiples of 5 us, so it is impossible for any measured conduction time to be 102 us. The only sensible interpretation is that they must have used interpolation to determine spike latencies.

As I mentioned in yesterday's post on this forum, I would abandon their spatial discretization scheme and use the d_lambda rule instead. There's nothing magical about 200 um or 10 compartments per internode. Use nseg values that are determined on a rational basis--which for NEURON means an odd value of nseg, applying the d_lambda rule, and verifying adequacy of spatial discretization by comparing results after tripling nseg and repeating a simulation. Why triple nseg? Because NEURON uses the central difference approximation to the second spatial derivative, which has second order spatial error. Making the spatial grid 3 times finer will reduce spatial error by a factor of 9--that is, tripling nseg reduces spatial error by almost an order of magnitude. If a comparison simulation with tripled nseg produces results that are nearly identical to the results generated before nseg was tripled, then the spatial grid is adequate. Chopping things into smaller pieces will only increase run times and can contribute to accumulation of roundoff error.
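For concreteness, here is the d_lambda arithmetic in plain Python. The hoc original ships with NEURON (fixnseg.hoc); the parameter values in the example are illustrative only, not taken from the Brill or Waxman papers:

```python
import math

def nseg_from_dlambda(L, diam, Ra, cm, f=100.0, d_lambda=0.1):
    """nseg via the d_lambda rule: no segment longer than d_lambda
    times the AC length constant at f Hz, rounded up to an odd integer.
    L and diam in um, Ra in ohm cm, cm in uF/cm2."""
    lambda_f = 1e5 * math.sqrt(diam / (4 * math.pi * f * Ra * cm))  # um
    return int((L / (d_lambda * lambda_f) + 0.9) / 2) * 2 + 1

# illustrative values: a 2000 um unmyelinated stretch, 10 um diameter
print(nseg_from_dlambda(L=2000, diam=10, Ra=100, cm=1))  # -> 23 (odd)
```

The final `* 2 + 1` is what guarantees an odd nseg, so that a node of measurement always falls at the center of a segment.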

Also, instead of fixed time step integration with Crank-Nicholson, I'd use adaptive integration, and I'd use NetCon.record with cvode.condition_order(2) so that spike propagation latencies are interpolated automatically for you.

As far as my questions about stimulus intensity are concerned, I suspect that it will be adequate to manually adjust stimulus intensity to 2 x threshold for a normal (not demyelinated) fiber, and just use that same intensity for all simulations.
