I was looking into different models from ModelDB for NEURON. I have seen that although we have very good and complete chemical reaction models of the AMPA, NMDA, and GABA A,B receptors available, most authors set them aside and just use the ExpSyn, Exp2Syn, or alpha-function synapses built into NEURON. As an example, I modified the Jarsky 2005 model to use chemical reaction AMPA/kainate synapses and noticed some minor differences in the results. I tuned the synapses to produce the same peak EPSP amplitude, but I needed more synapses to reproduce the results.
My principal motive for using such synapses is the fact that some voltage-gated channels, like voltage-gated sodium channels, are sensitive to the slope of the rising phase of the EPSP. My questions are:
1) Why do most authors neglect these kinds of synapses?
2) Does using these detailed synapses have a positive or negative impact on the model's validity?
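To make the comparison concrete, here is a minimal sketch in Python (not NEURON code; the rate constants are illustrative, not the published Jarsky 2005 parameters) of the two kinds of synapse model being contrasted: a biexponential conductance of the Exp2Syn form, and a simple two-state kinetic receptor driven by a square transmitter pulse.

```python
import math

def exp2syn_g(t, tau1=0.5, tau2=5.0, gmax=1.0):
    """Biexponential conductance (Exp2Syn form), normalized so the peak is gmax.
    tau1 = rise time constant, tau2 = decay time constant (ms)."""
    if t <= 0:
        return 0.0
    tp = (tau1 * tau2) / (tau2 - tau1) * math.log(tau2 / tau1)  # time of peak
    factor = 1.0 / (math.exp(-tp / tau2) - math.exp(-tp / tau1))
    return gmax * factor * (math.exp(-t / tau2) - math.exp(-t / tau1))

def kinetic_g(t_end, dt=0.01, alpha=1.1, beta=0.19, T_dur=1.0, T_conc=1.0, gmax=1.0):
    """Two-state (closed <-> open) kinetic receptor driven by a square
    transmitter pulse: dr/dt = alpha*T*(1 - r) - beta*r, integrated with
    forward Euler. Returns the sampled conductance g(t) = gmax * r."""
    r, g = 0.0, []
    for i in range(int(t_end / dt)):
        t = i * dt
        T = T_conc if t < T_dur else 0.0   # transmitter present during the pulse
        r += dt * (alpha * T * (1.0 - r) - beta * r)
        g.append(gmax * r)
    return g
```

Even when gmax is tuned so that both peaks match, the kinetic scheme's rising phase is shaped by the transmitter time course and the binding rate, so its slope differs from the biexponential rise; that difference is exactly what slope-sensitive sodium channels would see.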
Detailed Modeling of Synapses
Last edited by Keivan on Fri Oct 30, 2009 2:37 am, edited 2 times in total.
-
- Site Admin
- Posts: 6305
- Joined: Wed May 18, 2005 4:50 pm
- Location: Yale University School of Medicine
- Contact:
Re: Detailed Modeling of Synapses
Keivan wrote: "I was looking into different models from ModelDB for NEURON. I have seen that although we have very good and complete chemical reaction models of the AMPA, NMDA, GABA A,B available, most of the authors try to neglect them."

The premise is faulty. How do you know that anybody "tries to neglect"? Most, if not all, probably consciously decided that such details are largely irrelevant to the hypotheses that motivated their models.

Keivan wrote: "As an example I have modified the Jarsky 2005 model to use chemical reaction AMPA/kainate synapses and noticed that there are some minor differences in the results."

Any change to the implementation of a computational model is likely to have some effect on the numerical results. The question is whether such differences mean anything.

Keivan wrote: "My question is: 1) why do most authors neglect these kinds of synapses?"

The answer above already took care of question 1.

Keivan wrote: "2) Does using these detailed synapses have a positive or negative impact on the model's validity?"

Details matter only if they are essential to a hypothesis. The premise that underlies the use of computational modeling for the sake of understanding is that a model should include representations of only those aspects of a real system that one judges to be essential for some property of that system. Including more details for the sake of more detail does not make a model easier to understand, nor does it make the model more useful for hypothesis testing.

Keivan wrote: "My principal motive for using such synapses is the fact that some voltage-gated channels, like voltage-gated sodium channels, are sensitive to the slope of the rising phase of the EPSP."

Well, if that's essential to your hypothesis, go ahead and include detailed synaptic mechanisms. But to discover what "extra" you get from such details, you'll also have to build models that use simplified synaptic mechanisms.
Re: Detailed Modeling of Synapses
hines wrote: "The premise is faulty. How do you know that anybody 'tries to neglect'? Most, if not all, probably consciously decided that such details are largely irrelevant to the hypotheses that motivated their models."

I think for most authors it is enough to reproduce their experimental results with the model and to report in their article that they also did modeling, so the work looks very good. It seems they do modeling only to facilitate publication of the results. If you look into published articles that combine modeling and experiment, the modeling is almost always the second part of the article. There are few articles that used modeling as the motivation for doing the experiment. But I think models are useful devices that can lead experiments.

From the book "The Cognitive Neuroscience of Memory: Encoding and Retrieval", chapter 12, "Linking memory and prediction: Hebbian models of perceptual learning in animals and humans" by Lisa M. Saksida & James L. McClelland:

"Computational modeling has the potential to be an invaluable tool in understanding the relationship between brain and behavior because it addresses not only what functions are performed by brain regions, but how they are performed (Rolls & Treves, 1997). Although most researchers have a theoretical framework that informs and motivates their experiments, the particular mechanisms that could yield predicted results are often not made explicit. Computational modeling can be particularly useful in that it brings forward such implicit assumptions. Because of this, models can be important tools for the analysis of data, and can help in the development of clearly motivated experiments with clear predictions. Furthermore, computational models can provide a common language to help with communication across researchers, thereby reducing the perennial problem of ambiguity of terminology."

In this context I think details are a necessary part of a model that tries to predict something. I think this is the logic behind the Blue Brain project, isn't it?
Re: Detailed Modeling of Synapses
Keivan wrote: "I think for most authors it is enough to reproduce their experimental results with the model and report in their article that they also did modeling, so the work looks very good. [...] But I think models are useful devices that can lead experiments."

These statements may be true, but they do not bear on the role of detail in models. Neither does the quote from Saksida and McClelland. By the way, quoting a pair of cognitive scientists in order to support the notion of biological details in models should produce just a slight frisson of cognitive dissonance.
By the way, do you include any of these complexities in your models, and if not, aren't you just being arbitrary?
--the stochastic and quantal nature of synaptic transmission
--variation of quantal size
--retrograde signaling at synapses
--glial uptake and release of transmitters
--second messengers
--electrodiffusion
--the very irregular geometry of neurons (there are no circles, spheres, cylinders, cones, or even smooth surfaces in the brain)
Keivan wrote: "I think details are a necessary part of a model that tries to predict something."

Are you familiar with the work of Abbott, Ermentrout, Kopell, Rall, Rinzel, Rubin? (Just to name a few; apologies to the rest.) The first thing they do is throw out details. Then they come up with fresh insights and predictions, more insights and predictions than result from most detailed models.

Keivan wrote: "I think this is the logic behind the Blue Brain project, isn't it?"

I can't speak for the Blue Brain project. However, Einstein seemed to have the right idea when he intimated that hypotheses and theories should be simple, but not too simple. So, long before, did William of Ockham (sometimes called Occam).
Re: Detailed Modeling of Synapses
Calm down please.
1) I just want to say that some oversimplification exists in articles. Sometimes we get used to common simplifications and forget to re-evaluate them in new situations.
2) If I were sure about what I am saying, I would not have decided to consult experts. I just want to organize my thinking about this topic. I do not blame anybody. I am thinking of an additive approach to modeling (as in other areas of science): start from the simplest possible form and add complexity to it, to reach a systematic model that can adapt itself to future demands and can explain future findings, or that behaves as in the in vivo condition.
As an example, look at the Golding 2005 and Jarsky 2005 articles. Both were supervised by Nelson Spruston in the same lab. Golding adjusted the passive parameters of the model against experimental results. Jarsky used the same cell created by Golding to show how distal and proximal dendrites collaborate with each other, but decided not to use the passive parameters that Golding et al. had adjusted for that reconstructed morphology. When I look into these two articles I find a big "?" in my mind. What is happening here? If they did something important, why do they not use their own findings?
I am just trying to find the correct approach to science. I am really confused; these are unanswered questions for me.
What I am trying to say is that each lab works on a topic. They start a subject, and each piece of work or progress is a backbone for future work. Why is this not happening in modeling (at least in modeling dendrites)?
--How would you simplify resonance, the excitatory effect of GABA inhibition, the effect of inhibition on reducing threshold, the excitatory effect of the H-current combined with the M-current on PSPs, and many more things I do not remember right now?
--There are a lot of people who simplify things just because they are not familiar with the details.
--There was a period in neuroscience when we did not know many of the details; at that time imaginary simplifications were logical, but what about now?
You may think this topic is entering the realm of "apples vs. oranges", which you do not like. If you want, we can stop here; if not, we can continue, because this is a basic problem for me. But answer me in your good mood. :)
I think you answered my second question, but with the first question I wanted to know: is there a methodological problem with chemical reaction models in synaptic modeling? If I use them, should I give the reason during the publication process? Is it strange?
hines wrote: "These statements may be true but they do not bear on the role of detail in models. Neither does the quote from Saksida and McClelland."

This is the relation of that quotation to my idea about detailed modeling: if models were the motivation for experiments, and if scientists used models as a common language among themselves, there would be a common model that evolves and becomes more detailed and complete as research progresses. That is not the situation in modeling right now.

hines wrote: "By the way, do you include any of these complexities in your models, and if not, aren't you just being arbitrary?
--the stochastic and quantal nature of synaptic transmission
--variation of quantal size
--retrograde signaling at synapses
--glial uptake and release of transmitters
--second messengers
--electrodiffusion
--the very irregular geometry of neurons"

How can we be sure these things are unnecessary when we have not included them in a model and evaluated their role systematically? However, I agree with you that there must be a hypothesis behind every detail we include in a model.

hines wrote: "Are you familiar with the work of Abbott, Ermentrout, Kopell, Rall, Rinzel, Rubin?"

What makes you think that I am not? But I think they sometimes simplify things without experimental or computational documentation. Also, in data mining it is believed that our brains can consider a maximum of 7 components together; this is the idea behind modeling.
hines wrote: "Well, if that's essential to your hypothesis, go ahead and include detailed synaptic mechanisms. But to discover what 'extra' you get from such details, you'll also have to build models that use simplified synaptic mechanisms."

Do you believe that this should be the topic of an article before I can use them?
-
- Posts: 86
- Joined: Thu May 22, 2008 11:54 pm
- Location: Australian National University
Re: Detailed Modeling of Synapses
Just to throw my 2 cents in: in the one model I have tried to get published, I "neglected" diffusional models of the synapse, simply because they added (essentially) nothing and would have added a huge amount of computational complexity. The IPSCs I have recorded from my cells could be almost perfectly modelled by an instantaneous jump in conductance followed by a biexponential decay. What would my model gain from using more complicated mechanisms if in the end they would just reproduce the simple model?
A similar question could be raised: "why don't computational modelers use a dt of 0.1 nanoseconds?" or "why don't they use a spatial discretization of 0.1 micrometers?" Again, it is probably not necessary, and doesn't change the outcome.
The important thing is that people need to carefully consider whether these things could change the outcome of a model.
Indeed, it reminds me of something a wise modeler once told me: "if you make your model as complicated as the real thing, then you might as well just work with the real thing". One of the advantages of a model is its simplicity.
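The dt point can be made concrete with a convergence check: run the same simulation with the time step halved and confirm that the quantity you care about barely moves. A minimal sketch in Python (a forward-Euler leaky membrane driven by an exponential synaptic current; all parameters are illustrative, not taken from any published model):

```python
def simulate_vpeak(dt, t_end=50.0, tau_m=10.0, tau_syn=5.0, w=2.0):
    """Peak deflection of a leaky membrane (forward Euler) driven by an
    exponential synaptic current that jumps by w at t = 5 ms."""
    v, i_syn, vpeak, fired = 0.0, 0.0, 0.0, False
    t = 0.0
    while t < t_end:
        if not fired and t >= 5.0:
            i_syn += w                   # instantaneous jump in conductance/current
            fired = True
        v += dt * (-v + i_syn) / tau_m   # membrane relaxes toward i_syn
        i_syn -= dt * i_syn / tau_syn    # exponential decay of the current
        vpeak = max(vpeak, v)
        t += dt
    return vpeak

# convergence check: halving dt should barely change the answer
coarse, fine = simulate_vpeak(0.1), simulate_vpeak(0.05)
```

If the two runs disagreed appreciably, the step would be too coarse for that measurement. The same logic applies to spatial discretization, and to deciding between simplified and detailed synapse kinetics: compare the two and see whether the output you care about actually changes.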
Re: Detailed Modeling of Synapses
Keivan wrote: "Calm down please."

This requires a direct reply. Do not falsely impute motives to others. You did this at the beginning of this discussion by making an unfounded assertion about unspecified others. You just did it again, but this time about me. Basta.

Keivan wrote: "As an example look at the Golding 2005 and Jarsky 2005 articles. [...] If they did something important, why do they not use their own findings?"

Good questions. You might ask the authors themselves.

Keivan wrote: "What I am trying to say is that each lab works on a topic. [...] Why is this not happening in modeling (at least in modeling dendrites)?"

Scientific progress in any field is more like the zigzag flight of a moth than the more or less straight-line trajectory of a neutron. This is especially true in neuroscience.

Keivan wrote: "What makes you think that I am not [familiar with the work of Abbott, Ermentrout, Kopell, Rall, Rinzel, Rubin]?"

I have to ask because I can't read minds.

Keivan wrote: "But I think they sometimes simplify things without experimental or computational documentation."

This begs the question: can you give a specific example of an unwarranted simplification in the work of any of these individuals, or of anyone else who has published work in theoretical or computational neuroscience?

Keivan wrote: "I wanted to know: is there a methodological problem with chemical reaction models in synaptic modeling?"

Not particularly.

Keivan wrote: "If I use them, should I give the reason during the publication process?"

It is good practice to explain the objective basis for nontrivial decisions made in the design, performance, and interpretation of experiments, whether they are "wet lab" or computational experiments.

Keivan wrote: "Do you believe that this should be the topic of an article before I can use them?"

I'm not sure I understand what this question means.
Re: Detailed Modeling of Synapses
I had asked Nelson Spruston my questions before I started this topic, and he answered today. To my questions:

"1) Why do most authors neglect these kinds of synapses?
2) Does using these detailed synapses have a positive or negative impact on the model's validity?"

he replied:

"the short answer is that most people prefer simpler models unless there is a compelling reason to add the complexity."

hines wrote: "This requires a direct reply ..."

Excuse me, I just wanted to be friendly with you, nothing else. Also, if I question the lack of some detail in their experiments, that does not mean that I do not respect them. Besides, this happened because I am not a native English speaker and sometimes I choose the wrong words. I am really learning from you when you answer my questions.

hines wrote: "Good questions. You might ask the authors themselves."

Good idea.

hines wrote: "Can you give a specific example of an unwarranted simplification ...?"

It is a long story; I will answer that later (but soon).
Re: Detailed Modeling of Synapses
Nelson's answer to your question was very interesting. He's an experimentalist, and probably knows more about the detailed anatomical and biophysical properties of neurons than most computational modelers do. His target audience is composed primarily of other experimentalists who may or may not be engaged in computational modeling.