Programming Praxis' stop criteria

Using the Multiple Run Fitter, praxis, etc..
mart

Programming Praxis' stop criteria

Post by mart »

In order to make praxis stop the fitting once the error falls below a specified amount, two strategies can be followed:

(1) Modify the 4th argument that is fed into the function fit_praxis()
(2) Modify the procedure after_quad() in the file "nrn/lib/hoc/mulfit/fitparm.hoc"
http://www.neuron.yale.edu/phpBB/viewto ... axis#p7787

The second option is only valid when using the Multiple Run Fitter; since that is my case, I would like to focus this post on the details of that strategy.
As ted kindly suggested, three changes are necessary. Two are to be made to the copy of fitparm.hoc that you put in your project directory.

1. At the end of proc after_quad()
change

Code: Select all

  nquad += 1
}
to

Code: Select all

  nquad += 1
  if (opt.minerr<QUITVAL) stop_praxis()
}
2. At the top of the MulfitPraxWrap template change

Code: Select all

begintemplate MulfitPraxWrap
to

Code: Select all

begintemplate MulfitPraxWrap
external QUITVAL
The third change is to your own hoc code. Before creating the multiple
run fitter insert a statement of the form

Code: Select all

QUITVAL = somenumberyoulike
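For example (the value and the load order here are just an illustration, not from the thread; the key point is that QUITVAL must exist before the modified fitparm.hoc and its template are loaded):

Code: Select all

QUITVAL = 0.001  // hypothetical target error; pick a value that suits your data
load_file("fitparm.hoc")    // the modified copy in your project directory
load_file("mysession.ses")  // hypothetical session file containing the Multiple Run Fitter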
If we consider the problem of local minima as an obstacle on the path to the error level we have defined (that is, QUITVAL), then it would be very useful to make our Multiple Run Fitter alternate "Optimize" rounds and "Randomize with factor" events automatically (without user intervention).

Any ideas on how to achieve this?

Thanks a lot for reading and posting in this forum.
ted
Site Admin
Posts: 6286
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine

Re: Programming Praxis' stop criteria

Post by ted »

nrn/lib/hoc/mulfit/mulfit1.hoc contains a template that defines the MulRunFitter class. That template contains proc randomize(), which is ordinarily accessed via the GUI by clicking on a button that is labeled "Randomize with factor". I suspect that calling this proc from hoc will do what you want, but first you must make it public. To do that, copy mulfit1.hoc to your project's directory, then edit that copy and change
public optsave, optrestore
to
public optsave, optrestore
public randomize
It will then be callable from hoc as MulRunFitter[0].randomize() (assuming that you are using only one MulRunFitter).

If you want to be able to use hoc to change the randomization factor ranfac from its default value (2), you'll have to expose it too. As you might guess, that's probably doable by changing
public randomize
to
public randomize, ranfac
so that you can assign it a new value. Then the hoc statement
MulRunFitter[0].ranfac = 1.5
should change ranfac to 1.5, etc.

As always, it is a good idea to test these recommendations to make sure they work.
mart

Re: Programming Praxis' stop criteria

Post by mart »

I am not quite sure what I did wrong, but the "QUITVAL" strategy didn't work.

My understanding of Brent's algorithm is pretty basic, so I am just guessing, but I would claim that the standard MRF stop criteria impose an end to the optimization process when it is established that the free parameters always converge to the same values for a given minimum error.

This minimum can be an absolute minimum or a local minimum... a problem inherent to Brent's algorithm that we will always have to face. But let's forget about local minima and focus on a simpler goal: how to force the optimization process to continue, even when it has been established that the free parameters are converging over and over towards the same values?

-Having modified the procedure "after_quad()", the optimization stops after ~100 runs (a point that I intuitively associate with praxis' standard stop criteria, although I am not quite sure what those criteria are). The error I get is far larger than the one specified by "QUITVAL", and if I click "Optimize" again the error gets slightly smaller (I have clicked "Optimize" again up to 4 times, and the improvements become more subtle each time).

-I didn't even try to implement "randomize" events in between "Optimize" rounds because it is clear to me that I am not able to overcome the standard routine that imposes an end to the optimization process.

Is there any automated solution able to achieve the same effect* as clicking "Optimize" over and over after the end of every optimization round?
(*here, effect means getting slight improvements in the error)

Is there a way to keep the optimization process endlessly running?
(...or until manual Stop)

Once I get some feedback on how to do this, I will be very happy to try to automate the alternation of "randomize" events and "Optimize" rounds. Afterwards I will check whether this actually helps to jump out of local minima, and I will certainly share what I consider the best solution so far.

Thanks for reading and posting in this forum.
mart

Re: Programming Praxis' stop criteria

Post by mart »

Has anybody tried to overcome the default criteria for praxis to stop an optimization procedure?
Did anybody manage to keep the optimization endlessly running unless a user-defined amount of error is reached?

I didn't... so far I can only manually click "Optimize" again when the procedure stops by the default criteria. I usually do this over and over until I start to get very small decrements in the error; then it is time to suspect that something is wrong with my model or that I am stuck in a local minimum. In either case, I try to jump out of such situations by manually re-adjusting the model and manually starting successive optimization rounds again.

I have to say that this strategy works very well, and my model improved enormously, but I would really like to be able to overcome praxis' stop criteria in order to automate all this to some extent.

(A couple of short questions)
What are the units for the error reported by my MultiRunFitter?
I found some information about it in this post viewtopic.php?f=23&t=3075 :
"The error value is the square norm between data and dependent variable treated as continuous curves. In other words, it is the sum of the squared errors"

Does this mean that the difference between every point in the dependent variable and its corresponding point in the generator is computed, squared, and then all these individual point-to-point differences are averaged?
(That would be the Mean Squared Error, not the Sum of Squared Errors... there is something here that I don't understand)

In my case, the generators are voltage traces and the variable to fit is the membrane voltage of my model.
In principle, my error value should be given in millivolts, right?
(If the error value corresponds to the Mean Squared Error it cannot be given in millivolts; it is way too large. It is easy to see by eye that the average absolute deviation between model and data at any point cannot be 8.1 mV -the value I am getting-)

Thanks for reading and posting in this forum,
Happy modelling, and good luck to everyone!
ted
Site Admin
Posts: 6286
Joined: Wed May 18, 2005 4:50 pm
Location: Yale University School of Medicine

Re: Programming Praxis' stop criteria

Post by ted »

mart wrote:What are the units for the error reported by my MultiRunFitter?
Any given generator will report its error in units foo^2 where foo are the units of the variable and "standard" that are compared in that generator. If several generators are active and any generator is comparing a variable and standard whose units differ from those of any other generator, the total error calculated by the MRF will have nonsense units.
I found some information about it in this post viewtopic.php?f=23&t=3075 :
"The error value is the square norm between data and dependent variable treated as continuous curves. In other words, it is the sum of the squared errors"

Does this mean that the difference between every point in the dependent variable and its corresponding point in the generator is computed, squared, and then all these individual point-to-point differences are averaged ?
"Sum of squared errors," with no mention of averaging, seems to have a clear interpretation: sum of squared errors. Easy enough to check if you cook up a standard and some data that differs by, say, a constant offset.
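As a sketch of such a check (not code from the thread; it uses the Vector class's built-in meansqerr on two traces that differ by a constant offset, and the arithmetic in the comments follows from that offset):

Code: Select all

objref a, b
a = new Vector(100)
b = new Vector(100)
a.fill(0)
b.fill(2)  // constant offset of 2 at every point
// meansqerr returns the mean of the squared differences: 2^2 = 4
print a.meansqerr(b)
// multiplying by the number of points gives the sum of squared errors: 400
print a.meansqerr(b) * a.size()

Feeding the same two traces to a generator and seeing whether the MRF reports something near 4 or near 400 would settle whether it averages or sums.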
mart

Re: Programming Praxis' stop criteria

Post by mart »

Ok, understood.
Now, what about overcoming praxis' default stop criteria?

As I explained before, instead of defining a fixed number of "quads before return", I would really like to keep the optimization process going through an infinite number of rounds until a certain error is reached. If that error is never reached (for instance because of falling into a local minimum), it doesn't really matter... I'd keep the MRF running (let's say) overnight, then I'd stop the optimization and manually readjust the parameters with the aim of jumping out of the local minimum. Alternating this with automatic "randomize" events would be a luxury...

But still, I am not even able to achieve the first step. I proceeded exactly as you suggested and it didn't really work; since I am not getting any error message, it is difficult for me to guess where the problem comes from. After implementing the QUITVAL strategy I can only see that the optimization stops after some hundreds of quads (200 on average), presumably when praxis "somehow" detects that the free parameters always converge to the same "optimal" values.

I know, my explanation is very vague... sorry about that, but I don't know what other information I could provide that would be relevant in this context.

Thanks a lot ted,

Best wishes,
Ulisses
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Programming Praxis' stop criteria

Post by hines »

Experimenting with first arg of attr_praxis might solve the problem.
http://neuron.yale.edu/neuron/static/ne ... ttr_praxis

This function is called by the MultipleRunFitter

Code: Select all

        attr_praxis(1e-4, .5, 0)
But I believe you can manually call it after the fitter pops up and your changes will persist.
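So, as a sketch (the particular tolerance value here is arbitrary, not a recommendation), after the fitter is up one could type at the oc> prompt:

Code: Select all

// arguments are tolerance, maximum step size, printmode;
// a smaller tolerance should make praxis run longer before returning
attr_praxis(1e-8, 0.5, 0)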
mart

Re: Programming Praxis' stop criteria

Post by mart »

Dear Michael L. Hines,

I tried the "QUITVAL" strategy again and I didn't achieve what I wanted, which is to keep the optimization process running until the minimum error reaches QUITVAL or until I manually stop it; in fact, nothing changed after using that strategy. Then I tried playing around with the first argument of "attr_praxis()" and, although I didn't achieve what I wanted, I saw some changes. Using progressively smaller "tolerance" values leads to a higher number of runs before stopping; this happens down to a certain threshold below which smaller values do not further increase the number of rounds. Modifying only the first argument of "attr_praxis" has the same effect as implementing both strategies at the same time ("attr_praxis" + "QUITVAL"), which makes me think that something is wrong with the "QUITVAL" strategy, because it seems not to be functional at all.

Do you have any other idea?

Thanks a lot for the help provided so far.
Best wishes,
Ulisses
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Programming Praxis' stop criteria

Post by hines »

Looking at the implementation in src/scopmath/praxis.c I see the comment:
/* T0 IS A TOLERANCE. PRAXIS ATTEMPTS TO RETURN PRAXIS=F(X) */
/* SUCH THAT IF X0 IS THE TRUE LOCAL MINIMUM NEAR X, THEN */
/* NORM(X-X0) < T0 + SQUAREROOT(MACHEP)*NORM(X). */
/* MACHEP IS THE MACHINE PRECISION, THE SMALLEST NUMBER SUCH THAT */

/* 1 + MACHEP > 1. MACHEP SHOULD BE 16.**-13 (ABOUT */
/* 2.22D-16) FOR REAL*8 ARITHMETIC ON THE IBM 360. */

I guess you are asking for praxis to never return if you request a value for the function which is less than the minimum.
attr_praxis does accept a tolerance of 0.0, but praxis will still return when, wherever it checks, it sees that the value is no longer decreasing to within the machine precision.
All I can suggest is to call fit_praxis from within a loop where you decide what to do next when it returns.
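A minimal sketch of such a loop, built around the fit_praxis call that fitparm.hoc itself uses (QUITVAL, MAXROUNDS and the use of opt here are assumptions pieced together from the earlier posts, not a tested recipe):

Code: Select all

QUITVAL = 1.5    // hypothetical target error
MAXROUNDS = 50   // give up after this many restarts
for i = 1, MAXROUNDS {
    // same call as in fitparm.hoc; each pass restarts praxis
    // from the current parameter values
    minerr = fit_praxis(opt.start.size, "call_opt_efun", &opt.start.x[0], "after_quad()\n")
    if (minerr <= QUITVAL) break
    // optionally perturb the parameters here, e.g. MulRunFitter[0].randomize(),
    // if randomize has been made public as suggested earlier in this thread
}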
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Programming Praxis' stop criteria

Post by hines »

This is beside the point, given your question, but I usually have better luck with praxis when I search in log space with respect to the args.
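As an illustration of what searching in log space means (a toy sketch, not code from NEURON): praxis varies the logarithm of the parameter while the error function exponentiates it, so the parameter stays positive and search steps are proportional to its size.

Code: Select all

objref lp
lp = new Vector(1)
lp.x[0] = log(1)  // start the search at p = 1
func efun() { local p
    // $1 = number of parameters, $&2 = pointer to the parameter array
    p = exp($&2[0])     // praxis works on log(p); the model sees p
    return (p - 5)^2    // toy error function with its minimum at p = 5
}
attr_praxis(1e-6, 0.5, 0)
minerr = fit_praxis(1, "efun", &lp.x[0])
print exp(lp.x[0])  // should come out close to 5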
mart

Re: Programming Praxis' stop criteria

Post by mart »

Thanks for the hints,

I understand that by looking at the commented lines in "src/scopmath/praxis.c" you get the basics of "attr_praxis", as summarized in the documentation on NEURON's website: http://neuron.yale.edu/neuron/static/ne ... ttr_praxis

I read it, and I understand most of it, but for some reason (at least on my computer, Debian 7 amd64) modifying the arguments of "attr_praxis" does not have the expected results.
A tolerance value of 0.0 is supposed to keep the optimization process continuously running; well, this does not happen. My MultiRunFitter returns after roughly 170 runs, no matter how small the tolerance value is (even if it is 0).

To write a script that "does wht I want" and then calls "fit_praxis" sounds like a great idea. To have something like that running in my computer could be very useful. I could for instance start a MultiRunFitter session for which the "praxis' by-default stop criteria" would set a point of return, but I would prevent this return by imposing my own stop criteria or, even better, I would ask for a "randomize" event and then I would start another optimization round... by alternating "randomize" events with optimization rounds it might be possible to overcome the problem of local minima to some extent.

My problem is that I really don't know where to start with this script.
When am I supposed to load this script? At the end of my main .hoc file (the one that loads nrngui.hoc, my raw model, the "parameters.hoc" file and finally the "session.ses" file)? How should I specify my own stop criteria in this script (something like

Code: Select all

if (opt.minerr<QUITVAL) stop_praxis()
)? How do I tell praxis to go back to the optimization process?

Can you please provide an example of such a script, so that I can better orient myself?

All the best,
Ulysses
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Programming Praxis' stop criteria

Post by hines »

Let me ask a question first. Do you consider QUITVAL to be greater than the minimum of the function you are trying to minimize? Also, what value for the function is fit_praxis finding as what it thinks is the minimum?
mart

Re: Programming Praxis' stop criteria

Post by mart »

Dear Michael L. Hines,

First of all, thanks a lot for your quick reply.

Let's see if I understood your questions correctly.

The minimum error reported by the MultiRunFitter is "11". If I am not wrong, this value is the squared difference between data and dependent variable treated as continuous curves. Since the variable to fit is Vm and my generators (experimental data) are voltage traces recorded in current-clamp mode, the error should be given in square millivolts. I hope that "minerr" refers to the concept of minimum error that I have in mind, and that it has the same units, because I set my QUITVAL to "1.5" expecting the optimization not to stop until I get an error value smaller than 1.5 (square mV).
what value for
the function is fit_praxis finding as what it thinks is the minimum.
I am not sure where to find the information to properly answer your question.

From this line in my "mulfit.hoc" file

Code: Select all

minerr = fit_praxis(opt.start.size, "call_opt_efun", &opt.start.x[0],"after_quad()\n")
I can try to guess that "call_opt_efun" is the function being minimized, i.e. the difference between data and dependent variable.

Best wishes,
Ulysses
hines
Site Admin
Posts: 1682
Joined: Wed May 18, 2005 3:32 pm

Re: Programming Praxis' stop criteria

Post by hines »

You answered my questions. You want fit_praxis to return when f(args) <= 1.5, but praxis is returning because it found what it thinks is the minimum, f(args) = 11.

Whether or not 11 is truly the local minimum, praxis is stuck at the bottom of what it thinks the function looks like at that local minimum.
Praxis makes some effort to solve the resolution valley problem, where the minimum lies along some slowly changing space curve and the function rises very steeply when you get off the curve. I believe it is pointless to try to force praxis to continue when its return criterion is satisfied. The only thing to do is start again from a new starting position and see if it finds the same minimum.
If the function looks like a flat table with a drain, praxis will not be able to find the drain, as every point on the table is a local minimum.
In that case you need to think in terms of simulated annealing or genetic algorithms.

I would recommend you experiment with a simple quadratic function in order to know the error units for that function. You can then transform them into the units you prefer.
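For instance, a self-contained check with a quadratic whose minimum value and location are known in advance (a sketch; the offset of 3 is there so the returned error is unmistakably the function value, not zero):

Code: Select all

objref pv
pv = new Vector(2)  // starting point (0, 0)
func quad() {
    // $&2 points to the parameter array; minimum value is 3 at (1, -2)
    return ($&2[0] - 1)^2 + ($&2[1] + 2)^2 + 3
}
attr_praxis(1e-6, 0.5, 0)
minerr = fit_praxis(2, "quad", &pv.x[0])
// minerr should come back close to 3, with pv near (1, -2)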

Anyway, looking at src/scopmath/praxis.c, I see that doublereal praxis(...) returns when 'goto L400' is executed, so you might experiment by commenting out the first two cases of that. Then praxis will never return until you set the global variable stoprun from within your function.
mart

Re: Programming Praxis' stop criteria

Post by mart »

Dear Michael L. Hines,

Your last message was really helpful.
Now I see that there is little to do once praxis decides that a set of parameters puts the function at a local minimum.
I tried to use as many exponential parameters as possible (my model has 4 free parameters, half of them are exp) and, of course, I tried to start the optimization procedure from different points within the parameter space... very different combinations of those 4 parameters gave me quite good minimum errors, actually quite close to the one reported in my last message. Up to now, 11 square millivolts seems to be my best.

The thing is that exploring the parameter space by hand can only be done in a very crude way. I don't have an infinite amount of time to spend on this part of my project (I am actually an experimentalist and, as you have already guessed, my knowledge of computational modelling is very limited).

I did read about simulated annealing and genetic algorithms. Unfortunately my knowledge of these two approaches does not go far beyond Wikipedia level. I think I understand the basic philosophy behind them and I would really like to use them in NEURON, but I don't know where to start. Is any implementation of these two methods available? Is there something already written, or is the only way to start from scratch?

Thanks a lot,
Ulysses