Using MRF to match values rather than curves

Using the Multiple Run Fitter, praxis, etc.

Using MRF to match values rather than curves

Post by shailesh »

I recently went through the two tutorials for the MRF (detailed and well laid out - perfect for beginners) and have one general query:
Most (all?) of the fitting seems to be of the type "minimize the total deviation of the simulated curves from the experimental curves". I was wondering if, and how, we could optimize with respect to emergent model properties such as input resistance, bursting frequency, etc.

In other words, I would like a custom function in my program to evaluate a certain model quantity (e.g. Rin) and then have the MRF use this function, at each run, to match a specified value (say, Rin should be 100 MOhm). How can I go about doing this using the MRF?

The second tutorial did have a function for evaluating the input resistance, but, I believe, it was never used as a fitness criterion via the MRF - rather, it was invoked manually to compare values when required.

Re: Using MRF to match values rather than curves

Post by shailesh »

I think I got the general idea of how to approach this from another post (viewtopic.php?f=23&t=501). Based on that, I have defined a function which computes the difference between my required value of Rin and the value the model produces at each run, and added it to the MRF as a "Fitness Primitive":

Code: Select all

target_Rin = 10	// MOhm

objref rinObj
rinObj = new Impedance()

func rin() { local model_Rin, error
	init()	// reinitialize the model to its resting state
	cell rinObj.loc(0.5)	// measure at the middle of the section named cell
	rinObj.compute(0)	// DC mode, freq = 0
	model_Rin = rinObj.input(0.5)	// input resistance in MOhm

	error = abs(target_Rin - model_Rin)
	return error
}
Would this be a good enough error value to return to the MRF, or can I improve it in some way?

Re: Using MRF to match values rather than curves

Post by ted »

I'd use the square of the difference because, as long as the difference is a continuous function of the adjusted parameters, the derivatives of the squared difference will be continuous near and at the minimum.
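
For concreteness, here is what the earlier rin() might look like with that change - a minimal sketch reusing rinObj, target_Rin, and the section cell from the post above (the name rin_sq is just for illustration):

Code: Select all

func rin_sq() { local model_Rin
	init()
	cell rinObj.loc(0.5)
	rinObj.compute(0)	// DC mode, freq = 0
	model_Rin = rinObj.input(0.5)
	return (target_Rin - model_Rin)^2	// squared difference
}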

Re: Using MRF to match values rather than curves

Post by shailesh »

Thanks for pointing that out. After seeing your post, I tested some variations of the error function and observed the following (it might be handy for others starting with the Multiple Run Fitter):

1> The error function must return a value >= 0. If the above rin() returns just the raw difference (without the abs()), the MRF cannot perform the optimization.
2> Using the above rin() as it is (absolute value of the difference), the optimization took 119 runs.
3> Following your suggestion and returning the square of the difference, it converged in just 54 runs!
(Initial values and parameter domains were the same in every case.)

So yes, the squared difference is certainly better than the absolute difference as a return value for the error function. But then I wondered why that should be so... I found relevant discussions on the Internet which shed some light on the topic: "Mean Squared Error (MSE) vs Mean Absolute Error (MAE)"

> MAE is a linear score that averages the magnitudes of the errors, weighting all points equally. In contrast, MSE weights the residuals so that outliers contribute disproportionately. If being off by 10 is just twice as bad as being off by 5, then MAE is more appropriate - else prefer MSE. Also, MSE is easier to optimize than MAE, because the derivative of the latter is discontinuous at zero error.

So, using MSE, we effectively amplify the errors of bad fits and move away from them faster towards better fits. (The derivative of e^2 is 2e, which grows with the size of the residual and shrinks smoothly to zero at the minimum, whereas the derivative of |e| is a constant +/-1 that jumps at zero.)
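
To make that weighting concrete, a quick check in hoc using the residuals from the quote above (5 and 10, arbitrary units):

Code: Select all

e1 = 5
e2 = 10
print "absolute: ", abs(e1), abs(e2)	// 5 and 10 -> twice as bad
print "squared:  ", e1^2, e2^2	// 25 and 100 -> four times as bad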

But a question naturally follows: why not go for higher powers? Why stop at squares? I found an interesting explanation here: https://www.khanacademy.org/math/probab ... ssion-line - see under the comments:

"the reason we choose squared error instead of 3rd or 4th power or 26th power of the error is because of the nice shape that squared errors will make when we make a graph of the squared error vs m and b. The graph will make a 3-d parabola with the smallest square error being at our optimally chosen m and b. Since this graph has only 1 minimum value it is really nice since we can always find this minimum, and the minimum will be unique. If we use higher exponents it would be harder to find the minimum value(s), and we could find possibly non unique minimums or only local minimums (values that look good compared to the neighbouring values but not the absolute best). So, in summary we used squared error because it gives us a minimum that is easy to find and is guaranteed to be the only minimum"

That seems to sum it up, though I would like to see the above graphically. If I do get down to it, I will come back and post.
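
In the meantime, here is a minimal sketch of how the two error shapes could be plotted in NEURON (assuming target_Rin from the earlier post, with x standing in for the model's Rin):

Code: Select all

objref gabs, gsqr
gabs = new Graph()	// |target_Rin - x| : a V with a kink at the minimum
gsqr = new Graph()	// (target_Rin - x)^2 : a smooth parabola
gabs.size(0, 2*target_Rin, 0, target_Rin)
gsqr.size(0, 2*target_Rin, 0, target_Rin^2)
gabs.beginline()
gsqr.beginline()
for (x = 0; x <= 2*target_Rin; x += 0.1) {
	gabs.line(x, abs(target_Rin - x))
	gsqr.line(x, (target_Rin - x)^2)
}
gabs.flush()
gsqr.flush()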