Roundoff errors can potentially affect any floating point calculation that involves a nonzero fractional part, especially when binary arithmetic is applied to decimal numbers--because any value that is not a finite sum of integer powers of 1/2 is represented in binary by a nonterminating string of 0s and 1s. (Financial calculations often use "binary coded decimal" encodings that, at the cost of increased storage, allow exact representation of decimal fractions.) So unless your particular installation of NEURON was built with a BCD math library, you will definitely see similar slippage with dt = 0.01.
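You can see this in any language that uses IEEE double precision arithmetic--this is plain Python, nothing NEURON-specific. 0.01 is not stored exactly, and the error accumulates when dt = 0.01 is added repeatedly:

```python
from fractions import Fraction

# The double nearest to 0.01 is not exactly 1/100, because 1/100
# is not a finite sum of integer powers of 1/2.
print(Fraction(0.01) == Fraction(1, 100))   # False

# The classic symptom of the same representation problem:
print(0.1 + 0.2 == 0.3)                     # False

# Repeatedly adding dt = 0.01 lets the representation error
# accumulate, so t drifts slightly from the exact decimal values
# one would expect.
t = 0.0
for _ in range(1000):
    t += 0.01
print(t)   # very close to, but not guaranteed to be exactly, 10.0
```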
"Adaptive integration" is shorter (the main reason I use the term), and it is also more descriptive than "variable time step integration" of what NEURON's adaptive integrators actually do: they change both dt and the order of integration as necessary to satisfy an error criterion.
normrand works with fixed or variable time step integration. The chief problem with using random sequences to emulate noise during adaptive integration is that, if a new value is drawn at every time step, the power spectrum of the noise will vary with dt (more power at high frequencies when dt is short, and more power at low frequencies when dt is long). That would be just silly, because the statistical properties of the noise signal would not be under the explicit control of the modeler, but would instead be an accident of the stability and accuracy of the set of equations being solved.
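To make the dt dependence concrete, here is a plain-Python sketch (hypothetical function name, nothing NEURON-specific). If an independent Gaussian value of fixed variance is redrawn at every step, then the total noise delivered over a fixed window--the time integral of the piecewise-constant noise--has variance proportional to dt, so merely refining the step changes how hard the noise drives the model:

```python
import random

def integrated_noise_variance(dt, T=1.0, sigma=1.0, trials=2000, seed=42):
    """Variance of the integral over [0, T] of piecewise-constant noise
    that is redrawn (standard deviation sigma) at every step of size dt.
    Analytically this is sigma^2 * dt * T, i.e. it shrinks as dt does."""
    rng = random.Random(seed)
    nsteps = int(round(T / dt))
    samples = []
    for _ in range(trials):
        total = sum(rng.gauss(0.0, sigma) for _ in range(nsteps)) * dt
        samples.append(total)
    mean = sum(samples) / trials
    return sum((x - mean) ** 2 for x in samples) / trials

v_coarse = integrated_noise_variance(dt=0.01)   # ~ 0.01
v_fine = integrated_noise_variance(dt=0.001)    # ~ 0.001, i.e. 10x less
```

So a "noise" signal implemented this way gets weaker (per unit bandwidth) whenever the integrator shrinks dt--exactly the accident of the solver that the modeler should not have to tolerate.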
A possible workaround is a noise mechanism that draws a new sample at a predetermined, fixed interval. Such a mechanism is possible to implement, but the integrator must be re-initialized every time a new sample is drawn. This imposes a computational cost, so if the noise bandwidth must be high (samples must be drawn frequently), you're better off just using fixed time step integration, because the adaptive integrator will never be able to increase dt far enough to save run time. However, your code suggested that you were OK with new random samples at 0.1 ms intervals, which is 10 x your fixed dt, so you might get a small speedup (maybe twice as fast).
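Here is a sketch of the sample-and-hold idea in plain Python, with made-up names--this is not NEURON's actual API; in NEURON the resampling would be triggered by self-events in an NMODL mechanism:

```python
import random

class SampleAndHoldNoise:
    """Noise source that draws a new Gaussian value only at fixed
    'interval' boundaries, so its statistics are independent of how
    finely the integrator subdivides time.  (Illustrative sketch;
    hypothetical class, not part of NEURON.)"""

    def __init__(self, mean, stdev, interval, seed=0):
        self.mean = mean
        self.stdev = stdev
        self.interval = interval
        self.rng = random.Random(seed)
        self.count = 0      # index of the next sampling time
        self.value = 0.0

    def at(self, t):
        """Return the held value at time t (call with nondecreasing t)."""
        # Compute sampling times as count * interval rather than by
        # repeated addition, to avoid the very roundoff drift that
        # started this thread.
        while t >= self.count * self.interval:
            self.value = self.rng.gauss(self.mean, self.stdev)
            self.count += 1
        return self.value

noise = SampleAndHoldNoise(mean=0.0, stdev=0.1, interval=0.1)
# Every query inside one 0.1 ms sampling interval returns the same
# value, no matter how many integration steps fall inside it.
```

Because the value changes only at known times, the integrator can be told exactly when the discontinuities occur and re-initialize only then--which is what the self-event approach buys you.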
But the ability to use adaptive integration isn't the reason to revise the mechanism. The reason is to eliminate the roundoff error problem, and you get that benefit merely by using self-events to control the sampling times.
Can I ask what you mean by "Mean should be nA2"?
That's my mistake--a mismatch between what was in my head and what I typed. Mean, of course, should be in nA. Variance should be in nA^2. (Now you see one of the reasons why I check my code for units inconsistencies.)