In all cases the standard deviation of the "daily" average was comparable to the average share value, which I'm not exactly sure how to interpret. Does it even make any sense?

Wow, real difficulties, hash rates, etc. ... I applaud you. Kudos. I'm way too lazy to do that. I just use floating point shares to make calculations easier. It means that I can only get comparative efficiencies easily; if I want real-world numbers I have to go through and change values, which slows everything down.
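For anyone curious what the floating-point-shares approach looks like, here's a toy sketch of my own (not byteHopper): difficulty is normalised to 1, round lengths are drawn as exponentials, and a hopper on a proportional pool leaves each round at a fixed hop-off point. The `HOP_POINT` value and function name are mine, purely for illustration.

```python
import random

# Toy proportional-payout hopping sketch (illustrative only, not
# byteHopper): difficulty is normalised to 1, so round lengths and
# share counts are floating point fractions of a difficulty.
random.seed(3)
HOP_POINT = 0.435  # the classic proportional hop-off threshold

def hopper_efficiency(n_rounds, hop_point):
    """Payout per unit work for a hopper, relative to a 24/7 miner."""
    earned = 0.0
    worked = 0.0
    for _ in range(n_rounds):
        length = random.expovariate(1.0)  # round length, in difficulties
        mined = min(length, hop_point)    # hopper leaves at the hop point
        earned += mined / length          # proportional share of the block
        worked += mined
    # A full-time miner's payout per unit work is 1 by construction,
    # so this ratio is directly the comparative efficiency.
    return earned / worked

eff = hopper_efficiency(200_000, HOP_POINT)
print(eff)
```

With everything normalised, the number that comes out is directly comparable between strategies; plugging in real difficulties only rescales it.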

Anyway, mean ~ sd usually indicates a wide spread of values (depending on the median). I just ran byteHopper to get some results to post, using 3 pools, hash speed ratios of 0.01, 0.1 and 0.1, and no hop-off point. The first column is over twenty weeks, the second a daily estimate over twenty days, and the third over a single day.
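On the mean ~ sd question: that relationship is the signature of heavily right-skewed data. Round lengths are roughly exponential, and an exponential distribution has mean exactly equal to its standard deviation, so per-day earnings ending up with mean ~ sd isn't surprising. A quick illustration with synthetic numbers:

```python
import random
import statistics

# An exponential distribution has mean == sd, and its median sits
# well below the mean (at ln 2 of it) - the same pattern as in the
# one-day columns: mean ~ sd, median noticeably lower than the mean.
random.seed(1)
samples = [random.expovariate(1.0) for _ in range(100_000)]

mean = statistics.mean(samples)
sd = statistics.pstdev(samples)
med = statistics.median(samples)
print(mean, sd, med)  # mean and sd near 1.0, median near 0.69
```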

slow pool:

mean = 1.673466 1.422362 3.225511

sd = 0.5091813 0.7734413 5.912161

median = 1.841146 1.51612 1.590691

medium pool:

mean = 1.706857 1.747548 4.391248

sd = 0.2108296 0.668324 4.405089

median = 1.701362 1.634362 2.275186

fast pool:

mean = 1.573833 1.700709 2.903397

sd = 0.09222334 0.2546248 2.972965

median = 1.576645 1.745717 1.701818

combined:

mean = 1.646658 1.644637 3.895539

sd = 0.1879099 0.4111877 5.067339

median = 1.616157 1.587286 1.874025

As you can see, as the sampling interval decreases the standard deviation (and hence the variance) increases - slowly for the slow pools and more rapidly for the fast ones. If you want to model the variation you see as a miner, use days as the time interval. If you want as close to a theoretically correct result as possible, use longer runs - I usually use about a thousand loops of a million rounds for that.
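The interval effect is just averaging: each data point is an average over however many rounds fit in the window, and longer windows smooth out more of the round-to-round swing. A quick sketch with synthetic per-round earnings (the window lengths here are arbitrary):

```python
import random
import statistics

# Synthetic per-round earnings: averages over longer windows have a
# smaller spread, which is why the one-day column's sd is so much
# larger than the twenty-week column's.
random.seed(7)
per_round = [random.expovariate(1.0) for _ in range(140_000)]

def window_sd(values, rounds_per_window):
    """Standard deviation of non-overlapping window averages."""
    windows = [
        statistics.mean(values[i:i + rounds_per_window])
        for i in range(0, len(values), rounds_per_window)
    ]
    return statistics.pstdev(windows)

sd_daily = window_sd(per_round, 10)      # short windows: few rounds each
sd_long = window_sd(per_round, 14_000)   # long windows: many rounds each
print(sd_daily, sd_long)                 # the short window's sd is larger
```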

Now, a request: chartporn please!