So Gigavps and anyone else: Did we ever decide on what a good share target was? 20? 24?
I'd say 10 shares per minute would be fine.
I was at 8 and that was too low... I moved it up to 16 and that still seems too low for some people. Of course, once the ASICs are out, I think 10 is absolutely reasonable, assuming the minimum hashrate on a given unit is 4.5 GH/s. Right now, though, I think 10 might be too low for GPU miners... not from a technical perspective, but from an emotional one: the feeling I get from people is that it drives their variance up too high for comfort.
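Just to put numbers behind the shares-per-minute discussion, here's a rough sketch of the math (function names are mine, not from any pool's code; the only assumption is the standard one that a difficulty-1 share takes 2^32 hashes on average):

```python
def shares_per_minute(hashrate_hs, share_difficulty):
    # A difficulty-1 share takes 2**32 hashes on average,
    # so expected share rate scales linearly with hashrate
    # and inversely with share difficulty.
    return hashrate_hs * 60 / (share_difficulty * 2**32)

def difficulty_for_target(hashrate_hs, target_spm):
    # Invert the formula above: what share difficulty gives
    # the desired shares-per-minute at this hashrate?
    return hashrate_hs * 60 / (target_spm * 2**32)
```

By that math, a 4.5 GH/s unit at difficulty 1 submits roughly 63 shares a minute, and you'd need a share difficulty of about 6.3 to bring it down to the 10/min target being discussed.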
I think we might need to go back to the drawing board and shoot for a variable difficulty based on server load, vs. a fixed getwork target... though that adds quite a bit of complexity. I'm also not sure what metric would best account for server load; there are many factors at play beyond what shows up as system load in top or the like.
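For what it's worth, one simple way to do variable difficulty is to retarget per worker based on its observed share rate rather than on server load. This is only a sketch of that idea (parameter names and the clamping policy are mine, not any pool's actual implementation):

```python
def retarget(current_diff, shares_in_window, window_minutes,
             target_spm=10.0, min_diff=1.0, max_step=4.0):
    """Scale a worker's share difficulty so its observed share rate
    drifts toward target_spm shares per minute.

    max_step limits how far difficulty can move in one retarget,
    to avoid wild swings on a noisy sample.
    """
    observed_spm = shares_in_window / window_minutes
    new_diff = current_diff * observed_spm / target_spm
    # Clamp the adjustment step, then enforce a pool-wide floor.
    new_diff = max(current_diff / max_step,
                   min(current_diff * max_step, new_diff))
    return max(min_diff, new_diff)
```

A worker submitting 20 shares/min at difficulty 1 would be bumped to difficulty 2; one submitting 1 share/min at difficulty 16 would be stepped down, but only as far as the clamp allows in one pass.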
I have a very general question about variable difficulty which has bugged me.
I've asked before, and I think the answer was to experiment and see how you make out. I did some of that, but earnings have been steadily dropping due to increased hash rates, bad luck, and increased difficulty, so it's hard to draw conclusions from the testing.
So the question is...
If you have multiple devices... should each device have its own worker or should all of your devices share a worker?
If the devices are FPGA/GPU, should the approach differ from what people will use once they receive ASIC hardware?
Well, the FPGA/GPU vs ASIC question really comes down to: at what GH/s speed does a given getwork target make the most sense? If you combine all your units into one worker, then a lower getwork target makes more sense, since your variance will "apparently" be reduced by the higher hashrate. If you split them all up, a higher target is better, for the same reason. It's mostly about perception... over a long enough period, it doesn't really matter from a functional standpoint.
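The "apparent" variance point can be made concrete: share arrivals are roughly Poisson, so the relative spread of a worker's share count over some window is 1/sqrt(expected shares). A quick sketch (again assuming 2^32 hashes per difficulty-1 share; the function names are just for illustration):

```python
import math

def expected_shares(hashrate_hs, share_difficulty, minutes):
    # Expected number of shares found in the window.
    return hashrate_hs * 60 * minutes / (share_difficulty * 2**32)

def share_count_cv(hashrate_hs, share_difficulty, minutes):
    # Poisson counts: coefficient of variation = 1/sqrt(expected shares),
    # i.e. the relative "noisiness" of what the worker stats page shows.
    return 1.0 / math.sqrt(expected_shares(hashrate_hs, share_difficulty, minutes))
```

So one combined 2 GH/s worker shows half the relative spread per hour that each of four separate 500 MH/s workers does, even though total earnings are identical either way, which is exactly why it's perception rather than function.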