I use Slush's pool, so this question may belong in that thread. However, if it is common to all mining programs, or if it is on the client side (my end) then I suppose it deserves its own thread.
It is related to the pool. I'll add the next server today, which should fix the problem.
I actually thought of this too - in my new quest of mining, I've had a fairly sizeable percentage of "invalid/stale" blocks - roughly one invalid for every 4 or 5 good ones. And I do not have a slow or high-latency connection; it's fiber, and I get excellent pings just about everywhere. I also have 50+ connections in Bitcoin at any given time.
There is absolutely no way that, each of those times, someone solved a block a fraction of a second before me. Perhaps every time Bitcoin receives a new transaction, that too invalidates all getwork jobs in progress, but that's just a hunch, not a proven fact.
The thought occurred to me that a certain percentage of hashes is wasted simply because the work changed after it was grabbed. I could probably mitigate that by requesting work more often.
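Here's a quick back-of-envelope simulation I put together to convince myself of that (my own toy model, not anything from the pool's code): if the work a miner holds gets invalidated at random times, then any hashing between the invalidation and the next getwork refresh is wasted, so a shorter refresh interval should shrink the wasted fraction.

```python
# Toy model: invalidation events arrive at random; the miner only picks up
# fresh work at multiples of refresh_interval, so hashing between an
# invalidation and the next refresh is stale. The 60-second mean event rate
# is an assumption, not a measured number.
import random

def stale_fraction(refresh_interval, invalidations, horizon):
    """Fraction of hashing time spent on stale work."""
    wasted = 0.0
    for t in invalidations:
        # stale from the invalidation until the next scheduled refresh
        next_refresh = ((t // refresh_interval) + 1) * refresh_interval
        wasted += min(next_refresh, horizon) - t
    return wasted / horizon

random.seed(42)
horizon = 100_000.0  # seconds of simulated mining
# work changes roughly every 60 s on average (pool-side changes, not just blocks)
t, events = 0.0, []
while t < horizon:
    t += random.expovariate(1 / 60.0)
    if t < horizon:
        events.append(t)

for interval in (5, 15, 30):
    print(interval, round(stale_fraction(interval, events, horizon), 4))
```

Roughly, the wasted fraction comes out near (refresh interval / 2) divided by the mean time between work changes, so cutting the interval from 30 s to 5 s cuts the waste proportionally.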
But if I were running a pool, and even five to ten percent of my output was lost to invalid or stale blocks, all while I was still paying out "shares" on difficulty-1 hashes that couldn't be redeemed even if they were valid... that would mean a net loss, never mind variance.
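The arithmetic there is simple enough to spell out (my hypothetical numbers, not the pool's actual figures): the stale fraction comes straight off the pool's revenue while the share payouts stay the same.

```python
# Hypothetical pool economics: every submitted solution was credited via
# difficulty-1 shares, but the stale fraction of solutions earns nothing.
blocks_found = 100        # solutions submitted by the pool (made-up number)
stale_rate = 0.10         # 10% rejected as stale/invalid
reward = 50.0             # BTC block reward at the time
earned = blocks_found * (1 - stale_rate) * reward
paid_out = blocks_found * reward  # shares were credited as if all counted
print(earned - paid_out)  # shortfall, before any pool fee
```

With those numbers the pool is 500 BTC short, so the operator eats the stale rate unless stale shares are rejected or the fee covers it.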
So, the way I see it, it's reasonable to reject difficulty-1 hashes if they would also be rejected were they to meet the actual difficulty and form a block. If that's a problem with the Bitcoin client, then it probably affects everybody the same, not just me and not just the pool.
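The check I'm suggesting could be as simple as this (a hypothetical sketch of pool-side logic, not what any pool actually runs): compare the previous-block field of the submitted work against the current chain tip, and reject the share if they differ, since a full solution built on that work would be orphaned anyway.

```python
# Hypothetical pool-side share check: a share built on an outdated
# previous-block hash is rejected, because even a full-difficulty solution
# on that work would not extend the current chain.
def accept_share(share_prev_hash: str, current_tip_hash: str) -> bool:
    """Credit a share only if it was built on top of the current tip."""
    return share_prev_hash == current_tip_hash

current_tip = "0000000000a1b2c3"   # placeholder tip hash, not a real block
fresh = accept_share(current_tip, current_tip)      # built on the tip
stale = accept_share("0000000000d4e5f6", current_tip)  # built on old work
print(fresh, stale)
```

A real pool would also need to handle the brief window right after a new block arrives, where honest miners haven't received fresh work yet; whether to credit those borderline shares is a policy choice.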