I've seen this kind of question before.
With the type of getwork that just returns one block header, the ASIC has to search 2^32 nonces and on average finds 1 nonce worth a difficulty-1 share.
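A rough sketch of that search in Python (the 76-byte header prefix, little-endian nonce, and "32 zero bits" difficulty-1 check are the usual pool approximation; the function name is mine):

```python
import hashlib

def is_share(header76: bytes, nonce: int) -> bool:
    """True if this nonce yields a difficulty-1 share: the double-SHA256
    of the 80-byte header (76 fixed bytes + 4-byte nonce) has 32 leading
    zero bits. The digest is compared as a little-endian number, so those
    bits sit at the *end* of the byte string. This is the common
    approximation; the exact difficulty-1 target is marginally looser."""
    h = hashlib.sha256(hashlib.sha256(
            header76 + nonce.to_bytes(4, "little")).digest()).digest()
    return h[-4:] == b"\x00\x00\x00\x00"

# What the ASIC does per getwork: try all 2**32 nonces against this
# check; since each nonce passes with probability ~2**-32, on average
# exactly one share comes out of the whole range.
```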
A getwork uses about 1 kB of data. Sending the share back to the pool uses less iirc, about 0.25 kB.
For 1500 GHash/s this gives 1500G / 2^32 ≈ 350 getworks per second, so about 350 kB/s pool->client and 90 kB/s client->pool.
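The arithmetic, as a quick sketch (the 1 kB and 0.25 kB message sizes are the rough figures from above, not protocol constants):

```python
def getwork_bandwidth(hashrate_hs: float,
                      getwork_bytes: int = 1024,  # ~1 kB per getwork (rough)
                      submit_bytes: int = 256):   # ~0.25 kB per share (rough)
    """Back-of-the-envelope bandwidth for plain getwork: one request per
    2^32 hashes, and on average one share submission per 2^32 hashes."""
    work_per_s = hashrate_hs / 2**32
    down = work_per_s * getwork_bytes   # pool -> client, bytes/s
    up = work_per_s * submit_bytes      # client -> pool, bytes/s
    return down, up

down, up = getwork_bandwidth(1500e9)
print(f"{down/1024:.0f} kB/s pool->client, {up/1024:.0f} kB/s client->pool")
```

For 1500 GH/s that works out to roughly 350 kB/s down and just under 90 kB/s up, matching the figures above.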
For getwork there is already rollntime, which tells the client it may change the time field in the block header.
So with a rollntime value of 10 the client gets 10 times as much work out of each getwork, and the 350 kB/s pool->client drops to 35 kB/s.
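The effect on the downstream traffic is just a division (function name is mine, for illustration):

```python
def with_rollntime(pool_to_client_kbps: float, ntime_range: int) -> float:
    """Rolling the timestamp gives ntime_range units of work per getwork,
    so the pool->client traffic divides by that factor; the client->pool
    share traffic is unchanged, since the hashrate is the same."""
    return pool_to_client_kbps / ntime_range

print(with_rollntime(350.0, 10))  # 350 kB/s with rollntime 10 -> 35 kB/s
```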
If the client->pool traffic gets too high, it is possible to raise the difficulty for a valid share above 1.
That way the client finds fewer shares, but each share is worth more, so on average it is still the same.
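Sketching that trade-off (the 0.25 kB submission size is the rough figure from above):

```python
def share_traffic(hashrate_hs: float, share_difficulty: float,
                  submit_bytes: int = 256) -> float:
    """Client->pool bytes/s. A difficulty-D share takes on average
    D * 2^32 hashes to find, so raising D divides the share rate by D;
    each share then counts D times as much, so expected payout per
    second is unchanged."""
    shares_per_s = hashrate_hs / (share_difficulty * 2**32)
    return shares_per_s * submit_bytes
```

For example, difficulty-8 shares cut the ~90 kB/s upstream for 1500 GH/s down to about 11 kB/s.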
There are at least 2 new protocol proposals, but I can't remember their names so I can't find them right now.
One of them iirc lets the client build the block header itself, uses a higher difficulty for shares, and has something better than the current longpoll to prevent stales.
The other one I haven't read yet, but I think it works along the same lines.