Thanks for your suggestions; overall a significant improvement!
These are my current parameters:
--gpu-engine 300-1105,300-1145,300-1150,300-1200 --gpu-powertune 5 --gpu-memdiff -150,-150,-150,-150
--gpu-engine 300-1150 --gpu-powertune 5 --gpu-memdiff -150,-150,-150,-150 --temp-target 70 --temp-cutoff 80 --auto-gpu --auto-fan
GPU 0: 73.0C 5220RPM | 614.0/631.3Mh/s | A:0 R:0 HW: 0 U:0.00/m I:10
GPU 1: 73.0C 5072RPM | 645.6/664.8Mh/s | A:0 R:0 HW: 0 U:0.00/m I:10
GPU 2: 73.0C 5085RPM | 644.5/666.5Mh/s | A:0 R:0 HW:60 U:0.00/m I:10
GPU 3: 69.0C 4113RPM | 678.8/678.7Mh/s | A:0 R:0 HW: 0 U:0.00/m I:10
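Quick sanity check on those numbers, summing the first figure of each "cur/avg Mh/s" pair with a throwaway script:

```python
import re

# Status lines pasted straight from the cgminer display above
status = """\
GPU 0: 73.0C 5220RPM | 614.0/631.3Mh/s | A:0 R:0 HW: 0 U:0.00/m I:10
GPU 1: 73.0C 5072RPM | 645.6/664.8Mh/s | A:0 R:0 HW: 0 U:0.00/m I:10
GPU 2: 73.0C 5085RPM | 644.5/666.5Mh/s | A:0 R:0 HW:60 U:0.00/m I:10
GPU 3: 69.0C 4113RPM | 678.8/678.7Mh/s | A:0 R:0 HW: 0 U:0.00/m I:10"""

# Grab the first number of each pair (the short-term reading)
rates = [float(m) for m in re.findall(r"(\d+\.\d+)/\d+\.\d+Mh/s", status)]
total_mhs = sum(rates)
print(f"{total_mhs:.1f} Mh/s = {total_mhs / 1000:.2f} Ghash/s")
```

which puts me around 2.58 Ghash/s at these throttled clocks.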
Rate has been as high as 2.9Ghash/s but I'm just cruising right now until my high volume case fans arrive in the mail. That will ideally bring me down a few degrees and then I can push a little more.
A couple more questions: with the --gpu-engine option, my GPUs never used the lower end of the selected range. Would there be any reason not to give a single value, e.g. --gpu-engine 1150?
Also, I'm curious how cgminer prioritizes command-line input vs. the config file. It seems that if a config file exists, the command-line parameters are ignored...? There's an easy workaround, but maybe I'm doing something wrong?
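For what it's worth, my understanding is that cgminer reads a JSON cgminer.conf where the keys mirror the long option names (minus the leading dashes), so my command line above would look roughly like this. I'm writing this from memory rather than the README, so the exact key spellings may be off:

```json
{
  "gpu-engine": "300-1150",
  "gpu-powertune": "5",
  "gpu-memdiff": "-150,-150,-150,-150",
  "temp-target": "70",
  "temp-cutoff": "80",
  "auto-gpu": true,
  "auto-fan": true,
  "intensity": "10"
}
```

If the config file really does take precedence, keeping everything in one file like this avoids the surprise.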
And a random thought: I haven't taken a peek into the source, but cgminer seems to have low-level control over the GPUs. Would it be trivial to output real-time GPU power consumption?
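On that power question: as far as I know, the driver interface cgminer talks to on these cards doesn't report wattage directly, so short of a wall meter, anything in software would be an estimate scaled from clock and voltage. A back-of-envelope sketch (the reference numbers below are invented placeholders, not from any datasheet):

```python
def estimate_gpu_power(engine_mhz: float, core_volts: float,
                       ref_mhz: float = 950.0, ref_volts: float = 1.175,
                       ref_watts: float = 200.0) -> float:
    """Scale dynamic power as P ~ f * V^2 from a known reference point.

    The ref_* defaults are hypothetical stock figures; real draw also
    includes static leakage, memory, and VRM losses, so treat the result
    as a rough trend, not a measurement.
    """
    return ref_watts * (engine_mhz / ref_mhz) * (core_volts / ref_volts) ** 2

print(estimate_gpu_power(1150, 1.175))  # overclocked engine at stock voltage
```

Undervolting helps quadratically in this model, which matches the usual advice to drop voltage before dropping clocks.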
Note that I wrote cgminer, so much of the GPU code is designed around the way I use it on my own hardware.
Oh yes, I did know this! I'm grateful for such helpful input! I've been mining solo but may switch to a pool to earn some BTC to tip with in the meantime!