I have a rig running Windows 10 made up of 4 RX 580s and 5 RX 570s (all BIOS-modded for maximum hashrate) plus 4 GTX 1060s on an ASRock H110 board with 8 GB of RAM. When I mine the Pirl currency using Claymore I get between 30 and 32 MH/s per AMD GPU. When I mine Ethereum using Claymore (with the exact same settings), for some reason I get 22-24 MH/s per AMD GPU, about 8 MH/s lower per card than it should be. The Nvidia GPUs give me the same hashrate on both Pirl and Ethereum.
I have tried mining with one AMD GPU at a time but still get 22-24 MH/s. I thought it might be that I didn't have enough virtual memory, so I bumped it up to 64 GB, but still no change. I have tried both the Crimson 11.1 and Adrenalin 12.1 AMD drivers (currently using the newest AMD driver, Adrenalin 12.1). The rig is stable mining Ethereum: running all 13 GPUs it hits 300 MH/s at 1500 watts from the wall. Mining Pirl with all 13 GPUs gives me 370 MH/s at 1650 watts from the wall, a power usage difference of about 150 watts.
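Just to sanity-check my own numbers (rough arithmetic only, assuming all 9 AMD cards lose about the same 8 MH/s and the Nvidia cards are unaffected, which matches what I see):

```python
# Rough sanity check: does the per-card AMD deficit explain the
# rig-level hashrate and power gap? 9 AMD cards assumed equally
# affected; the 4 Nvidia cards assumed unchanged, as observed.
amd_cards = 9                      # 4x RX 580 + 5x RX 570
per_card_deficit = 31 - 23         # ~31 MH/s on Pirl vs ~23 MH/s on ETH

measured_gap = 370 - 300           # rig total: Pirl vs Ethereum (MH/s)
predicted_gap = amd_cards * per_card_deficit

print(f"Predicted hashrate gap: {predicted_gap} MH/s")  # 72
print(f"Measured hashrate gap:  {measured_gap} MH/s")   # 70

power_gap = 1650 - 1500            # watts at the wall
print(f"Power gap per AMD card: {power_gap / amd_cards:.0f} W")  # ~17 W
```

The predicted 72 MH/s gap lines up almost exactly with the 70 MH/s I measure, so every AMD card seems to be losing the same amount.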
For some reason it seems like the AMD GPUs are being throttled when mining Ethereum, causing them to hit a lower hashrate and use less power. I haven't changed any settings in Claymore to over- or underclock the GPUs, so it's really odd that this would be happening. If I mine Ethereum using the exact same settings that I use for mining Pirl, I still get the 8 MH/s difference per card, even though both are running the same EthDcrMiner64.exe.
Has anybody experienced, or is anybody experiencing, a similar issue? I have a second, smaller rig with 2 RX 580s, and it gets 31 MH/s per card mining Ethereum with Claymore using the same settings as the rig that's only getting 22-24. This problem has me totally confused.
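In case anyone wants to compare, this is roughly how I pull the per-GPU rates off both rigs without sitting at each console. It's just a sketch against Claymore's JSON stats API, assuming the miner was left on the default -mport 3333 and that (if I remember right) the per-GPU figures come back in kH/s:

```python
# Query per-GPU hashrates from Claymore's remote management API.
# Assumes the default -mport 3333; adjust HOST to the rig's LAN IP.
import json
import socket

HOST = "127.0.0.1"
PORT = 3333

request = json.dumps({"id": 0, "jsonrpc": "2.0", "method": "miner_getstat1"})

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall((request + "\n").encode())
    response = json.loads(sock.recv(4096).decode())

# result[2] is "total_khs;shares;rejected"; result[3] is per-GPU kH/s
total_khs = response["result"][2].split(";")[0]
per_gpu_khs = response["result"][3].split(";")

print(f"Total: {int(total_khs) / 1000:.1f} MH/s")
for i, khs in enumerate(per_gpu_khs):
    print(f"GPU {i}: {int(khs) / 1000:.1f} MH/s")
```

Running it against both rigs while mining Ethereum is how I confirmed the small rig holds 31 MH/s per card while every AMD card on the big rig sits at 22-24.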
Any ideas or thoughts would be a big help.
Maybe the drivers, or the mixture of GPUs? That would be my guess, given that a system with 2 GPUs otherwise produces 31 MH/s per card (which is quite normal for this card with a modded BIOS). I have a rig with 10 RX 580 GPUs which I could only get working with drivers 17.11.3 and 17.11.4, but not with the AMD blockchain drivers...
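If you want to rule out a driver mix-up on the mixed rig, a quick sketch like this (plain Python over the stock wmic tool; note that Windows reports AMD's internal driver version string, not the Adrenalin release number) will show what each adapter actually loaded:

```python
# List every video adapter with the driver version Windows actually
# loaded for it. Useful on mixed AMD/Nvidia rigs to confirm all the
# AMD cards picked up the driver you think you installed.
import subprocess

output = subprocess.check_output(
    ["wmic", "path", "win32_VideoController",
     "get", "Name,DriverVersion,DriverDate", "/format:list"],
    text=True,
)

for line in output.splitlines():
    if line.strip():
        print(line.strip())
```

If the AMD cards don't all report the same DriverVersion, Windows has quietly loaded a different driver on some of them, and that's the first thing I would fix.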