Ethereum mining uses the ethash algorithm, which performs a Keccak hash (essentially SHA3-512), 64 random 128-byte DAG reads, then another Keccak hash. The Keccak hashing is compute intensive, while the DAG reads are memory-bandwidth intensive. All the public miners use a similar OpenCL implementation, and so have similar core/memory requirements. For AMD cards with a 256-bit memory interface (R9 380, Rx 470/480...), the optimal core-to-memory clock ratio works out to 0.56. So with a 1500MHz memory clock, the optimal core clock is 840MHz; increasing the core clock beyond 840 will not increase hashrate, and just uses more power. Similarly, an Rx 470 with a 1750MHz memory clock should have a core clock of 980MHz.
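The 0.56 ratio above can be sketched as a one-liner; the helper name is mine, and the ratio itself is an empirical figure for 256-bit AMD cards, not something derived from ethash:

```python
# Empirical core/memory clock ratio for 256-bit AMD cards (see above).
OPTIMAL_RATIO = 0.56

def optimal_core_clock(mem_clock_mhz: float) -> int:
    """Core clock (MHz) beyond which hashrate stops improving."""
    return round(mem_clock_mhz * OPTIMAL_RATIO)

print(optimal_core_clock(1500))  # 840, the R9 380 example
print(optimal_core_clock(1750))  # 980, the Rx 470 example
```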
One additional condition is that the GPU needs enough compute units to saturate the memory bandwidth. With the publicly available miners, that minimum is around 22-24 compute units, so cards like the R7 370 (16 CUs) do not max out their memory bandwidth.
This ratio is not a fundamental limit of the ethash algorithm, so a kernel with a highly optimized Keccak implementation (such as Wolf's private kernel) likely has a lower ratio and a lower minimum number of compute units required for optimal performance. In other words, while you won't see any miner get more than ~28Mh from an Rx 470 at 1750MHz (that's the memory-bandwidth ceiling), someone could release a miner that reaches its maximum hashrate at a core clock much lower than 980MHz, and therefore reduces power consumption.
Can confirm: a lot of trial and error over weeks got me to 1050 core / 1870 mem as the most efficient for my RX 470s, and that fits the 0.56 ratio.