Well, guys and gals - as the era of GPU mining seems to be close to its logical end and the first ASIC miners make their way to their first happy customers, I've decided to publish a small piece of research on the factors that really affect modern GPUs' mining performance.

I had an idea that on a class of tasks well suited to parallelization, like the SHA256-based bitcoin mining algorithm, only two GPU parameters should affect the total GPU chip speed. I assumed these parameters are:

- the number of GPU-specific processors, the so-called 'shaders' (shader units);

- the clock frequency those shader units are able to run at.

So, it is possible to evaluate the total computing power **P** of every GPU as a synthetic number: the number of shader units **N** multiplied by the clock frequency of the GPU at 100 % load **F**:

**P = N x F**

- For example, the AMD Radeon HD 6670 has 480 shaders working at an 800 MHz (0.8 gigahertz) clock frequency in full-speed mode, i.e. at 100 % GPU load.

So, the computational power of the Radeon HD 6670 may be evaluated by the formula as (480 shaders x 0.8 GHz) = 384 'theoretical' units of computing power.

- In comparison, the AMD Radeon HD 7970, one of the most powerful GPUs on the market today, has 2048 shaders working at a 925 MHz (0.925 gigahertz) clock frequency at full GPU load.

And the power of the Radeon HD 7970 may be evaluated as (2048 shaders x 0.925 GHz) = 1894 units of computational power, so the AMD Radeon HD 7970 looks to be about 5 times more powerful a hardware solution than the AMD Radeon HD 6670.
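As a quick sanity check, the P = N x F calculation above fits in a couple of lines of Python (the function name is mine; the GPU specs are taken straight from the text):

```python
# Theoretical computing power P = N (shaders) x F (clock frequency in GHz),
# per the formula above.
def theoretical_power(shaders, freq_ghz):
    return shaders * freq_ghz

# AMD Radeon HD 6670: 480 shaders @ 0.8 GHz
print(theoretical_power(480, 0.8))     # 384.0 units
# AMD Radeon HD 7970: 2048 shaders @ 0.925 GHz
print(theoretical_power(2048, 0.925))  # 1894.4 units (~1894)
```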

The next question is: how can these 'theoretical units of power' be compared with real GPU productivity on a task such as, for example, bitcoin mining?

The answer is simple: draw a 2D plot where the vertical axis represents the real computing power of the GPU on the (for example) task of bitcoin mining, in megahashes per second (M, MHps), while the horizontal axis represents the theoretical computing power of the GPU calculated by the method above.

So, for every modern GPU we can get a simple pair of numbers and use them as 2D plot coordinates: theoretical power P and real power M.

Here is the final table:

and the plot:

As we can see from the graph above, the plot of M versus P is pretty linear, which shows a direct dependence between the calculated 'theoretical computing power' and the real-world performance measured in experiments on real hardware, in megahashes per second.
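To illustrate how a scale factor can be extracted from such a plot, here is a minimal Python sketch of a least-squares fit of M = k x P through the origin. The (P, M) pairs below are hypothetical placeholders standing in for the measured table above, used only to show the fitting step:

```python
# Fit M = k * P through the origin by ordinary least squares.
# The (P, M) pairs are HYPOTHETICAL placeholders, not real measurements.
data = [(384, 113.0), (768, 226.0), (1894, 556.0)]  # (theoretical P, measured MHps)

# Closed-form least-squares slope for a line through the origin:
# k = sum(P*M) / sum(P*P)
k = sum(p * m for p, m in data) / sum(p * p for p, _ in data)
print(round(k, 4))  # scale factor for these placeholder numbers
```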

As a result, you can now evaluate the mining speed/performance of every GPU of the AMD Radeon 6xx0 and 7xx0 series, and probably of the forthcoming 8xx0 series, via a simple linear equation:

**M (MHps) = 0.2935 x (Shaders x Frequency In GHz)**

where:

- **M** is the BTC mining speed in megahashes per second;

- **Shaders x Frequency In GHz** is the number of shaders on the GPU multiplied by their stock frequency in gigahertz (GHz);

- **0.2935** is a scale factor.
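The whole estimate fits in a few lines of Python; the function name is mine, but the formula and the 0.2935 scale factor are exactly as given above:

```python
SCALE = 0.2935  # empirical scale factor from the linear fit above

def estimate_mhps(shaders, freq_ghz):
    """Estimated BTC mining speed (MHps) for a Radeon 6xx0/7xx0-class GPU."""
    return SCALE * shaders * freq_ghz

# AMD Radeon HD 6670: 480 shaders @ 0.8 GHz
print(round(estimate_mhps(480, 0.8), 1))     # ~112.7 MHps
# AMD Radeon HD 7970: 2048 shaders @ 0.925 GHz
print(round(estimate_mhps(2048, 0.925), 1))  # ~556.0 MHps
```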

That's all, folks - and happy mining!