Bitcoin Forum
Author Topic: The limiting factor in CPU mining?  (Read 2949 times)
Tukotih (OP)
Member
Activity: 70
Merit: 10
May 17, 2011, 01:34:35 PM
 #1

I'm somewhat aware of the reason why GPU mining is so much more efficient than CPU mining, but I just wonder whether it is impossible to make CPU mining more efficient.
So, where is the limiting factor in CPU mining? Do the instructions per clock make it impossible to get a higher hash rate with a CPU than with a GPU? Isn't there any way to run several hashes through the same instruction?

I mean, the CPU is able to execute far more complex instructions than a GPU; is it impossible to use this as an advantage?

Sorry if my question is stupid or illogical. But I can't stop thinking about it if I don't get an answer.

dikidera
Full Member
Activity: 126
Merit: 100
May 17, 2011, 01:37:57 PM
 #2

The CPU has 1/2/3/4/8 cores. The GPU has 1000+ stream processors.
Tukotih (OP)
Member
Activity: 70
Merit: 10
May 17, 2011, 01:40:23 PM
 #3

Quote from: dikidera on May 17, 2011, 01:37:57 PM
The CPU has 1/2/3/4/8 cores. The GPU has 1000+ stream processors.

Oh, that clears up a lot for me.
However, the question has not yet been answered.

dikidera
Full Member
Activity: 126
Merit: 100
May 17, 2011, 01:46:02 PM
 #4

You would have to overclock the CPU to get additional performance, but other than that... no, you can't make it faster, even if you optimize your program to the max.
phelix
Legendary
Activity: 1708
Merit: 1019
May 20, 2011, 10:59:28 PM
 #5

+1 to the question

I am wondering about this, too.


What are the advantages of a CPU if the GPU has a thousand processors?
Turix
Member
Activity: 76
Merit: 10
May 20, 2011, 11:24:05 PM
 #6

Quote from: phelix on May 20, 2011, 10:59:28 PM
+1 to the question

I am wondering about this, too.

What are the advantages of a CPU if the GPU has a thousand processors?

The stream processors on the GPU are not fully fledged cores. They often run reduced instruction sets, or they simply have a different architecture designed for working with graphical elements (vectors, floating-point maths, etc.), and at certain tasks they simply don't perform as well as a CPU core (and they quite often have a much higher error rate; anyone who has run a BOINC project on a GPU will have noticed this).

That being said, there has been a lot of work recently on CPU/GPU hybrids, and I believe the first generation of them is starting to kick off commercially this year.

Edit: Not to mention the lower clock speed of the stream processors. A GPU core will usually run at around 1 GHz, give or take, depending on the card, whereas modern CPUs run at between 2-4 GHz, and you can do some really silly things with the old single cores (8 GHz P4s spring to mind).

gigitrix
Hero Member
Activity: 630
Merit: 500
May 20, 2011, 11:54:19 PM
 #7

GPUs are designed for graphics. Graphics are pure maths. Bitcoins are pure maths (SHA256 hashing). And thus GPUs excel at this task as well, along with some other scientific/research workloads.
CPUs are designed to do pretty much everything, and they will never be able to compete (nor is it desirable that they do) when it comes to high-performance computing such as this scenario.
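To make the "pure maths" concrete, here is a minimal sketch (not code from this thread) of the inner loop every miner performs: grind nonces over an 80-byte block header and double-SHA256 each candidate. It assumes OpenSSL for the hashing; the header contents and the difficulty check are simplified placeholders.
Code:
/* Minimal nonce-grinding sketch. Build with: cc sketch.c -lcrypto (hypothetical file name). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(void) {
    unsigned char header[80] = {0};   /* version, prev hash, merkle root, time, bits, nonce (all placeholder zeros here) */
    unsigned char hash1[SHA256_DIGEST_LENGTH];
    unsigned char hash2[SHA256_DIGEST_LENGTH];

    for (uint32_t nonce = 0; nonce < 1000000; nonce++) {
        memcpy(header + 76, &nonce, 4);          /* the nonce lives in the last 4 bytes of the header */
        SHA256(header, sizeof(header), hash1);   /* first SHA-256 pass */
        SHA256(hash1, sizeof(hash1), hash2);     /* second pass: Bitcoin uses SHA256(SHA256(header)) */

        /* Toy difficulty check: the last two digest bytes being zero corresponds to
         * 16 leading zero bits of the byte-reversed hash that block explorers display. */
        if (hash2[31] == 0 && hash2[30] == 0) {
            printf("candidate found at nonce %u\n", nonce);
            break;
        }
    }
    return 0;
}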
Basiley
Newbie
Activity: 42
Merit: 0
May 21, 2011, 09:44:21 AM
 #8

Quote from: dikidera on May 17, 2011, 01:37:57 PM
The CPU has 1/2/3/4/8 cores. The GPU has 1000+ stream processors.

Those 1000+ "stream processors" are really 1000+/5 units in the VLIW5 case and 1000+/4 in the VLIW4 case, but they are superscalar, which means they are VERY computation-efficient, especially the VLIW5 GPUs.
Hawkix
Hero Member
Activity: 531
Merit: 505
May 22, 2011, 09:21:57 PM
 #9

The latest Intel Core CPUs support specialized instructions (AES-NI) to speed up the AES cipher. Well, if Intel's engineers added special instructions for speeding up SHA256, that would change the game a lot again!
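Feature bits like these are reported through CPUID: AES-NI is leaf 1, ECX bit 25, and a SHA extensions bit (leaf 7, EBX bit 29) was later defined for exactly the kind of instructions wished for above. A minimal sketch, assuming GCC or Clang on x86 and not part of the original post, of probing for both at runtime:
Code:
/* Probe CPUID for AES-NI (leaf 1, ECX bit 25) and the SHA extensions (leaf 7, EBX bit 29). */
#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("AES-NI: %s\n", (ecx & (1u << 25)) ? "yes" : "no");

    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        printf("SHA extensions: %s\n", (ebx & (1u << 29)) ? "yes" : "no");

    return 0;
}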

phelix
Legendary
Activity: 1708
Merit: 1019
May 23, 2011, 08:05:32 AM
 #10

Quote from: Hawkix on May 22, 2011, 09:21:57 PM
The latest Intel Core CPUs support specialized instructions (AES-NI) to speed up the AES cipher. Well, if Intel's engineers added special instructions for speeding up SHA256, that would change the game a lot again!

+1, that is very interesting.

From this benchmark I would estimate the possible gain at a factor of 10. Which still would not make them competitive...
interfect
Full Member
Activity: 141
Merit: 100
May 23, 2011, 08:35:43 AM
 #11

Quote from: phelix on May 20, 2011, 10:59:28 PM
+1 to the question

I am wondering about this, too.

What are the advantages of a CPU if the GPU has a thousand processors?

The individual cores on the CPU are faster than GPU cores, but mostly what the CPU has going for it is compatibility. All PC CPUs use the x86 architecture; Windows and other programs written for x86 will run on any CPU. GPUs use various proprietary architectures, which is why games need to recompile their shaders for your GPU on startup. Also, the current PC architecture requires a CPU to be "in charge" of the system, while the GPU is a co-processor: it doesn't get to decide, say, that the system should turn off now, or that the sound card should play music. Finally, parallel programmers are tremendously bad at their jobs. Most of the time they cop out and write single-core programs. So the extra cores of the GPU wouldn't help because the programmers only bother using one.
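To illustrate the "only bother using one core" point: a minimal sketch, not from the thread, of how a CPU miner splits the nonce space across threads so every core is busy. It assumes POSIX threads and OpenSSL; the header contents, thread count, and target are placeholders.
Code:
/* Split the nonce range across NUM_THREADS workers. Build with: cc sketch.c -lpthread -lcrypto (hypothetical file name). */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define NUM_THREADS 4   /* one per core, hypothetical */

struct job { uint32_t start, end; };

static void *grind(void *arg) {
    struct job *j = arg;
    unsigned char header[80] = {0}, h1[32], h2[32];

    for (uint32_t nonce = j->start; nonce < j->end; nonce++) {
        memcpy(header + 76, &nonce, 4);        /* nonce occupies the last 4 header bytes */
        SHA256(header, sizeof(header), h1);
        SHA256(h1, sizeof(h1), h2);            /* double SHA-256, as Bitcoin requires */
        if (h2[31] == 0 && h2[30] == 0)        /* toy target check */
            printf("candidate nonce: %u\n", nonce);
    }
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    struct job jobs[NUM_THREADS];
    uint32_t slice = 4000000 / NUM_THREADS;    /* small demo range */

    for (int i = 0; i < NUM_THREADS; i++) {
        jobs[i].start = i * slice;
        jobs[i].end   = (i + 1) * slice;
        pthread_create(&threads[i], NULL, grind, &jobs[i]);
    }
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}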
Basiley
Newbie
Activity: 42
Merit: 0
May 23, 2011, 10:11:04 AM
 #12

Quote from: Hawkix on May 22, 2011, 09:21:57 PM
The latest Intel Core CPUs support specialized instructions (AES-NI) to speed up the AES cipher. Well, if Intel's engineers added special instructions for speeding up SHA256, that would change the game a lot again!

Imagine what happens when Windows 8 comes out, as well as future Linux kernels, both with promised native GPGPU support.
The "performance gain" in that case would be bigger than one digit can reflect, let alone the "ease of use" and other aspects.
urtur
Full Member
Activity: 211
Merit: 100
May 23, 2011, 12:46:17 PM
 #13

Quote from: Basiley on May 23, 2011, 10:11:04 AM
Imagine what happens when Windows 8 comes out, as well as future Linux kernels, both with promised native GPGPU support.
The "performance gain" in that case would be bigger than one digit can reflect, let alone the "ease of use" and other aspects.

Performance of the network - yes.
Performance of already running machines - not necessarily. A new OS may even decrease it.
acamus
Newbie
Activity: 28
Merit: 0
May 24, 2011, 01:08:17 AM
 #14

Using a CPU to generate coins is like using a car to drive 3 feet, getting in and out and turning the car on and off, instead of just stepping there. Thousands of stream processors that can step but can't drive is just what you're doing with the GPU.
Basiley
Newbie
Activity: 42
Merit: 0
May 24, 2011, 03:22:27 AM
Last edit: May 24, 2011, 03:43:00 AM by Basiley
 #15

GPUs have a SIMD architecture (especially true for AMD) and are quite good with workloads that have implicit parallelism.
They are also superscalar and can do a lot of funny things per clock (especially AMD, and especially the 58xx series). Yep, like a CPU, but with way more features "in silicon" rather than in microcode.
Also, the onboard memory is about 12x faster: a 256-bit bus at 4000-5000 MHz effective clocks, with (usually) lower latency, gives a LOT more bandwidth than a 128-bit bus at 1333 MHz (rough arithmetic in the sketch below).
Quote from: urtur on May 23, 2011, 12:46:17 PM
Quote from: Basiley on May 23, 2011, 10:11:04 AM
Imagine what happens when Windows 8 comes out, as well as future Linux kernels, both with promised native GPGPU support.
The "performance gain" in that case would be bigger than one digit can reflect, let alone the "ease of use" and other aspects.

Performance of the network - yes.
Performance of already running machines - not necessarily. A new OS may even decrease it.

A bigger GPGPU market means more $/BTC in it, and thus more R&D into improving both the software and the hardware.
But basically I mean a greater number of useful GPGPU applications (in both meanings) for consumers/corporations/investors.
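The memory-bandwidth claim above can be sanity-checked with back-of-the-envelope arithmetic (illustrative part numbers, not exact specs, and not from the original post): the raw ratio comes out nearer 7-8x, with the lower latency presumably accounting for the rest of the quoted 12x.
Code:
/* Peak bandwidth = bus width in bytes * effective transfer rate. */
#include <stdio.h>

int main(void) {
    double gpu = 256.0 / 8.0 * 5.0e9;    /* 256-bit GDDR5 at ~5 GT/s effective -> ~160 GB/s */
    double cpu = 128.0 / 8.0 * 1.333e9;  /* 128-bit (dual-channel) DDR3-1333  ->  ~21 GB/s */
    printf("GPU ~%.0f GB/s, CPU ~%.0f GB/s, ratio ~%.1fx\n",
           gpu / 1e9, cpu / 1e9, gpu / cpu);
    return 0;
}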
error
Hero Member
Activity: 588
Merit: 500
May 24, 2011, 03:24:29 AM
 #16

Quote from: acamus on May 24, 2011, 01:08:17 AM
Using a CPU to generate coins is like using a car to drive 3 feet, getting in and out and turning the car on and off, instead of just stepping there. Thousands of stream processors that can step but can't drive is just what you're doing with the GPU.

It's more like walking across the country instead of taking an airplane. Cheesy

Basiley
Newbie
Activity: 42
Merit: 0
May 24, 2011, 03:45:00 AM
 #17

For example, converting a RAW image on the GPU takes about 12 seconds instead of two minutes on the CPU. Tongue
anewbie
Newbie
Activity: 52
Merit: 0
May 24, 2011, 04:20:40 AM
 #18

This analogy is inaccurate, but should still help.

GPUs are designed to do the same calculation on a set of numbers.  When a game you are playing wants to create shadows from a light source, it creates an array of numbers representing characteristics of each pixel on your screen.  Imagine an Excel spreadsheet with a number for each pixel that represents the color/reflectivity/etc. of every point.  This is loaded into the GPU.  Then the game assumes there is a light source at a location, with a set of qualities, shining on all of those points.  To determine how that light alters the appearance of each pixel on the screen, it runs the exact same calculation on each point.  Very simplistic example: it multiplies each cell on the Excel spreadsheet by 3.  All at once.

This is what GPUs are designed to do: run the exact same mathematical calculation on a large set of numbers.

For a CPU to do the same thing, it would need to set up all the points in memory, then run through them one by one, multiplying each by 3.  The CPU would do a single multiplication much faster than the GPU can give an answer for a single pixel/point.  But, the GPU effectively does them all at once and finishes long before the CPU has even started on the array of numbers.

That doesn't make a CPU worse than a GPU for calculations.  If the calculation you are doing is a series of complex mathematical operations on a single number that cannot proceed until you know the preceding answer, the CPU is going to kick the GPU's butt.

That being said, the calculations involved in generating a block apply the exact same calculation to a large array of numbers.  This is exactly the task a GPU was designed for.

A GPU is a specialist.  The CPU is a really fast generalist.  There are many, many things a CPU can do that a GPU cannot, but get into the area that a GPU is good at and it runs rings around a CPU.
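The "multiply every cell by 3" example above maps almost literally onto code. A minimal sketch, not from the thread: the CPU version is a plain loop over the array, while the GPU version (written here in OpenCL C, the kind of kernel language GPU miners of this era used) has no loop at all; the runtime launches one work-item per element, so thousands execute at once.
Code:
/* CPU version: one core walks the array element by element. */
void brighten_cpu(float *pixels, int n) {
    for (int i = 0; i < n; i++)
        pixels[i] = pixels[i] * 3.0f;
}

/* GPU version (OpenCL C kernel): the same body, but no loop.
 * get_global_id(0) tells each work-item which element it owns. */
__kernel void brighten_gpu(__global float *pixels) {
    int i = get_global_id(0);
    pixels[i] = pixels[i] * 3.0f;
}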
Basiley
Newbie
Activity: 42
Merit: 0
May 24, 2011, 04:33:08 AM
 #19

In recent years the edge between CPU and GPU has been diffusing, especially on NVidia chips (which suck at DP calculations and have scalar cores :-/), as GPUs are driven toward the MIMD workloads usual for a CPU. And the CPU remains just a "general OS back-end", i.e. a stub for interfacing the GPU with apps.