Bitcoin Forum
Author Topic: Noob question about hardware: Why so ATI dominant?  (Read 4551 times)
Fingler29 (OP)
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
December 04, 2011, 08:36:56 AM
 #1

I am not a programmer.


but I do know that such a massive, 100% consistent discrepancy in performance between ATI and Nvidia hardware with the existing mining software is not just:


"DURRR ATI IS BETTAR BUCUZ THEY HAZ BETTER DESIGN!"


that's bullshit.


what is it?  was the software developed FOR ATI hardware?  were the developers fed up with Nvidia's PhysX monopolistic microsoft-esque bullshit?  is Nvidia's SDK clunky and hard to work with?

what?

there must be a REAL REASON.  Because I can tell you right now that even the best programmers and computer scientists will never be able to explain why one hardware design is better than another just from how their software happens to run on it.


especially when we are talking about what amounts to the absolute simplest mathematics on the planet.


it would take at least one "fresh," very intelligent Ph.D. in EE/solid state physics to even begin to expound upon that subject, and probably more than one.


the discrepancies I have seen between ATI and Nvidia with mining look like the discrepancies I have seen between PhysX-based benchmarks in the past.  I think that is the real situation.  I hope my implication there makes sense to everyone.
worldinacoin
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500



View Profile
December 04, 2011, 09:06:29 AM
 #2

It doesn't really need a PhD

https://en.bitcoin.it/wiki/Mining_hardware_comparison
P4man
Hero Member
*****
Offline Offline

Activity: 518
Merit: 500



View Profile
December 04, 2011, 09:55:03 AM
 #3

Sounds like we have another nvidia fanboy who thinks all bitcoin miners are amd fanboys.

It's really simple: AMD cards have a different architecture that happens to be massively better for bitcoin mining. They have many more, but much simpler, shaders than nvidia cards (which have fewer, but more complex, ones). For games these different approaches tend to be competitive; for floating point math nVidia is typically far ahead; but for simple integer math like bitcoin mining, AMD is the obvious choice.

Fiyasko
Legendary
*
Offline Offline

Activity: 1428
Merit: 1001


Okey Dokey Lokey


View Profile
December 04, 2011, 09:57:00 AM
 #4

I smell someone who loves nvidia cards but can't understand why they fucking suck at mining.
Here's the thing:

The MAJORITY (and I seriously dare anyone to challenge me on this) of newer graphics that look "pretty" (and by newer I mean the higher-end DX9 stuff and all the DX11) are done using shader values.

AMD cards do not have a "shader clock" and flat-out don't need one, because they pack a lot more stream processing cores to do the "shadey work" instead.

Whereas nvidia knows that they do not NEED nearly as many stream processor cores if VIDEO GAMES are mostly programmed around shading-style graphics.

AMD cards have more power. They do. They are stronger. They can fit A LOT more power in that "spot" where the "shader clock" doesn't exist.
Nvidia cards have more programming. They do. Sorry, I'm an AMD fanboy FOR LIFE, and I'm sorry to say I feel Nvidia has totally rigged the market with all these "new styles of graphics rendering!" (ever since DX10.1, nearly all new graphic "polish" tech is "shadey").

Bitcoins need no graphics. They are nothing but math, and on the computer that's faster and has more cores, you get more work done, which results in more bitcoins mined.

Here's a perfect example:

5830
321 Mhash/sec
1000 core clock
1120 stream processing cores
Cost? About $139

While the BEST nvidia card (correct me if I'm wrong), the GTX 590:
193.1 Mhash/sec
1215 "clock"
512x2 stream processing cores
Costs about $749.99

Now clearly, the 5830 is faster than the best of Nvidia's cards AT CRUNCHING ONE TYPE OF NUMBER, and that one type of number is what you need to mine bitcoins.

Could someone chime in and remind me what the name of the calculation is?
WILD GUESSES AS TO THE NAME:
Floating point...
Integer...
I don't know... what's that damn name for it?

Oh, just another note: those stats were taken off the mining hardware comparison chart.
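Taking the figures above at face value, the price/performance gap is easy to put numbers on. A quick sketch (the Mhash/s rates and prices are the ones quoted in this post, not independently verified):

```python
# Mhash/s and street price as quoted above (not independently verified).
cards = {
    "Radeon HD 5830":  (321.0, 139.00),
    "GeForce GTX 590": (193.1, 749.99),
}

for name, (mhash, price) in cards.items():
    # Hashing throughput you get per dollar spent on the card itself
    print(f"{name}: {mhash / price:.2f} Mhash/s per dollar")
```

By this metric the 5830 delivers roughly nine times the hashing throughput per dollar of the GTX 590, before electricity costs are even counted.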

Gabi
Legendary
*
Offline Offline

Activity: 1148
Merit: 1008


If you want to walk on water, get out of the boat


View Profile
December 04, 2011, 10:38:20 AM
 #5

Nvidia fanboy spotted  Cheesy


Fingler29 (OP)
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
December 04, 2011, 12:37:27 PM
Last edit: December 04, 2011, 12:51:32 PM by Fingler29
 #6

thus far the answer has been "because the charts say so"
remember:  all math, all simulations are the same.  computational fluid dynamics looks the same as quantum mechanics at the most basic level of the code.


integrals, derivatives, exponentials.... they are all just represented as sums and differences of polynomials.

the math of bitcoin mining is identical between both platforms (crucially, however, the way in which the math is "sent" to and "read from" the GPU may not be the same, amongst a host of other potential differences, but I digress here).

what I want is a satisfactory, logically reasoned answer, supported with proof from spec/tech sheets from reliable sources (e.g. corroborated by AnandTech, THG, and/or the manufacturers themselves, to ensure that "claims" are genuine and not just fluff).  Something like:

1) the AMD boards have more cores
2) those cores are individually clocked higher than the Nvidia boards (and perhaps they can be threaded)

3) thus, because the math is so simple, the AMD boards can compute more calculations per second AND there are more of those cores, so those numbers literally multiply to add up to much higher performance.
4) Memory and bus architecture are irrelevant because of the relatively small numbers involved, and the fact that the total data involved in a complete calculation is minuscule in comparison to the memory available.


THAT would be a sound justification of why the HARDWARE is the true source of the discrepancy, and not the software implementation.
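Point 4, at least, is easy to sanity-check: a mining work unit is tiny. The sketch below builds a dummy 80-byte block header (all field values are placeholders for illustration, not a real block) and double-SHA-256 hashes it the way miners do:

```python
import hashlib
import struct

# A Bitcoin block header is exactly 80 bytes; mining is just hashing it
# twice with SHA-256 while varying the nonce.  Field values here are
# placeholders, not a real block.
header = struct.pack(
    "<L32s32sLLL",
    2,             # version
    b"\x00" * 32,  # previous block hash
    b"\x00" * 32,  # merkle root
    0,             # timestamp
    0x1D00FFFF,    # difficulty bits
    0,             # nonce
)
digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
print(len(header), len(digest))  # 80 bytes in, 32 bytes out
```

The whole working set fits in a handful of registers, which is why memory size and bandwidth barely enter the picture.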



is that the case?  I don't know.  I just made those up as possible explanations that did NOT include the software's particular development as a POSSIBILITY for the discrepancy.


despite what people say, hardware is often less of a tie breaker than you might think.

that IS why PhysX and other market SOFTWARE-BASED norms have given nvidia the edge (and probably Intel for that matter, and probably also ARM in the field of mobile).  It is also a lot easier to FORCE software down the throats of an industry than it is to force a particular hardware architecture (I am referring to Nvidia here, not AMD; I mean Nvidia forcing their software/SDK down the throats of game developers, not the debate above about bitcoin mining software).

my point here is simply that the fundamental difference at the heart of this discrepancy is more likely software-based than hardware-based, simply because there are more ways in which the software itself can cause differences in performance.

for fuck's sake, most software developers don't have a fucking clue how the hardware really works anyway.  It's a lot harder for a software developer to maximize their hardware by themselves through tinkering and testing than it is for the hardware developer to simply give out tools that make it easier for developers to max out the hardware, which gives whichever company offers the better deal the edge.


the adage "software lags behind hardware" comes from the fact that it's actually a lot fucking harder to write sophisticated software than it is to decrease the gate size on a silicon wafer (up to a point, of course; a point that we have obviously reached).  A graduate student working by himself can pull that off with minimal support from faculty, whereas it usually takes teams of seasoned veteran programmers to churn out high-quality software.

in terms of "fanboyism"

1) I don't play video games.  I would rather not expound upon my opinions of people who try to justify spending lots of money to play video games at higher frame rates and to make the grass on the simulated ground look more realistic.

2) I have a 6-year-old HP laptop with a single-core Intel Core Duo (not even a Core 2 Duo), 2 GB of memory, and onboard video.  I use 2 of my 4 USB ports to juggle external hard drive enclosures for a stack of equally old 3.5" internal drives ranging from 250 to 500 GB.  I'm not a gamer.  I also have a 1st-generation (literally the first generation of the first generation) Xbox that I won from Taco Bell.  It has an Xecuter 3 mod chip and I use it as a media center with a 1st-generation 1080p Samsung DLP (as in: the first 1080p DLP they ever sold), with audio piped through a 13-year-old Onkyo receiver.  Point is: I don't pay attention to what is new.  I don't care either.  Everything I have is sufficient for my needs.  When 4K TVs and boxes start coming out, I will upgrade.

I don't give a flying fuck about fanboyism.  Except for my car.  Fuck yeah, Nissan.  Fuck all y'all Euro, 'merikan, Australian, and other alternatives-to-JDM shit.  Nissan reigns supreme.  Fuck rotaries (Mazda), fuck Yamaha-designed engine components (Toyota), fuck lol-crank-walk (Mitsubishi), and fuck diesel-engine-sounding broke-transmission bulbous monstrosities (Subaru).

haha, I'm joking.  But maybe I'm not.

no, I really am.  Anyone who likes cars is a friend, even if you like to drive around with a live rear axle or pushrods.
worldinacoin
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500



View Profile
December 04, 2011, 12:45:10 PM
 #7

Quote
thus far the answer has been "because the charts say so" [...]

I have used both ATI and Nvidia, and the charts corroborate what I have seen.  So have the rest of us.  If you won't accept anything, goodbye.  You don't even need to bother asking, since you can't accept anything other than what you already think.
Fingler29 (OP)
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
December 04, 2011, 12:58:24 PM
 #8

Quote
I have used both ATI and Nvidia, and the charts corroborate what I have seen.  So have the rest of us.  If you won't accept anything, goodbye.  You don't even need to bother asking, since you can't accept anything other than what you already think.


I didn't ask for measurements.  I have Google as well; I can type in "which GPUs are best for BTC mining."


I asked a question that clearly doesn't belong in the noob section, but I have no alternative place to post it.



Perhaps the entire website is not the place to post such a question.



I am asking for an answer that is based on genuine information, not speculation.  I highly doubt there are many electrical engineers on this board who can explain why the simple integer math associated with bitcoin mining runs more efficiently (in terms of time, aka hashes/sec) on ATI boards than it does on Nvidia boards.


realistically I will almost certainly never get an answer here.



the answer that "simple math is done quicker because there are more cores, which were designed to compensate for their lack of complexity when dealing with the more complex math associated with graphics" is nonsensical on a number of levels.


all math is simple at the base level.  The biggest determining factors are how much memory there is, how quickly the memory communicates with the processor, and how long the job is.


a short job and a short overall calculation time pretty much eliminate most of the differences in hardware.


P4man
Hero Member
*****
Offline Offline

Activity: 518
Merit: 500



View Profile
December 04, 2011, 01:22:17 PM
 #9

What on earth makes you think all math is the same, or that it is memory-bound?
For a start, there is quite a fundamental difference between floating point and integer math. The simple truth is that AMD cards have 2-3x more ALUs than nvidia cards and can therefore process 2-3x as many 32-bit integer calculations per clock. Another simple truth is that SHA hashing (like many similar cryptographic functions) is greatly sped up by rotate-right register operations. AMD cards have a dedicated 1-cycle hardware instruction for this; nvidia does not, and requires 3 clock cycles.

Memory bandwidth and capacity are absolutely a non-issue for hashing. Miners clock the memory as low as they can to save power, and even to speed up mining (!), and bitcoin mining takes a few megabytes of vram. Anything more is wasted for mining.

Please get a clue.
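The rotate-right point above is the crux. In SHA-256, the sigma functions are built almost entirely from 32-bit rotates, and a card without a native rotate instruction has to emulate each one with two shifts and an OR. A sketch in plain Python (names follow the SHA-256 spec; this is an illustration, not a miner):

```python
# 32-bit rotate right: a single instruction on the AMD cards discussed
# above, but two shifts plus an OR (three ops) where no native rotate
# instruction exists.
def rotr32(x: int, n: int) -> int:
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

# Sigma0 from the SHA-256 compression function: three rotates XOR'd.
def big_sigma0(x: int) -> int:
    return rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22)

print(hex(rotr32(0x80000000, 1)))  # 0x40000000
```

Since the compression function executes dozens of these rotates per round, a 3x cost per rotate adds up quickly across millions of hashes per second.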

Gabi
Legendary
*
Offline Offline

Activity: 1148
Merit: 1008


If you want to walk on water, get out of the boat


View Profile
December 04, 2011, 02:27:54 PM
 #10

That's why we have a newbie section: to keep trolls from going trolling elsewhere, by forcing them to troll here.

And, in fact, we have a troll here.

Dear troll, here is a protip: the computing behind bitcoin mining is NOT a secret, and it is indeed very simple: SHA-256 hashing. It runs faster on ATI because, uh, guess what? ATI can compute these things FASTER than nvidia, due to its hardware.

Yes, that's all. Nvidia just sucks for mining.

You speak about software implementation. As I said, the software is very simple; go and try to make it run faster on Nvidia if you want. Hire someone, hire everyone. Good luck, and call me when you have code faster than a same-priced ATI.


As for PhysX... oh LOL. Do you realize that it's NVIDIA CODE? And that it's closed source? Guess what, it runs well on nvidia? More like it runs ONLY on nvidia. Lololol

Quote
all math is simple at the base level.  The biggest determinant factor is how much memory is there, how quickly the memory communicates with the processor and how long the job is
Protip: if card 1 does the same computation in 1 cycle while card 2 takes 10 cycles, card 1 will be 10 times faster.

Now go troll elsewhere.

worldinacoin
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500



View Profile
December 04, 2011, 03:46:57 PM
 #11

If the OP can't accept that ATI is superior, go on and get your Nvidia to mine bitcoins and good luck!
Gabi
Legendary
*
Offline Offline

Activity: 1148
Merit: 1008


If you want to walk on water, get out of the boat


View Profile
December 04, 2011, 03:49:49 PM
 #12

It seems he thinks the mining software is biased to run better on ATI and purposely made inefficient on nvidia. That they purposely made it slow on nvidia.

Too bad his trolling is an epic fail. Bitcoin mining is not a secret and it's very simple; he can try to make his own mining software, experiment with it, and discover WHY nvidia sucks at mining.

714
Member
**
Offline Offline

Activity: 438
Merit: 10


View Profile
December 04, 2011, 03:58:11 PM
 #13

https://en.bitcoin.it/wiki/Why_a_GPU_mines_faster_than_a_CPU

A decent comparison of ATI vs. Nvidia. The Nvidia hardware does better on some tasks; Bitcoin is not one of them. The instruction set of the R5xxx and higher ATI chipsets makes Bitcoin mining very efficient.

worldinacoin
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500



View Profile
December 04, 2011, 04:02:44 PM
 #14

If you look at his other post about using organizational resources to do bitcoin mining, one would suspect that his organization uses Nvidia so much that he has made himself believe Nvidia is better.
Gabi
Legendary
*
Offline Offline

Activity: 1148
Merit: 1008


If you want to walk on water, get out of the boat


View Profile
December 04, 2011, 04:06:57 PM
 #15


dark_st3alth
Newbie
*
Offline Offline

Activity: 33
Merit: 0



View Profile
December 04, 2011, 08:08:24 PM
 #16

I'll give a real answer instead of what some 10-year-olds posted above. /\


I'm an Nvidia lover as well, don't get that wrong. It seems the reasons are:

1. AMD cards have more "stream" processors, which can be thought of as the counterpart of CUDA cores.
It would seem that miners are not using the CUDA cores as effectively, but that's another topic.

2. AMD/ATI cards have a single-instruction bit rotate that speeds up the SHA-256 calculations used in mining; nVidia cards take 2 or 3 instructions to do the same (as of now).
Really, this could change in the near future.

3. AMD/ATI cards are cheaper. It's less costly to set up a miner than with nVidia cards.
I posted on here that ATI/AMD cards probably have lower build quality. Just google "ATI loud fan".


As the little kids are arguing about PhysX, I'll explain that as well.

PhysX is used for PHYSICS calculations. Here's a nice block for all of you to read:


Quote
Before PhysX, game designers had to “precompute” how an object would behave in reaction to an event. For example, they would draw a sequence of frames showing how a football player falls on the ground after a tackle. The disadvantage of this approach was that the gamer always saw the same “canned” animation. With PhysX, games can now accurately compute the physical behavior of bodies real time! This means that the football player will now bend and twist in all different ways depending on the specific conditions associated with the tackle – thus creating a unique visual experience every time.

PhysX technology is widely adopted by over 150 games and is used by more than 10,000 developers. With hardware-accelerated physics, the world’s leading game designers’ worlds come to life: walls can be realistically torn down, trees bend and break in the wind, and water and smoke flows and interacts with body and force, instead of just getting cut-off by neighboring objects.


And a little more:

Quote
PhysX is designed specifically for hardware acceleration by powerful processors with hundreds of processing cores. Because of this design choice, NVIDIA GeForce GPUs provide a dramatic increase in physics processing power, and take gaming to a new level delivering rich, immersive physical gaming environments with features such as:

    Explosions that create dust and collateral debris
    Characters with complex, jointed geometries, for more life-like motion and interaction
    Spectacular new weapons with incredible effects
    Cloth that drapes and tears naturally
    Dense smoke & fog that billow around objects in motion

There, that wasn't so hard, was it, guys and girls? They just wanted a simple answer.
Blind
Full Member
***
Offline Offline

Activity: 235
Merit: 100



View Profile
December 04, 2011, 09:12:27 PM
 #17

I, for one, am very happy with this situation. AMD needs every penny they can get so they can pump it into R&D and compete with Intel & nVidia, so there is healthy competition instead of monopoly, and we as customers won't get ass-raped too much. Go ATI!

714
Member
**
Offline Offline

Activity: 438
Merit: 10


View Profile
December 04, 2011, 09:34:06 PM
 #18

Stick a fork in it, it's done.

Gabi
Legendary
*
Offline Offline

Activity: 1148
Merit: 1008


If you want to walk on water, get out of the boat


View Profile
December 04, 2011, 09:43:44 PM
 #19

Quote
3. AMD/ATI cards are cheaper. It's less costly to setup the miner then with nVidia cards.
I posted on here that ATI/AMD cards probably have less quality. Just google "ATI loud fan".
Should I link exploding nvidia cards?  Roll Eyes


Quote
PhysX is used for PHYSICS calculations. Here's a nice block for all of you to read:
Wow so much yaddayadda

Quote
Before PhysX, game designers had to “precompute” how an object would behave in reaction to an event.
Bullshit.

Quote
PhysX is designed specifically for hardware acceleration by powerful processors with hundreds of processing cores.
Let me rewrite that: PhysX is designed specifically to run on nvidia cards.
I wonder why; maybe because, uh, it's nvidia's proprietary code?

And lol at the games using it. Yeah, to add what, like 2 particles?


There are other physics engines that run on all systems instead of nvidia only.

dark_st3alth
Newbie
*
Offline Offline

Activity: 33
Merit: 0



View Profile
December 04, 2011, 10:13:36 PM
 #20

Quote
3. AMD/ATI cards are cheaper. It's less costly to setup the miner then with nVidia cards.
I posted on here that ATI/AMD cards probably have less quality. Just google "ATI loud fan".
Should i link exploding nvidia cards?  Roll Eyes

I think you're talking about defective cards. If that's the case, it's been identified and fixed. A brand-new ATI card has a VERY loud fan problem; I just can't seem to find the video I watched a month ago.


Quote
Quote
PhysX is used for PHYSICS calculations. Here's a nice block for all of you to read:
Wow so much yaddayadda
I'm giving a straight answer that's correct. Where's your answer, huh?

Quote
Before PhysX, game designers had to “precompute” how an object would behave in reaction to an event.
Bullshit.

I don't think you're knowledgeable in that area, sir/madam. It is quite well known; just do a search before you raise the alarms...

On top of that, I don't think nVidia would lie like that. Think about it next time before you say things. Smiley

Quote
PhysX is designed specifically for hardware acceleration by powerful processors with hundreds of processing cores.
Let me rewrite: PhysX is designed specifically to run on nvidia cards.
I wonder why, maybe because uh it's nvidia propietary code?

And lol at games using it. Yeah, to add like what, 2 particle?

No, it does make a difference. If it wasn't on the card, you would have to do the calculations on the CPU; and I hope you're an expert on such things, as it seems like you're saying you are.

Take a look at games that use PhysX. Fluid dynamics is one area where you need the GPU to do the calculations.

Quote
There are other physics engine that run on all systems instead of nvidia only.

Ah, you're talking about Havok and such. Those are all CPU-based, but lack a lot of "realism" in what they can do. Source games are a wonderful example. Smiley