Author Topic: Noob question about hardware: Why so ATI dominant?  (Read 4551 times)
Fingler29 (OP)
Newbie | Activity: 11 | Merit: 0
December 04, 2011, 11:30:36 PM  #21

I realize that I am asking a question here that is probably at the core of a lot of flame wars. I promise I did not post it with that foreknowledge, although it's pretty obvious given that "fanboyism" is about as rampant as it gets with GPUs (probably only comparable to Ford vs Holden in Australia, where fist fights and stabbings are an occasional result).

I'm not a troll (lol, how many times has that been said). I do not disbelieve, discount, or ignore the statistics. They are there in black and white. It's fact. AMD cards are almost universally better (I say almost because there might have been one single expensive Nvidia card that was better than a super cheap AMD card, but I didn't pay too close attention).

I knew about that before I posted this thread, which is why I repeatedly referred to the "performance discrepancy." That is why the posts pointing me at benchmarks are kind of irrelevant. I know. Those websites with reviews/benchmarks were the REASON I posted this thread.

Shit. I'm just asking WHY?

Why is it that two types of hardware designed for essentially identical tasks, namely CONSUMER-ORIENTED graphics (like video games and movies and shit), produce such wildly different results?

AMD wants to make their shit work well with games. So does Nvidia. Everyone learns physics and electrical engineering from similar or the same textbooks, and in many cases from the same PI, or from a PI who worked with a competitor's PI because those PIs worked with the same PI (I can't remember the name of the statistic that measures the PhD "tree" back up to famous people like Einstein, Laplace, de Broglie, Boltzmann, Bragg, Feynman, etc., but that is what I am talking about here).

Even taking into account the hurdles of patents and intellectual property, how is it that two products, aimed at the same market and designed for IDENTICAL tasks, produce such ABSOLUTELY WILDLY different results? That is my question.

The answers I have been given mostly sound like: "because AMD is better, duh."
dark_st3alth actually answered my question in a clear and cogent way, and I do appreciate that. Thank you.

He also alluded to what I was implying with PhysX:

Quote
It would seem that miners are not using the CUDA cores as well, but that's another topic.

My point was "targeted" development. I didn't mean to imply that the BTC mining programmers were like "fuck you Nvidia, we're only writing for AMD, so all you Nvidia fanboys can suck a dick." I meant more along the lines of: "some guys working in their spare time, who only had access to AMD gear and AMD SDKs, developed with the tools they had, and the result is that the software works best with AMD."

Maybe they tried to get Nvidia hardware donated and Nvidia said no! Who knows! But I think it much more likely that the software works best on AMD because it was designed with a focus on AMD hardware.
Not maliciously, not angrily, not with some ill intent. Just because that was the only option.

I would love to hear someone who actually knows about the development weigh in so that I can have that question answered. It's just a question. It's posted in the noob section, for god's sake. I didn't pronounce it like a fact of god spoken from on high. I am ruminating. I am tossing around ideas.

I thought that was essentially the friggin' purpose of a "forum": a place to discuss things. That is what I am trying to do.

Instead I'm a troll, I'm a fanboy, I'm just here to start flame wars.
Apparently that must be the case.
Fiyasko
Legendary | Activity: 1428 | Merit: 1001 | "Okey Dokey Lokey"
December 05, 2011, 12:17:27 AM  #22

Lock this fucking thread

714
Member | Activity: 438 | Merit: 10
December 05, 2011, 03:59:04 AM  #23

Quote
Lock this fucking thread

I second that motion, this is a subject that is well covered elsewhere.

worldinacoin
Hero Member | Activity: 756 | Merit: 500
December 05, 2011, 04:31:23 AM  #24

Quote
Lock this fucking thread

I second that motion, this is a subject that is well covered elsewhere.


+1
Gabi
Legendary | Activity: 1148 | Merit: 1008 | "If you want to walk on water, get out of the boat"
December 05, 2011, 02:36:10 PM  #25

1) Havok and such? CPU-based? No.

http://bulletphysics.org/wordpress/

A physics engine that runs on the GPU via OpenCL.

2)
Quote
If it wasn't on the card, you would have to do the calculations on the CPU
Of course if I don't use the card I use the CPU. The problem is, PhysX is made to run well on Nvidia cards. But it's not the only engine around.

3) Please explain to me where I am wrong: Nvidia did not invent physics engines in gaming. "Precompute"? Lolwut, it was possible to do the same things PhysX does before PhysX was created.

Fiyasko
Legendary | Activity: 1428 | Merit: 1001 | "Okey Dokey Lokey"
December 05, 2011, 04:24:04 PM  #26

Quote from: Gabi on December 05, 2011, 02:36:10 PM
[Gabi's post #25, quoted in full]

THANK YOU, GABI.

Now, I think that Gabi's post should be the last (or this one, or whatever the fuck).

I think the flame fest is at an even keel right now, and we need to stop it. Stop the fire, dammit.

Gabi just lit up and burned down an entire "controlled burn zone". Don't fucking walk in here and toss a jerry can o' gas at him.

LOCK THIS THREAD. THE OP'S QUESTION HAS BEEN ANSWERED.

dark_st3alth
Newbie | Activity: 33 | Merit: 0
December 06, 2011, 03:22:37 AM  #27

I'll add a little more gasoline if I may, to celebrate my 5th hour. :)

Quote
1) Havok and such? CPU-based? No.

http://bulletphysics.org/wordpress/

A physics engine that runs on the GPU via OpenCL.

The reason is that they are supposedly partnered with ATI/AMD.

You will never see this used commercially or become popular (and I'm surprised a project like this even existed).

PhysX is used in almost every major game, and those that don't use it use Havok (which is CPU-based).


Quote
2)
Quote
If it wasn't on the card, you would have to do the calculations on the CPU
Of course if I don't use the card I use the CPU. The problem is, PhysX is made to run well on Nvidia cards. But it's not the only engine around.

As answered above, PhysX is:

A) The best
B) The most popular
C) Commercially backed, with majors like Valve
D) GPU-based, using a PPU

Quote
3) Please explain to me where I am wrong: Nvidia did not invent physics engines in gaming. "Precompute"? Lolwut, it was possible to do the same things PhysX does before PhysX was created.

Of course they didn't. Who said that, where?

Idk if you are into game mechanics or developing (it seems not), but games used to have to precompute physics (process it long beforehand, mainly storing what should happen in a file or in memory) and were thus quite limited. As far as I remember, you would not have been able to drive vehicles like you can now.
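To make the precompute/runtime distinction concrete, here is an illustrative sketch (mine, not code from PhysX, Havok, or Bullet): runtime physics integrates the state forward every frame instead of playing back stored results.

Code:
#include <stdio.h>

/* Runtime (non-precomputed) physics: a ball bouncing on the floor.
   Each frame integrates the state forward; nothing is read from a
   stored animation. */
int main(void) {
    double y = 10.0;                 /* height (m) */
    double vy = 0.0;                 /* vertical velocity (m/s) */
    const double g = -9.81, dt = 1.0 / 60.0, restitution = 0.6;

    for (int frame = 0; frame < 300; frame++) {
        vy += g * dt;                /* gravity */
        y  += vy * dt;               /* move */
        if (y < 0.0) {               /* floor hit: reflect and damp */
            y = 0.0;
            vy = -vy * restitution;
        }
        if (frame % 60 == 0)
            printf("t=%.2fs  y=%.3fm\n", frame * dt, y);
    }
    return 0;
}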


Anyways, it would seem I am free. :D
Fiyasko
Legendary | Activity: 1428 | Merit: 1001 | "Okey Dokey Lokey"
December 06, 2011, 03:43:47 AM  #28

Quote from: dark_st3alth on December 06, 2011, 03:22:37 AM
[dark_st3alth's post #27, quoted in full]

I am going to fucking kill you if you keep thinking that GPUs couldn't "practically" and/or "publicly" and/or "commercially" and/or "residentially", etc., do non-precomputed physics until Nvidia PhysX was created.

What you're saying is that my old pre-PhysX-era Radeon GPU CANNOT (on its GPU) make a 3D ball fall down and randomly (fuck off, I know there is no TRUE random) bounce.

You say that if, on a pre-PhysX-era ATI Radeon GPU, perfect ball A drops, falls, and smacks a completely flawed polygon (such as a gravel path), it cannot, CANNOT, do anything unpredictable, because, by computing law, that is impossible, since every "possible" "bounceable" angle is accounted for... unless you use your CPU or Nvidia's PhysX tech stylings.

That is just flat-out stupid.

TL;DR
Link or lies.
Lock this fucking thread.
 

P4man
Hero Member | Activity: 518 | Merit: 500
December 06, 2011, 08:48:31 AM  #29

PhysX is an API, just like Bullet, Havok, etc. It can run on either the CPU or the GPU; it's up to the developer. It's also an open API: if AMD wanted, they could implement it, but they chose not to, since Nvidia owns the IP and, somewhat understandably, AMD doesn't want to support that. There are, however, hacks around that enable PhysX on ATI cards.

Gabi
Legendary | Activity: 1148 | Merit: 1008 | "If you want to walk on water, get out of the boat"
December 06, 2011, 01:12:49 PM  #30

Here is a nice explanation about "PhysX on the CPU":

Quote
http://techreport.com/discussions.x/19216
PhysX hobbled on the CPU by x87 code
Quote
x87 has been deprecated for many years now, with Intel and AMD recommending the much faster SSE instructions for the last 5 years. On modern CPUs, code using SSE instructions can easily run 1.5-2X faster than similar code using x87.  By using x87, PhysX diminishes the performance of CPUs, calling into question the real benefits of PhysX on a GPU.

Quote
http://www.rage3d.com/board/showthread.php?t=33965625
PhysX - Intentionally Slow on CPUs? RealWorldTech Investigates.
Quote
PhysX uses x87 because Ageia and now Nvidia want it that way. Nvidia already has PhysX running on consoles using the AltiVec extensions for PPC, which are very similar to SSE. It would probably take about a day or two to get PhysX to emit modern packed SSE2 code, and several weeks for compatibility testing. In fact for backwards compatibility, PhysX could select at install time whether to use an SSE2 version or an x87 version – just in case the elusive gamer with a Pentium Overdrive decides to try it.

But both Ageia and Nvidia use PhysX to highlight the advantages of their hardware over the CPU for physics calculations. In Nvidia’s case, they are also using PhysX to differentiate with AMD’s GPUs. The sole purpose of PhysX is a competitive differentiator to make Nvidia’s hardware look good and sell more GPUs.
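If you want to see the gap those articles describe for yourself, here is a small experiment of mine (not from the articles): compile the same scalar float loop once as x87 and once as SSE. The flags are standard GCC options; the function is just a stand-in for physics-style math, and the exact speedup will vary by CPU.

Code:
/* Compile the same code two ways and time them:
     gcc -O2 -m32 -mfpmath=387 fp.c -o fp_x87        (legacy x87 stack FPU)
     gcc -O2 -m32 -msse2 -mfpmath=sse fp.c -o fp_sse
   (On x86-64, SSE math is already the default.) */
#include <stdio.h>

/* Stand-in for physics-style floating-point work. */
static float dot(const float *a, const float *b, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];    /* scalar multiply-accumulate */
    return s;
}

int main(void) {
    static float a[4096], b[4096];
    float s = 0.0f;
    for (int i = 0; i < 4096; i++) { a[i] = i * 0.5f; b[i] = i * 0.25f; }
    for (int r = 0; r < 100000; r++)   /* repeat so timing is visible */
        s += dot(a, b, 4096);
    printf("%f\n", s);
    return 0;
}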

Fiyasko
Legendary | Activity: 1428 | Merit: 1001 | "Okey Dokey Lokey"
December 06, 2011, 06:21:13 PM  #31

Quote from: Gabi on December 06, 2011, 01:12:49 PM
[Gabi's post #30, quoted in full]

You, sir, are all my better arguments, cleaned up and slammed down.
Flawless explanation, and in my own experience, it's completely true.

Now then:
Lock This Thread.
If you don't know how: go to your first post and hit Edit, then near the bottom left there will be a small "Lock topic" button.

unabridged
Newbie | Activity: 31 | Merit: 0
December 07, 2011, 07:32:12 PM  #32

I think the problem is you are assuming that AMD GPUs and Nvidia GPUs are as similar as AMD CPUs and Intel CPUs. The CPUs have nearly identical instruction sets and are meant to be somewhat interchangeable, while the GPUs each have their own entirely separate instruction set.

AMD = more cores, but each core is slower than an Nvidia core
Nvidia = fewer cores, but each core is faster than an AMD core

Hashing is not really an intensive operation, so the speed of each core doesn't make much difference; what matters is being able to do many hashes at once, so the number of cores is what increases the hash rate.

An FPGA or ASIC takes this to the extreme: many, many very dumb cores (each just fast enough to do the hash).
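To make that concrete, here is a toy sketch (mine, not unabridged's): every nonce is tried independently, so the search parallelizes perfectly across however many cores you have. fake_hash() is a placeholder, NOT SHA-256, and OpenMP threads stand in for GPU stream processors.

Code:
/* Build: gcc -O2 -fopenmp nonce.c -o nonce */
#include <stdint.h>
#include <stdio.h>

/* Placeholder "hash": NOT SHA-256, just cheap integer mixing. */
static uint32_t fake_hash(uint32_t nonce) {
    uint32_t h = nonce * 2654435761u;   /* multiplicative mixing */
    return h ^ (h >> 16);
}

int main(void) {
    const uint32_t target = 0x0000FFFFu; /* hash must be below this */
    int64_t found = -1;

    /* No iteration depends on any other, so many slow cores beat
       a few fast ones -- the whole reason GPUs win at hashing. */
    #pragma omp parallel for
    for (int64_t n = 0; n < 100000000; n++) {
        if (fake_hash((uint32_t)n) < target) {
            #pragma omp critical
            {
                if (found < 0 || n < found) found = n;
            }
        }
    }
    printf("lowest winning nonce: %lld\n", (long long)found);
    return 0;
}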
icvader
Newbie | Activity: 5 | Merit: 0
December 17, 2011, 06:00:43 AM  #33

I'm gonna get murdered for this post :(.

I'm not sure anyone truly answered your question. I am a programmer; I don't know if that helps... To the point: ATI/AMD has a single operation that Nvidia does not have. It is called RightRotate, aka BIT_ALIGN_INT. It is the fundamental basis of optimized Bitcoin hashing. As it stands, even with the most brilliant minds working to make the Nvidia path as optimized as possible, it takes 3 operations: 2 shifts + 1 add :(. Some people are petitioning for an updated set of CUDA instructions that would include RightRotate and LeftRotate; getting it down to one operation would be an amazing improvement. The other issue, as you have obviously read, is that AMD simply has a massively larger number of stream processors than Nvidia, and the stream processors are what run the SHA-256 operations, be it on Nvidia or AMD.

The rotate right (circular right shift) operation ROTR_n(x), where x is a w-bit word and n is an integer with 0 ≤ n < w, is defined by

ROTR_n(x) = (x >> n) ∨ (x << (w − n)).

Thus, ROTR_n(x) is equivalent to a circular shift (rotation) of x by n positions to the right.

This operation is used by the SHA-256, SHA-384, and SHA-512 algorithms.
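In C, that definition is the one-liner below. The point is what it costs: on hardware without a native rotate it is two shifts plus an OR (or an add, since the shifted bits don't overlap), while AMD's BIT_ALIGN_INT does the whole thing in one instruction. A minimal sketch, not taken from any miner:

Code:
#include <stdint.h>

/* ROTR_n(x) for w = 32, straight from the definition above:
   (x >> n) | (x << (32 - n)). Two shifts + one OR on hardware
   without a rotate instruction; one op with AMD's BIT_ALIGN_INT. */
static inline uint32_t rotr32(uint32_t x, unsigned n) {
    return (x >> n) | (x << (32 - n));   /* assumes 0 < n < 32 */
}

/* Example use: the SHA-256 message-schedule function
   sigma0(x) = ROTR_7(x) ^ ROTR_18(x) ^ (x >> 3) costs two rotates
   and a shift per call, and SHA-256 is full of rotates, so the
   per-rotate cost adds up fast. */
static inline uint32_t sigma0(uint32_t x) {
    return rotr32(x, 7) ^ rotr32(x, 18) ^ (x >> 3);
}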

I don't want to overcomplicate things, but I love teaching, so here it goes. This is super simplified, btw, and I am sure someone will wince and bash me over the head for it. I apologize in advance...

The SHA-256 hash is an operation that creates an effectively random number, in hexadecimal of course, based off of a value. In the case of Bitcoin that value is the current block, and the number to beat is the "target". The main goal of Bitcoin hashing is to try to randomly create a hash of lesser value than the target. For example, if the current target is

0000000000000E86680000000000000000000000000000000000000000000000

getting a random hash of

0000000000000E86670000000000000000000000000000000000000000000000
                 ^
                 lesser value

would "win" you the block, thus giving you 50 BTC.

Based off of the current difficulty, the probability of winning is 0.0000000000000002015752067216838860908012520667398348450.

The lower the difficulty, the higher the value of the target, and vice versa for higher difficulty.

The hash also has to be verified as a real hash; you obviously can't have a program just throw back "oh, I found the lower value, wink wink" lol.
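As an aside (my sketch, not icvader's): since hash and target are both 256-bit numbers shown here as big-endian hex, the "lesser value" test can be done with a plain bytewise comparison. Purely illustrative, not from a real miner (Bitcoin's internal byte order is messier in practice).

Code:
#include <stdint.h>
#include <string.h>

/* Returns 1 if hash < target, with both given as 32 bytes in
   big-endian order (most significant byte first). For fixed-width
   big-endian numbers, bytewise comparison equals numeric comparison. */
int hash_below_target(const uint8_t hash[32], const uint8_t target[32]) {
    return memcmp(hash, target, 32) < 0;
}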

This guy David Perry gives an awesome explanation of why AMD is cornering the market on integer-based calculations, and shows it's not just the Bitcoin world:

http://bitcoin.stackexchange.com/questions/1523/bitcoin-alternative-designed-for-nvidia

This SHA-256 stuff is and was mostly used for GPGPU work by hackers. I wouldn't be surprised if it was a group of hackers that originally wrote all the mining programs we use today. It is the dominant force right now in password cracking, encryption, etc.

In closing, Nvidia, in order to keep up with market demands, will eventually have to bring small-integer math back into their designs to keep up with next-generation uses like full hard-drive encryption, faster SSL handshakes, etc. Computer security is ever evolving; once we get into 10/50/100MB encryption algorithms, CPU processing as it is now will never be able to keep up; hell, it can't keep up with our 2MB encryptions lol.

I don't know if you are a programmer; I assume you must have some knowledge, else you wouldn't be seeking more. It's addicting. Here is a post showing the process of getting a target in really, really simple C:

http://pastebin.com/n8UEGA86

Thanks,
icvader