Author Topic: 8 GPU limit linuxcoin  (Read 4623 times)
jjiimm_64 (OP), Legendary
November 08, 2011, 08:28:13 PM, #1

Does anyone know a way around the 8 GPU limit in Linuxcoin?

Does anyone run more than 8 GPUs in their rigs?

cicada, Full Member
November 08, 2011, 08:57:27 PM, #2

So far I haven't heard any solid take on whether or not more than 8 GPUs are possible with the Linux Catalyst driver - some say it's 8, some say it's 16, some say it's unlimited.

Are you asking because you're trying and can't get it to work, or just looking to see if it's possible?

If it's the former, let me be the first to call you insane ;)

If it's the latter, I'm not sure there's a definite answer. The only way to hit that limit is more than 4 dual-GPU cards in a motherboard with more than 4 PCIe slots, both of which are generally beyond the reach of most miners' pockets :P

pirateat40, Sr. Member
November 08, 2011, 09:05:25 PM, #3

If it's the same issue as with Windows, it's a hardware limit based on resources.

imsaguy, Hero Member
November 08, 2011, 09:07:49 PM, #4

Both Windows and Linux are limited to 8 GPU cores on 64-bit and 4 GPU cores on 32-bit.

jjiimm_64 (OP), Legendary
November 08, 2011, 09:08:51 PM, #5

Quote from: cicada
> So far I haven't heard any solid take on whether or not more than 8 GPUs are possible with the Linux Catalyst driver - some say it's 8, some say it's 16, some say it's unlimited.
>
> Are you asking because you're trying and can't get it to work, or just looking to see if it's possible?
>
> If it's the former, let me be the first to call you insane ;)
>
> If it's the latter, I'm not sure there's a definite answer. The only way to hit that limit is more than 4 dual-GPU cards in a motherboard with more than 4 PCIe slots, both of which are generally beyond the reach of most miners' pockets :P

It was the former, and I could not get it to work. Four of my rigs are 4x 5970 in an MSI GD70, doing 2900 MH/s and pulling 1200 watts. The rig has two PSUs and enough power to push all five cards if I could get them to work.
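
For the curious, a quick back-of-envelope on those figures (a rough sketch; the even per-card split is my assumption):

Code:
echo $(( 2900 / 4 ))           # ~725 MH/s per 5970
echo $(( 2900 * 100 / 1200 ))  # ~241, i.e. ~2.4 MH/s per watt at the wall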

cicada, Full Member
November 08, 2011, 09:10:46 PM, #6

Quote from: pirateat40
> If it's the same issue as with Windows, it's a hardware limit based on resources.

Quote from: imsaguy
> Both Windows and Linux are limited to 8 GPU cores on 64-bit and 4 GPU cores on 32-bit.

Do either of you have a reference for that? It sounds like a BS answer made up by ATI developers who didn't want to try harder. There is no hardware limitation beyond the number of physically available ports. A software limitation I could see, as they'd possibly run out of address space in the 32-bit drivers when looking at more than 32 GB of VRAM :D

imsaguy, Hero Member
November 08, 2011, 09:12:38 PM, #7

Quote from: cicada
> Do either of you have a reference for that? It sounds like a BS answer made up by ATI developers who didn't want to try harder. There is no hardware limitation beyond the number of physically available ports.

It's a hardcoded driver limit on both ATI and NVIDIA. The 32-bit limit exists for obvious reasons; why it's there on 64-bit is less clear.

cicada, Full Member
November 08, 2011, 09:22:16 PM, #8

jjiimm_64, when you add the 5th card, what happens?

Does aticonfig see the card at all, or does it seem to just ignore it?
Does 'lspci' list all 5 cards?

I'm curious whether firing up a second X session tied only to the 5th card might get around the problem.
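
Something like the following should show whether the kernel and the driver agree on the card count (a rough sketch; note that on some setups the second GPU of a dual card enumerates as a plain "Display controller" rather than "VGA compatible controller"):

Code:
lspci | grep -ci 'VGA compatible controller'   # kernel view: up to 10 entries for 5x 5970
aticonfig --list-adapters                      # Catalyst view: one line per enumerated GPU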

DeathAndTaxes, Legendary
November 08, 2011, 09:43:47 PM, #9

Quote from: imsaguy
> It's a hardcoded driver limit on both ATI and NVIDIA. The 32-bit limit exists for obvious reasons; why it's there on 64-bit is less clear.

Exactly. Any claim of a resource issue is BS.

x86-64 defines a 2^64-byte virtual address space.
Current CPUs only implement 2^48 bytes of it.
Windows further limits that to 2^44 bytes for some unknown reason, but that is still more than enough.

There is sufficient virtual address space to map a couple thousand video cards.
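
To put a number on that (a quick sketch; the 2 GB per card is illustrative):

Code:
echo $(( (1 << 48) / (2 << 30) ))   # 2^48 bytes / 2 GB per card = 131072 cards' worth of mappings
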
pirateat40, Sr. Member
November 08, 2011, 09:47:41 PM, #10

Yeah, I'm not speaking for ATI or anything; I just know the limit is supposed to be imposed by the software's (the drivers') access to the hardware. But I don't even know enough about the bit-by-bit issues to know what I don't know. :)

imsaguy, Hero Member
November 08, 2011, 09:53:19 PM, #11

Quote from: DeathAndTaxes
> Exactly. Any claim of a resource issue is BS.
>
> x86-64 defines a 2^64-byte virtual address space.
> Current CPUs only implement 2^48 bytes of it.
> Windows further limits that to 2^44 bytes for some unknown reason, but that is still more than enough.
>
> There is sufficient virtual address space to map a couple thousand video cards.

You might think it BS, but it's no BS on the part of the user; it's on the part of ATI/NVIDIA. I'm sure it boils down to the fact that 99.9% of people aren't ready to use more than 8 GPU cores in a machine. Power, slots, temperatures, and price are all concerns. They probably set it figuring it gave enough room and then haven't put much thought into it since, sorta like the old Y2K 'issues'. Write to your friendly GPU manufacturer and see what they have to say about it.

cicada, Full Member
November 08, 2011, 10:02:28 PM, #12

Even if the driver is crippled by laziness, it should be possible to get around it in Linux, though it might take some effort.

For example, the more-than-one X session idea may work, or it may be possible to use a Xen-virtualized system that appears as a fully unique machine owning just the 9th and later GPUs.
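
A minimal sketch of the second-session idea (untested; the BusID and the miner invocation are hypothetical - pull the real bus address from lspci):

Code:
# write a minimal xorg.conf covering only the extra card, then start a 2nd X server
cat > /etc/X11/xorg.conf.gpu9 <<'EOF'
Section "Device"
    Identifier "GPU9"
    Driver     "fglrx"
    BusID      "PCI:10:0:0"
EndSection
EOF
X :1 -config /etc/X11/xorg.conf.gpu9 &   # second X server on its own display
DISPLAY=:1 ./miner                       # hypothetical miner command, aimed at :1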

Quote from: imsaguy
> They probably set it figuring it gave enough room and then haven't put much thought into it since, sorta like the old Y2K 'issues'.

This is more akin to Bill Gates' famously (if apocryphally) quoted "640K ought to be enough for anybody."

Incidentally, it was only recently that the limit was raised from just four GPUs - they should have realized then that 8 certainly wouldn't be enough for long.

DeathAndTaxes, Legendary
November 08, 2011, 10:08:22 PM, #13

Quote from: cicada
> For example, the more-than-one X session idea may work, or it may be possible to use a Xen-virtualized system that appears as a fully unique machine owning just the 9th and later GPUs.

That might work. Or simply cut the number of GPUs in half and put half in each virtual machine to load-balance memory and CPU.

Speaking of X sessions: BLECH! There is absolutely no reason why OpenCL should require an X server at all, something many developers have complained about. More laziness on AMD's part has hard-coupled the X session to the OpenCL drivers. I mean, does anyone think a supercomputer is going to be running an X session on each node?

Of course you also have the 100% CPU bug too.

So really there are three asinine and lazy "gotchas" in AMD's OpenCL implementation (a common workaround for the second is sketched below):
1) any GPU limit at all (for x64 systems);
2) the requirement for an X session (hey AMD, I am not actually doing any graphical work on my "graphics processing unit");
3) the 100% CPU bug.

Maybe someday they will all be fixed, probably just in time for FPGAs to replace the last of the GPU miners. ::)
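
The usual incantation miners use to satisfy the X requirement on a headless box (a sketch of common practice run as root, not an AMD-sanctioned fix):

Code:
aticonfig --initial -f --adapter=all   # write an xorg.conf covering every adapter
X :0 &                                 # bare X server; no desktop environment needed
export DISPLAY=:0                      # point the OpenCL runtime's X dependency at it
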
jjiimm_64 (OP), Legendary
November 08, 2011, 10:22:57 PM, #14

Quote from: cicada
> jjiimm_64, when you add the 5th card, what happens?
>
> Does aticonfig see the card at all, or does it seem to just ignore it?
> Does 'lspci' list all 5 cards?
>
> I'm curious whether firing up a second X session tied only to the 5th card might get around the problem.

Well, I remember it wasn't pretty (my technical assessment).

Seriously, I can't remember exactly. I think it came up with only one of them, one 5970 that is. I didn't think it would work anyway and really didn't put much effort into it.

But now I have 12 rigs (soon to be 13), and getting five 5970s into a rig looks really appealing.

Thanks for everyone's comments. I agree it cannot be a hardware issue.


likuidxd, Sr. Member
November 08, 2011, 10:33:22 PM, #15

In my experience, it's enough of a pain in the ass to get 8 GPUs working well on one rig; moving past 8 sounds like more of a headache than it's worth. Also, running 8 GPUs on a board means up to 400 W pushed through the mobo to the PCIe slots: 4 x16 slots @ 75 W and 4 x8/x4/x1 slots @ 25 W. That's a ton of power running through the board, and only a few $350+ boards can handle it. Unless you're using powered risers, which scare the crap out of me.
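
(The 400 W figure spelled out - worst-case spec ceilings, not measured draw:)

Code:
echo $(( 4 * 75 + 4 * 25 ))   # 4 slots @ 75 W + 4 slots @ 25 W = 400 W worst case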

jjiimm_64 (OP), Legendary
November 08, 2011, 10:36:28 PM, #16

Quote from: likuidxd
> In my experience, it's enough of a pain in the ass to get 8 GPUs working well on one rig; moving past 8 sounds like more of a headache than it's worth. Also, running 8 GPUs on a board means up to 400 W pushed through the mobo to the PCIe slots: 4 x16 slots @ 75 W and 4 x8/x4/x1 slots @ 25 W. That's a ton of power running through the board, and only a few $350+ boards can handle it. Unless you're using powered risers, which scare the crap out of me.

Well, the 5970 is not very power hungry through the slot, and the board has five full x16 slots, so it should 'handle' the power.
https://plus.google.com/u/0/photos/112408294399222065988/albums/5658727447810944545


DeathAndTaxes, Legendary
November 08, 2011, 10:40:39 PM (last edit: November 08, 2011, 11:47:52 PM), #17

Quote from: likuidxd
> In my experience, it's enough of a pain in the ass to get 8 GPUs working well on one rig; moving past 8 sounds like more of a headache than it's worth. Also, running 8 GPUs on a board means up to 400 W pushed through the mobo to the PCIe slots: 4 x16 slots @ 75 W and 4 x8/x4/x1 slots @ 25 W. That's a ton of power running through the board, and only a few $350+ boards can handle it. Unless you're using powered risers, which scare the crap out of me.

Power load doesn't depend on the length of the PCIe slot. Regardless of the number of data lanes, all PCIe slots have the same power connections.

While a video card "can" pull 75 W from the bus, none of the externally powered ones do. I measured my 5970s under load and it was ~30 W, so 5 of them would be 150 W. I do agree, though, that most motherboards, despite the spec calling for 75 W per slot, assume nobody will need that much and can't actually handle that current. I wouldn't try it without powered extenders, even @ 150 W.

On edit: looks like I was wrong. Learned something new today. While the same physical connector is used, the spec does limit power draw depending on connector size:
x1 = max 25 W (10 W @ startup)
x4 = max 25 W
x8 = max 25 W
x16 = max 75 W (25 W @ startup)

Still, I would point out that high-end cards generally don't pull 75 W from the slot; my 5970s pull about 30 W. The 75 W limit is more for cards without a dedicated power connector.
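
Putting the measured figure against the spec ceiling (a sketch using the numbers above):

Code:
echo $(( 5 * 30 ))   # measured: five 5970s at ~30 W slot draw each = 150 W
echo $(( 5 * 75 ))   # spec worst case: five x16 slots at 75 W each = 375 W
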
likuidxd, Sr. Member
November 08, 2011, 11:02:31 PM, #18

Quote from: DeathAndTaxes
> Power load doesn't depend on the length of the PCIe slot. Regardless of the number of data lanes, all PCIe slots have the same power connections.
>
> While a video card "can" pull 75 W from the bus, none of the externally powered ones do. I measured my 5970s under load and it was ~30 W, so 5 of them would be 150 W. I do agree, though, that most motherboards, despite the spec calling for 75 W per slot, assume nobody will need that much and can't actually handle that current. I wouldn't try it without powered extenders, even @ 150 W.

I wasn't speaking of length, but rather speed: x16 maxes out at 75 W (25 W allowed at startup), and x8 through x1 max out at 25 W.

cicada, Full Member
November 08, 2011, 11:25:19 PM, #19

Quote from: likuidxd
> I wasn't speaking of length, but rather speed: x16 maxes out at 75 W (25 W allowed at startup), and x8 through x1 max out at 25 W.

That's incorrect - the PCI spec dictates available power, not the port length.

PCIe x1, x4, x8, and x16 can all draw up to 75 W.

PCI is limited to 35 W.

The only difference between x1 and x16 is the number of available data lanes.

likuidxd, Sr. Member
November 08, 2011, 11:37:23 PM, #20

http://www.pcisig.com/developers/main/training_materials/get_document?doc_id=6d37ec2f8543fc1f9d8ace6264d08b469f57e5f1

I misread - it is tied to length after all.
