I can't even get one rig stable with three 5970s and two Seasonic Gold 750 W units joined with a Lian Li connector.

Worst case scenario ... eBay the 2x 750 (or try these forums) and buy a single 1200 W. 1000 W is all you need, but 1200 W gives you the option to expand to 4x 5970s. A Silverstone 1200 will NOT run four 5970s overclocked to 820; a Seasonic 1250 will.

Dat: in reference to the 10 GPUs, it has 2 mobos and 2 operating systems. I read somewhere that Solaris will load X number of GPUs? With my new rig (or is it 2 rigs joined by a common PSU?) I will easily be able to do this.
It is 10x 7970 with 3 PSUs, at 6790 MH/s. I will calculate the watts tonight.
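As a rough sanity check until the measured numbers are in (assuming roughly 250 W at the wall per 7970, a ballpark based on the card's rated board power, not a measurement from this rig):

10 cards x ~250 W = ~2500 W total
~2500 W / 3 PSUs = ~835 W per PSU

So each PSU needs to be rated comfortably above ~835 W to stay inside its efficiency sweet spot.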
Care to share with us that magic Solaris (or whatever it is called) server that does 10 GPUs on one mobo? Pics would be nice. Thanks!
|
|
|
If anyone is still interested, I have a custom one that I made myself (to a pretty good standard) that also includes some resistors on the 5 V line, so your PSU is not delivering only on the 12 V rail.
PM me if interested at any time !
|
|
|
OK. Thank you very much for these revelations, jack! I will apply them right now and hope that this will indeed work as it is supposed to. Anyone wanna try jamming the GPU fan for a "simulation" so we can have confirmation? Thanks!
|
|
|
temp-cutoff does not require a gpu-engine range to be defined but it absolutely depends on auto-gpu being set.
If that is true, I wonder if it should be changed. Most people looking to set a static GPU speed are likely interested in failsafe protection. I would imagine having temp-cutoff always active would be the safer solution. If someone wants to risk their GPU, they could always set a temp-cutoff value of, say, 300. That would require the user to essentially say "I don't care if it melts my GPU, never reduce speed or bring the GPU offline."

+10! I just had a fan failure and my card was cooking. I mistakenly assumed it would cut off at the cutoff temp, but I had not set dynamic clocks, so to my great surprise, it did not shut down.

So does this mean that if I, say, set the clocks manually to 960 core, then my card will fry if the fan goes out while I am not around?
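For anyone wanting to test this, a minimal sketch of a command line that keeps the clocks effectively static while still leaving auto-gpu (and therefore, per the quote above, temp-cutoff) in play. The pool URL and credentials are placeholders, and pinning the range to a single value is an assumption about how gpu-engine ranges behave, so verify on a non-critical card first:

cgminer -o http://pool.example.com:8332 -u user -p pass --auto-gpu --gpu-engine 960-960 --temp-cutoff 95

The 960-960 range gives auto-gpu no room to move the clock in normal operation, but the thermal failsafe should still be armed.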
|
|
|
temp-cutoff does not require a gpu-engine range to be defined but it absolutely depends on auto-gpu being set.
If that is true, I wonder if it should be changed. Most people looking to set a static GPU speed are likely interested in failsafe protection. I would imagine having temp-cutoff always active would be the safer solution. If someone wants to risk their GPU, they could always set a temp-cutoff value of, say, 300. That would require the user to essentially say "I don't care if it melts my GPU, never reduce speed or bring the GPU offline."

I agree; there is no reason to disable this feature, and it should be hardcoded. If the mad guys wanna fry their GPUs, let them compile it with an off option, etc.
|
|
|
--temp-cutoff <arg> Temperature where a GPU device will be automatically disabled, one value or comma separated list (default: 95)

Will it still shut off without using --auto-gpu? I just realized I have a rig without --auto-gpu because I want the clocks fixed, but does that mean it will burn up when the AC breaks?

Not sure. I always use auto-gpu. I would assume --temp-cutoff is always used even if auto-gpu = false, but that might be a dangerous assumption. Hopefully conman can weigh in.

Yeah, I happen to be very interested in this as well. What exactly are the safeguards in place by default (or not by default, so the user must set them explicitly) in case the GPU fan dies completely and I am not present or able to come and stop the mining before it kills the fanless card? Thanks!
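For reference, a hedged sketch of the thermal knobs cgminer exposes (the values shown are the documented defaults of that era; the pool details are placeholders, and whether temp-cutoff fires without auto-gpu is exactly the open question above):

cgminer -o http://pool.example.com:8332 -u user -p pass --auto-gpu --auto-fan --temp-target 75 --temp-overheat 85 --temp-cutoff 95

# temp-target: temperature (C) that auto-gpu/auto-fan try to hold
# temp-overheat: above this, clocks are throttled and fans ramped
# temp-cutoff: above this, the GPU is disabled entirely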
|
|
|
Also, maybe U will be lower because more stales will have to be discarded when it takes longer for the GPU to finish the larger batch of work, meaning higher MH/s in real life, but lower U (and lower MH/s calculated by the server).
Precisely. This fact seems to be overlooked quite often when users tweak their GPUs for MHash/s alone. If your pool server is constantly showing a significantly lower MHash/s estimate than what your miner tells you, consider dropping intensity a notch. Intensity 9 is already too much for low-to-medium-class 200 MHash/s GPUs; they really benefit from 8.

What about 5870s? What intensity should I use for those?
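A quick sketch of per-card tuning for a mixed rig (pool details are placeholders, and the values are starting points to experiment with, not recommendations): cgminer accepts a comma-separated intensity list, one value per GPU in device order.

cgminer -o http://pool.example.com:8332 -u user -p pass -I 8,9

# GPU 0 (a ~200 MHash/s class card) at 8, GPU 1 (e.g. a 5870) at 9.
# Watch the pool-side MHash/s estimate, not just the local one, and
# step down a notch if it lags the miner's own figure.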
|
|
|
Where is your server located geographically? Thanks!
lol? Why would you want to know? It is located in the USA and is being run by a well-known American company, by Americans who know all that is going on. I think anyone would be able to find out the location anyway with a ping or two.

Because Clipse's one is in the UK and I am in the UK as well, so the US server would be worse than a UK one (for me at least, and for EU people as well).

That would depend on the provider as well. Mine is very good, with very good connections to Europe. I am not saying his won't be better for you, but it might not be.

Yeah. I like how both of you changed to 115% as a result of competition. Now, if only Nvidia released a Kepler mining card, that would be the day! The AMD mining monopoly is bad for consumers!
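As suggested above, a couple of pings settle the geography question better than guessing (hostnames here are placeholders, so substitute the pools' actual hosts):

ping -c 10 us-pool.example.com
ping -c 10 uk-pool.example.com

Compare the average round-trip times; the lower one generally means fewer stales for you, regardless of which country the box sits in.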
|
|
|
Where is your server located geographically? Thanks!
lol? Why would you want to know? It is located in the USA and is being run by a well-known American company, by Americans who know all that is going on. I think anyone would be able to find out the location anyway with a ping or two.

Because Clipse's one is in the UK and I am in the UK as well, so the US server would be worse than a UK one (for me at least, and for EU people as well).
|
|
|
So it is decided?
64-bit-only BAMT from now on?
Thanks!
|
|
|
Where is your server located geographically? Thanks!
|
|
|
OK. For SDK 2.1 to work, it seems you need this (for future reference):
miner@mining:/# find / -name libOpenCL.so.1
/opt/ati-stream-sdk-v2.1-lnx64/lib/x86_64/libOpenCL.so.1
/opt/ati-stream-sdk-v2.1-lnx64/libOpenCL.so.1
/usr/lib/libOpenCL.so.1
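If the loader still cannot find the library after unpacking the SDK, one option (a sketch, assuming the SDK path from the find output above; adjust for your install) is to point the runtime linker at the SDK's lib directory:

export LD_LIBRARY_PATH=/opt/ati-stream-sdk-v2.1-lnx64/lib/x86_64:$LD_LIBRARY_PATH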
Thanks !
|
|
|
update to fix winxp bug PLZ
Hmm, my first thought was Will Smith in the movie Hancock: "Say that one more time ..." Seriously, why post that twice, 4 posts apart? It's pointless the first time because you aren't even saying what the problem is, and the second time it's just annoying. But I could also add the other obvious reply: there is no fix for Windows XP. Windows XP is just one very large bug, and all the MS fixes since then don't seem to have resolved that ... Vista, 7, 8 ...

Yeah, IMHO kano is totally right. The author is the kernel scheduler guy. I think there should not even be a cgminer for Windblows at all, or if there was, then no support. If you are a serious miner, stick to Linux! Thanks again, kano and ckolivas!
|
|
|
Repost from here so ckolivas is on the case: https://bitcointalk.org/index.php?topic=53199.20

Does cgminer work with AMD SDK 2.1? It seems it does not, because it is looking for a file called "libOpenCL.so.1", which it seems only comes with SDK 2.4, while SDK 2.1 comes with a file called "libOpenCL.so" (without the 1 at the end)!? Can we make it work somehow? Modify the code? Thanks!

cd /usr/lib
sudo ln -s libOpenCL.so libOpenCL.so.1

Or wherever your Linux has it ...

Thank you very much! You are a genius. I spent so much time trying to figure this one out. LOL! Thanks ckolivas for this great software! Can I use the poclbm kernel with this? What happens if the GPU fan dies and I am away from the miner? Does it just fry the card, or does it halt the mining on that specific card with the dead fan? Thanks kano and ckolivas again!
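To sanity-check the symlink fix above before restarting the miner (paths assume the stock /usr/lib location from the post):

ls -l /usr/lib/libOpenCL.so.1
ldconfig -p | grep -i opencl

The first should show the link pointing at libOpenCL.so; the second confirms the loader cache can see it.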
|
|
|
Repost from here so ckolivas is on the case: https://bitcointalk.org/index.php?topic=53199.20

Does cgminer work with AMD SDK 2.1? It seems it does not, because it is looking for a file called "libOpenCL.so.1", which it seems only comes with SDK 2.4, while SDK 2.1 comes with a file called "libOpenCL.so" (without the 1 at the end)!? Can we make it work somehow? Modify the code? Thanks!
|
|
|
So I'm sure people have emailed Sonny and asked where their single(s) is/are. Anyone gonna post the reply he gave this time? Since they have redone the power on the board, they should also have some new performance figures ... anyone got them yet?
I am about 99.9% sure that in JUST 4-6 weeks (TM) we will be mining on these babies!
|
|
|
Yeah guys, it is probably the fact that all the fools who were going to deposit already did, and no new fools have been sucked in to keep your "profits" going.
If I were you, I would pull out ASAP (faster than if I were in danger of an unwanted pregnancy)! LOL.
|
|
|
So basically you are saying: forget about backward compatibility and implement the best solution without any constraints?
IMHO this seems the best way of doing it, but make sure everyone can "convert" (yeah, I know, oversimplified) their BTC from the old type to the new type. But at this rate of development, with the "screw deadlines" attitude and petty fights between Gavin and Luke (and deepshit refusing to accept any change), we can say bye-bye to multisig for now. Yeah, even the old banking system and Benny printing his toilet paper are better than this chaos. Where is Satoshi when we need him? Why has he not thought of this?
|
|
|
Fix 27 - End of the road?
This may be the last fix for BAMT "classic", i.e. the 32-bit 0.4 version.
Updates the desktop to be like the new desktop in 0.5, brings bamt tools up to date, fixes munin (finally), and installs the new web UI.
This update changes a ton of stuff all at once, so cross yer fingers... Don't deploy farm-wide until you've verified it's OK on a test box.
The desktop GUI might be weird until rebooted, but who uses the GUI anyway?
So no more 32-bit support from now on? Moving to the amd64 kernel?
|
|
|
OK. Got the email.
I will send my address and start mining by the end of today, hopefully!
|
|
|
|