NLA (OP)
Member
Offline
Activity: 86
Merit: 10
How does I shot web?
|
|
September 13, 2011, 02:10:29 AM |
|
Evening all, this will be my first post on the bitcointalk forums. I've been around for a while, reading through guides and following along closely enough to build myself a rig with the following specs (from Newegg):
- 1 x Antec One Hundred Black ATX Mid Tower Computer Case
- 1 x SAPPHIRE PURE Black P67 LGA 1155 Intel P67 SATA 6Gb/s USB 3.0 ATX Intel Motherboard
- 1 x Intel Core i3-2105 Sandy Bridge 3.1GHz LGA 1155 65W Dual-Core Desktop Processor BX80623I32105
- 1 x Crucial Ballistix Tracer 2GB (2 x 1GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory w/ Green LEDs Model BL2KIT12864TG1608
- 1 x MSI R6990-4PD4GD5 Radeon HD 6990 4GB 256-bit GDDR5 PCI Express 2.1 x16 HDCP Ready CrossFireX Support Video Card with Eyefinity
- 3 x SAPPHIRE 100311SR Radeon HD 6970 2GB 256-bit GDDR5 PCI Express 2.1 x16 HDCP Ready CrossFireX Support Video Card with Eyefinity
- 1 x SILVERSTONE ST1500 1500W ATX 12V 2.3 & EPS 12V SLI Ready 80 PLUS SILVER Certified Active PFC Power Supply
- 4 x COOLER MASTER SickleFlow 120 R4-L2R-20AR-R1 120mm Silent operation Red LED case fan
I kinda went all-out on building this one; it ended up costing ~$2700 USD. I also built it back when the price of bitcoins was much higher.. I had expectations of making it big. Those were the days, eh?~

Anyway, I've finally gotten all of this set up with Ubuntu 11.04: I plug the system into the wall (directly), power it up, and after 5 seconds or so it boots into Ubuntu from an internal SSD. It's set up to automatically connect to the wireless network in my house, and it runs VNC and Hamachi on boot so I can access it remotely. I plan to eventually have it automatically overclock and start mining 20 or so seconds after boot.

IT ALL SEEMS TO WORK FINE, except for when it comes time to actually mine. Two of the 6970s I've left at the stock BIOS voltage, one I've overvolted in BIOS to 1225mV (up from 1175mV stock), and I've left the 6990 alone BIOS-wise. The BIOS switches on all the cards are set to the 'enthusiast' warranty-killing mode. I set the core clocks of all the cards to 900MHz (seems like a safe overclock, up from 880MHz stock), and that one special 6970 -- special because I've stress-tested it enough to know it can run @ 1000MHz for at least 6 hours straight under full load without hitting 83C -- I set to 990, just to be safe.

Mining runs for a few minutes with phoenix, and then the system crashes. Flat-out sudden shutdown: fans stop spinning, the works. I look through the case, and the LEDs that indicate the PC is on are still lit. I hold the power button for 7 seconds to shut them (and the system?) off. I try to power it back up -- nothing. Just like before, the power-on LEDs are active, but I'm assuming the PSU isn't providing the juice, since its status light is red.

So here's the thing: I'm wondering if I have a defective PSU. 1500W should be enough to power 3x 6970s and 1x 6990, right? I've underclocked the memory on all the cards to 180MHz (the lowest I can go before I notice a hit in MH/s), the CPU is obviously a low-tier model and not overclocked, and I haven't overclocked the system RAM, so I'm not sure where to point the finger here. This 1500W supply is supposed to be REALLY GOOD: it has a great rating on Newegg, it's very efficient, and it's 1500W. I can't imagine that I'm exceeding the wattage requirements with these cards mining. Or am I?

I really wanted to post this in the Hardware forum, but since I'm restricted, here I am. Halp.
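A rough back-of-the-envelope check of that "am I exceeding it?" question, as a small script. The per-card figures are board-power estimates of the kind quoted later in this thread (~375W/450W for a 6990 depending on the BIOS switch, ~300W stock for a 6970); the overclock/overvolt adders and the 100W for the rest of the system are assumptions, not measurements.

Code:
# Rough sanity check of the estimated DC load against the PSU's headline rating.
# Every number here is a TDP-style estimate or a guess, not a measurement.
PSU_RATED_WATTS = 1500

cards = {
    "HD 6990, BIOS switch flipped": 450,    # ~375 W stock, ~450 W in the unlocked position
    "HD 6970 #1, 990 MHz @ 1225 mV": 375,   # overvolted, so well above the ~300 W stock rating
    "HD 6970 #2, 900 MHz": 310,
    "HD 6970 #3, 900 MHz": 310,
}
rest_of_system = 100                        # i3, board, RAM, SSD, fans: a guess

total = sum(cards.values()) + rest_of_system
print("Estimated DC load: %d W" % total)                      # ~1545 W
print("PSU rating:        %d W" % PSU_RATED_WATTS)
print("Headroom:          %d W" % (PSU_RATED_WATTS - total))  # negative means over budget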
|
If my post helped you in some way, please donate to 1NP2HfabXzq1BB288ymbgnLcGoeBsF7ahP.
|
|
|
Khanduras
Full Member
Offline
Activity: 168
Merit: 100
Movin' on up.
|
|
September 13, 2011, 02:31:13 AM |
|
Well, one way to find out whether it's the power supply would be to take a few of the video cards out and test to see if it makes a difference. If lowering the number of cards in operation at once gives the same result, you should then test each card individually. Doing so would rule out a defective card.
If the PSU can truly handle it and none of the cards are defective, the next thing you should look for is either CPU or GPU overheating.
Other than that I have no idea what to tell you. Usually the only thing that would take a system down is a major misconfiguration or hardware issue. Given what you've said, the only thing that makes sense is either defective hardware or overheating.
|
Bitcoin Address: 1N1sex4rktWdxBJcFTczYZF5Xa75C47j4c | Mining Income Address: 15wgpV7fDN8fVYn1q9QaPb9XSLjGRhry5L |
|
|
|
NLA (OP)
Member
Offline
Activity: 86
Merit: 10
How does I shot web?
|
|
September 13, 2011, 02:48:30 AM |
|
Well, to clarify: after 20 or so minutes I can boot the system back up again, so it's *very* likely to be an overheating issue. Just not sure where.
|
If my post helped you in some way, please donate to 1NP2HfabXzq1BB288ymbgnLcGoeBsF7ahP.
|
|
|
Tril
|
|
September 13, 2011, 06:30:28 AM |
|
You don't even need to take out video cards. Just mine on fewer GPUs at once and adjust the number to find the maximum that stays stable.

Keep in mind the total watts don't matter, only the watts on the actual 12V rail (some PSUs have multiple separate 12V rails) you are running the PCI-E cables off of. Good PSUs show the breakdown in a chart on the PSU itself: how many watts on each 12V rail, how many watts on 5V, etc. Also, there is a limit of (3, I think) molex-to-PCI-E converters before you over-amp your molex. Motherboards also have a limit on a separate 12V feed that powers the PCI-E cards through the slots; with too many cards you will overload the motherboard (and usually overheat wires in the ATX motherboard power connector). See:

https://bitcointalk.org/index.php?topic=8847.0 (4x 6990 thread)
http://blog.zorinaq.com/?e=44 (linked from above; describes modding your PCI-E extenders to connect the 12V pins directly to the PSU, to avoid overloading the motherboard)

The easy solution is to do as I said above: turn on one GPU at a time until it crashes, then run with one less GPU for 24 hours to make sure it's stable (a rough sketch of that test is below). If this works, then buy a 2nd PSU and hook it up using one of these: http://www.frozencpu.com/products/11742/cpa-167a/ModRight_CableRight_Dual_Power_Supply_Adapter_Cable.html
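A minimal sketch of that one-GPU-at-a-time bring-up, assuming the Phoenix miner discussed in this thread. The pool URL and worker credentials are placeholders, and the exact flags (including the DEVICE= kernel argument) can differ between Phoenix versions, so check phoenix.py --help on your install before relying on this.

Code:
# Start one Phoenix instance per GPU with a settling pause between each,
# so you can see which card pushes the PSU over the edge.
import subprocess
import time

POOL = "http://worker:password@pool.example.com:8332/"  # placeholder credentials/URL
KERNEL = "phatk"        # the kernel the OP says he is using
DEVICES = [0, 1, 2, 3, 4]   # 3x 6970 plus the dual-GPU 6990 = 5 devices (assumed numbering)
SETTLE_SECONDS = 300    # run each new card ~5 minutes before adding the next

miners = []
for dev in DEVICES:
    cmd = ["python", "phoenix.py", "-u", POOL, "-k", KERNEL, "DEVICE=%d" % dev]
    print("Starting miner on device %d ..." % dev)
    miners.append(subprocess.Popen(cmd))
    time.sleep(SETTLE_SECONDS)   # if the rig dies here, the last card added is your limit

print("All %d miners running; let it soak for 24h." % len(miners))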
|
|
|
|
P4man
|
|
September 13, 2011, 02:12:50 PM |
|
edit: misread. Let me redo my math.
Definitely an under-dimensioned PSU. At stock speed a 6990 requires 375W. When you flip the switch, it goes to 450W. A 6970 is rated at 300W at stock speed. Overvolted and overclocked it's probably closer to 400W (power consumption rises roughly with the square of the voltage), but let's say just 350W on average for all three.
450 + 3x350 = 1500W
We haven't even taken into account the CPU, motherboard, RAM, or HDD. Nor have we considered the fact that most PSUs do not achieve their rated power.
|
|
|
|
FalconFour
|
|
September 13, 2011, 03:20:28 PM Last edit: September 13, 2011, 03:34:11 PM by FalconFour |
|
Yeah, definitely a power thing. For one thing, sucking up 1500 watts is about the equivalent of running a hair dryer constantly... or one of those box room heaters... or something obscenely power-hungry like that. We're talking "dude, you'd better check your power cord to the wall outlet and make sure it's a couple gauges heavier than normal" territory. I'll bet it gets warm, if not hot.

In light of that, you really need to run the ELECTRIC BILL analysis on this kind of completely ludicrous rig... it's going to cost you close to triple digits on your power bill, so you'd better be CERTAIN you're going to make that back in Bitcoin. Not to mention, if you use it indoors and you need to run air conditioning in that area, it's going to cost about 1.5x that amount in electricity to remove the heat produced by the PC. It's no joking matter. It's summer here and I run my mining rig literally outside on the porch (2nd floor = great anti-theft) to offset the cooling problem, and I'm still barely breaking even on the power bill, if that much.

That said, your GPUs are stupidly over-powered. Knock them back to pure stock and you should be fine. You also need to make sure:

1) Your CPU usage is ZERO PERCENT while running miners. Any indication of CPU load is typically a rat's nest of API version, driver version, miner settings, etc. that results in CPU usage caused by a GPU miner. This was a long struggle for me to solve. Basically it boils down to one thing: start with low aggression settings, and test each setting for 10-20 seconds to see which one starts causing CPU usage while you're seeing a "Mhash/sec" reading (actually running, that is). If it causes CPU usage, knock the aggression back down to the last setting where you saw 0. As for drivers, evidently there are some issues with Catalyst 11.7 and 11.8 (the latest), so you ought to move to either 11.6 (browse AMD's archive list) or find the 11.9 beta at guru3d.

2) Always keep your eyes on each GPU's temperature. With that many GPUs, god knows how you fit them in the case. Fans don't move air when they're in a vacuum, so be sure you either have a handful of fans COORDINATED to push air from the front of the case through the back (pulling air in is always easier than pushing it out, it seems), or just leave your case side off and point a blower fan at it. But I can tell you one thing: those GPUs were NEVER designed to operate at full capacity, right next to each other, for an extended period of time. You've got to stay on top of their "health" to make sure they keep working properly.

3) Clocks. First of all, don't "just" overclock. Mining is a very unusual situation to put a GPU in, so it doesn't need the same uber-high clock speeds that 3D rendering usually requires. For example, on my 6770, setting the memory clock (default 1200, I think) to 300MHz actually puts the GPU in some sort of timing mode that essentially super-charges its mining speed, and drops about 10C off its running temperature. Maybe it shuts down some unused clock modules or something. Whatever it does, it lets me run the thing at 950MHz/300MHz (core/mem) when its default is more like 850/1200, and it bangs out an extra 30 Mhash/sec while keeping the GPU core cooler. Find the right settings, search around, and don't just "move the slider up". And FFS, put the voltage back to stock; overvolting is a freakin' sweet way to nuke your cards in no time...

4) Clients and settings. There's a HUUUUGGEEE difference that can be made between one client and another, and between one parameter and another. For example, my GPU goes from 211 Mhash/sec to 160 Mhash/sec when I just remove "WORKSIZE=128"; the kernel (currently a modified phatk posted here somewhere, but very similar to the phatk2 provided with Phoenix) defaults to a worksize of 256 and my GPU only has a 128-bit memory bus (it doesn't know that), so performance goes through the floor. You may mess your pants when you see some 300-odd Mhash/sec, but what you don't know is that it could be straining itself with some bad settings and you could be seeing 400 if you played around a bit. "BFI_INT" is pretty much a necessity. Play with "VECTORS"/"VECTORS2"/"VECTORS4" in combination with different "WORKSIZE=" values (64, 128, 256) -- the "vectors" values mesh with the given "worksize" to produce each sort of work "chunk", and finding the right combination is key to a smooth-running system.

5) You don't need a new power supply, just quit sucking power out of the wall like it's going out of style! Run the numbers: consider that 1500 watts continuous is what you're drawing (since we already know we're blowing past that as it is). That's approximately 1.5 kW (long form, "1.5 kWh per hour"), and running 24/7 for an average month of 732 hours, we're talking 1,098 kWh for your rig each month. Plug in a price at about my average power rate of $0.15/kWh and... *click click*... yeah, that'll run you $164.70/month in power for the box alone. Remember what I said about air conditioning; when you factor in the $247.05 for cooling the house from the heat generated by the rig, that's $411.75 a month. 'Ya sweating yet?

I dunno, maybe I tldr'd here, spent 20 minutes writing this crap for one guy's rig, but dude, seriously, 2000 bucks? I was sweating spending 100 on my 6770 and had to split it with a friend. If it helps you any, there's an address in the sig that could help me buy a complement to my single 6770!

edit: I may have "duh, I already know that" on a few topics regarding clocks, but I went back and noticed you said your memory was at 180MHz? Did you get that from somewhere? I read 300MHz for mine, and if I vary from that even a little, I either get a power-consumption (heat) hit or a performance hit -- 300MHz is the "sweet spot" for the thing. Good to be right on the number, but even better to run it through a meter and see what it does to your power consumption.
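Point 5's arithmetic as a short script, using the same assumed figures (1.5kW continuous draw at the wall, 732 hours in a month, $0.15/kWh, and a 1.5x cooling overhead) -- plug in your own numbers, since every one of these is an estimate.

Code:
# FalconFour's electricity math, spelled out. All inputs are assumptions.
draw_kw = 1.5          # assume the rig pulls ~1500 W continuously at the wall
hours_per_month = 732  # ~30.5 days
rate = 0.15            # $/kWh, his local rate
cooling_factor = 1.5   # extra A/C energy needed to remove the heat, his estimate

rig_cost = draw_kw * hours_per_month * rate   # 1.5 * 732 * 0.15 = $164.70
cooling_cost = rig_cost * cooling_factor      # = $247.05
print("Rig:     $%.2f/month" % rig_cost)
print("Cooling: $%.2f/month" % cooling_cost)
print("Total:   $%.2f/month" % (rig_cost + cooling_cost))   # = $411.75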
|
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
|
|
|
NLA (OP)
Member
Offline
Activity: 86
Merit: 10
How does I shot web?
|
|
September 13, 2011, 09:14:26 PM Last edit: September 13, 2011, 11:50:19 PM by NLA |
|
> edit: I may have "duh, I already know that" on a few topics regarding clocks, but I went back and noticed you said your memory was at 180MHz? Did you get that from somewhere? I read 300MHz for mine, and if I vary from that even a little, I either get a power-consumption (heat) hit or a performance hit -- 300MHz is the "sweet spot" for the thing. Good to be right on the number, but even better to run it through a meter and see what it does to your power consumption.

Well, this isn't the first time I've built a computer. Also, the 4x GPUs fit in just fine; the case definitely has enough slots, as does the mobo. If you've ever put a bunch of 69XX-series cards in a case before, you'll notice there's practically no clearance between cards -- so I made some small 2-3mm spacers to place between the cards for a little better airflow.

The 180MHz comes from me testing the rig based on MH/s performance of my 6970 miner cards. I was able to clock the memory down all the way to ~180MHz before the MH/s performance started to plummet (why is my 920MHz 6970 mining with phatk @ 300MH/s?!). Lowering the RAM speed on the GPUs seemed to significantly lower GPU core temps as well, and I'd assume it lowers power usage a bit, so win-win. And I didn't "just overclock" the GPUs; I tested each card painstakingly based on benchmarks and reviews I'd read around the internet to get an idea of stability vs. speed vs. power usage, etc. I've put in the leg-work here to find what works and what doesn't for these cards.

As for electricity, I have a deal worked out with a friend who gets unlimited electricity (let's not dwell on that topic and what it implies) where I'll basically store the rig at his place in an empty room by itself, which has an AC vent and stays fairly cool, for maybe $10-15/month. A small fee, no? The rig would connect automagically to his house over WiFi and would be plugged into the wall. He'd close the door, and the fans would be running @ 100% all the time. Really, I've done my homework here in terms of automated setup and getting things installed and running with Linux. The only problem -- and it's a big one -- is GPU stability.

And yes, the tower itself is connected directly to the wall with some lower-gauge wire. Very thick. It has to be. Those 4x fans I mentioned are high-CFM fans, with 2 in the front of the case pulling air in, 1 on the back pushing out, one on the top pushing out, and one on the panel side blowing air directly on the top 3 cards. There is a LOT of airflow in this case.

I'd like this to generate 2200MH/s -- now I know that's purely idealistic, but that's how I planned it from the start when I was shopping for parts. I guess I could run everything at stock, but I'd really love a bare minimum of 2000MH/s. Electricity isn't a concern for my situation (at the moment). I just need to stick this somewhere and let it run with 100% fans 24/7 at sub-90C temperatures. Also, if you're clever (read: clever), $2700 to drop on a rig like this isn't that insane. Let's just leave it at that.

If all I need to do is power the 6990 with its own 500W power supply for stability, I have no problem buying one from Newegg to make this work. I really want this whole thing to work. I'd like it to work without having 2 PSUs (hence why I bought a 1500W beast to begin with), but I'll certainly invest a few more dollars in it if I have to. Especially if those extra dollars let me hit 2100MH/s or so.

EDIT: I'll look into the aggression/CPU usage advice, hm.. I use VNC and SSH into the machine sometimes; might that little bit of CPU usage trigger an excess of watts that my power supply can't deliver? :?

EDIT2: Just ran with everything on stock, all switches set back to the original low-wattage position, set all the GPU memory speeds to 180 (low!), and after 5 or so minutes the whole rig crashed. Defective PSU? 375 + 300x3 = 1275W from the GPUs, which I think leaves enough wiggle room for a non-overclocked i3, RAM, and an SSD. Not sure what to think about this now. :/
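Since the rig runs headless and remote, a minimal temperature-logging loop can keep an eye on the cards between VNC sessions. This assumes the Catalyst driver's aticonfig tool is available on the Ubuntu install; the --odgt flag and the log path are assumptions, so verify against your driver version (aticonfig --help) before using it.

Code:
# Minimal GPU temperature logger for a headless Catalyst/fglrx Linux box.
import subprocess
import time
import datetime

LOG = "/home/miner/gpu-temps.log"   # hypothetical path; pick anything writable
INTERVAL = 60                       # seconds between samples

while True:
    try:
        out = subprocess.check_output(
            ["aticonfig", "--odgt", "--adapter=all"]).decode()
    except (OSError, subprocess.CalledProcessError) as e:
        out = "aticonfig failed: %s" % e
    stamp = datetime.datetime.now().isoformat()
    with open(LOG, "a") as f:
        f.write("%s\n%s\n" % (stamp, out))
    time.sleep(INTERVAL)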
|
If my post helped you in some way, please donate to 1NP2HfabXzq1BB288ymbgnLcGoeBsF7ahP.
|
|
|
FalconFour
|
|
September 14, 2011, 03:03:48 AM |
|
Wow, OK, you've definitely got shit planned out proper! Great work there. Usually the image I get of someone with 4 GPUs wondering why their power supply clicks off is summarized by the phrase "trust fund baby", haha... seems you don't exactly fit that bill! Well, you've taken care of the clocks, so that's cool.

No, I'd think VNC wouldn't affect it that much, since I use TeamViewer to remote into mine and the wattmeter barely flitters a watt or two.

However... I'd like to throw one far-fetched idea out there just for curiosity. Linux, to me, strikes nothing but images of piss-poor graphics performance, glitchy kernel-module drivers, half-written software, and elitist "support" from anti-Micro$oft fanboys. I dabble in desktop Linux every now and then (server Linux most of the time), and every time I put Linux on a desktop I half-regret/half-pity it -- despite hours of Googling, tweaking, compiling, configuring, researching, and downloading, it always runs like utter shit even on capable hardware and "proper" software. That said, have you considered putting Windows 7 to the test on there? I think it'd be worth a try, with all its power-management features and all.
|
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
|
|
|
NLA (OP)
Member
Offline
Activity: 86
Merit: 10
How does I shot web?
|
|
September 14, 2011, 05:26:21 AM |
|
Now if only I could get my shit working proper. Tomorrow I'm gonna flip the 6990's BIOS switch back to high-wattage mode and let it run all day mining by itself with a bit of an overclock. That should push the PSU, but it definitely shouldn't cause a random shutdown. If it does shut down randomly, I probably have a defective PSU. For good measure I'll probably underclock the CPU and RAM a bit too.

I'm still open to suggestions, insight, and such. I can't believe it still crashed after I returned everything to stock and flipped the BIOS switches back to low-wattage mode..
|
If my post helped you in some way, please donate to 1NP2HfabXzq1BB288ymbgnLcGoeBsF7ahP.
|
|
|
Revalin
|
|
September 14, 2011, 08:19:49 AM |
|
I definitely recommend getting a Kill-A-Watt to measure how much you're really pulling. Remember to multiply the measurements by 1.25 (the PS is only 80% efficient). You may find that the estimates of what the cards pull at stock clock vs overclock are wrong.
90% chance you're hitting the thermal limit, not overcurrent or undervolt limits. Put the PS into free air and use another fan to pump some more air into it. You're probably right on the edge and it just trips whenever the room's a little too warm.
Why not pull a PS from another computer to run one of the cards and see if that helps?
There's no point underclocking the CPU. It made sense on fixed frequency CPUs years ago, but all the current ones have frequency scaling. At the most, just lock the scaling at the slowest value to prevent power surges.
|
War is God's way of teaching Americans geography. --Ambrose Bierce Bitcoin is the Devil's way of teaching geeks economics. --Revalin 165YUuQUWhBz3d27iXKxRiazQnjEtJNG9g
|
|
|
FalconFour
|
|
September 14, 2011, 04:18:52 PM |
|
@Revalin: Don't know where you got the multiply by 1.25 thing - not only would that represent a "75%" efficiency (125% correction), but... the Kill-A-Watt has no care how "efficient" the device it's monitoring is, it calculates how much power is flowing through it and that's the only part that counts! In fact, the KAW actually *does* calculate efficiency, and it's called "PF" (Power Factor), represented as a number between 0.00 and 1.00 - a PF of 0.98 means it's 98% efficient (and most PC power supplies are, since they use PFC to reclaim the wasted energy in the switching cycle). You shouldn't be doing any manual calculations with the numbers given to you by the Kill-A-Watt, plain and simple - it tells the facts as the power company sees them.

Agree on pretty much everything else though. The PSU may not be pushing enough air through it - I'm pretty certain most of these PSUs were never actually *designed* to have a full load! I think a good test might be to fire up one miner at a time, giving the PSU a chance to get its cooling figured out for 3-5 minutes after each one. Maybe that'll help?
|
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
|
|
|
Revalin
|
|
September 14, 2011, 05:34:05 PM |
|
> @Revalin: Don't know where you got the multiply by 1.25 thing - not only would that represent a "75%" efficiency (125% correction)

Sorry, multiply by 0.8. I was going the wrong way. But 1.25 is the correct factor going the other way:

100 watts AC * 0.80 efficiency = 80 watts DC
80 watts DC * 1.25 = 100 watts AC

> ...but the Kill-A-Watt has no care how "efficient" the device it's monitoring is, it calculates how much power is flowing through it and that's the only part that counts!

That's the part that matters for your power bill. However, if you want to know if you're exceeding the power supply's rating, you have to take efficiency into account to estimate how much power is being drawn on the DC side, which is what the PS is rated for.

> In fact, the KAW actually *does* calculate efficiency, and it's called "PF" (Power Factor), represented as a number between 0.00 and 1.00 - a PF of 0.98 means it's 98% efficient (and most PC power supplies are, since they use PFC to reclaim the wasted energy in the switching cycle).

Power Factor is not efficiency at all. It's a measure of how bursty the power usage is. An incandescent lamp has a PF of 1.00, since it draws current in a perfect sine wave with the voltage. Induction motors pull current out of phase with the voltage, so they have a low PF. Old power supplies draw power in bursts around the peak voltage, so they have a low PF. Those bursts mean you need heavier wiring than your average current requirement suggests, since the peak current goes much higher. It also means the power company needs bigger transformers, because they're rated in volt-amps (i.e., limited by the peaks), not watts. As far as your power bill goes it doesn't matter, since they only measure average watts, not peak volt-amps.

> You shouldn't be doing any manual calculations with the numbers given to you by the Kill-A-Watt, plain and simple - it tells the facts as the power company sees them.

Agreed it tells it how the power company sees it, and there are no conversions necessary to understand how you'll be billed. However, you do need to convert if you want to estimate how much power is being pulled on the DC side. The Kill-A-Watt has no way to know how much power is being wasted as heat inside the power supply.
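The same conversion as a tiny helper, with the 80% efficiency figure kept as an assumed round number rather than a measurement of this particular unit:

Code:
# Convert between AC watts at the wall (what a Kill-A-Watt reads) and the
# DC-side load (what the PSU is rated for), assuming ~80% efficiency.
EFFICIENCY = 0.80   # assumption; real efficiency varies with load and model

def dc_load_from_wall(ac_watts, eff=EFFICIENCY):
    """Estimate DC-side load from a wall (Kill-A-Watt) reading."""
    return ac_watts * eff            # 100 W AC -> ~80 W DC

def wall_draw_from_dc(dc_watts, eff=EFFICIENCY):
    """Estimate wall draw from a DC-side load."""
    return dc_watts / eff            # 80 W DC -> 100 W AC (the 1.25x factor)

print(dc_load_from_wall(1500))   # ~1200 W of DC load if the wall meter shows 1500 W
print(wall_draw_from_dc(1320))   # ~1650 W at the wall if the 12V rail were maxed out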
|
War is God's way of teaching Americans geography. --Ambrose Bierce Bitcoin is the Devil's way of teaching geeks economics. --Revalin 165YUuQUWhBz3d27iXKxRiazQnjEtJNG9g
|
|
|
FalconFour
|
|
September 14, 2011, 06:18:25 PM |
|
Ahh... OK, I thought you were referring to the "how much it's going to cost you" discussion with the 1.25 thing. Absolutely right all around - the K-A-W has no idea how efficient the device is internally, but it knows how much power is being wasted back to the grid with AC phase "bouncing" (I did research on that stuff too). It's still a measure of efficiency - if you've got something with a 0.75 PF, you may be using 100 watts to only get 75 watts' worth of "energy" out of the device = inefficient! Crappy "brick" power supplies tend to do that.

But yeah, you're right on the 1.25 = 80% correction... I never did take any advanced math classes (trig, statistics, etc.), so most of the number-shuffling I do is an "um, well, putting the numbers together like this seems to make sense" sort of thing... heh.
|
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
|
|
|
P4man
|
|
September 14, 2011, 06:30:15 PM |
|
FYI, this is what it says on the PSU: http://www.xbitlabs.com/images/coolers/1000w-psu-roundup-2/9p7s.jpg

That's "only" 1320W on the 12V lines, which is all that the GPUs use. Even at stock speeds, 300Wx3 + 375W = 1275W for the GPUs alone. Granted, that's TDP, and while mining at low memory speeds it might be lower than that, but you have other devices that need 12V, not least the CPU.
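The label math in script form; the card wattages are the stock TDP figures quoted in this thread, and the 100W allowance for the rest of the system is a guess.

Code:
# Why the combined +12V rating matters more than the 1500 W headline number.
RAIL_VOLTS = 12
RAIL_AMPS = 110                    # combined +12V limit from the ST1500 label
rail_watts = RAIL_VOLTS * RAIL_AMPS            # 1320 W

gpu_watts = 3 * 300 + 375          # three 6970s at stock + one 6990 at stock = 1275 W
print("12V capacity:  %d W" % rail_watts)
print("GPU TDP total: %d W" % gpu_watts)
print("Left for CPU, board, fans, drives: %d W" % (rail_watts - gpu_watts))  # only 45 W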
|
|
|
|
hmongotaku
|
|
September 14, 2011, 06:37:22 PM |
|
> edit: misread. Let me redo my math.
>
> Definitely an under-dimensioned PSU. At stock speed a 6990 requires 375W. When you flip the switch, it goes to 450W. A 6970 is rated at 300W at stock speed. Overvolted and overclocked it's probably closer to 400W (power consumption rises roughly with the square of the voltage), but let's say just 350W on average for all three.
>
> 450 + 3x350 = 1500W
>
> We haven't even taken into account the CPU, motherboard, RAM, or HDD. Nor have we considered the fact that most PSUs do not achieve their rated power.

I agree. This is why you shoulda went with AMD, they're less power hungry. You should try them all at stock first and work up. Silverstone is a good name brand; I'm sure it can hit 1600W or so, over the rated wattage.
|
|
|
|
madnod
Newbie
Offline
Activity: 10
Merit: 0
|
|
September 14, 2011, 06:38:44 PM |
|
> FYI, this is what it says on the PSU: http://www.xbitlabs.com/images/coolers/1000w-psu-roundup-2/9p7s.jpg
>
> That's "only" 1320W on the 12V lines, which is all that the GPUs use. Even at stock speeds, 300Wx3 + 375W = 1275W for the GPUs alone. Granted, that's TDP, and while mining at low memory speeds it might be lower than that, but you have other devices that need 12V, not least the CPU.

+1, you need more power
|
|
|
|
Revalin
|
|
September 14, 2011, 07:57:43 PM |
|
> it knows how much power is being wasted back to the grid with AC phase "bouncing"

Nope. Power fed back will just be consumed by other devices in your house, thus reducing the amount pulled through your meter. It's not wasted. In the unlikely case that you have a huge inductive load (very large motors), your power meter subtracts the amount you kick back. People with solar arrays see this all the time: if they generate more than they use, the meter literally runs backwards.

Anyway, in the case of crappy power supplies, they just switch between using a burst and then nothing. They don't kick back to the grid.
|
War is God's way of teaching Americans geography. --Ambrose Bierce Bitcoin is the Devil's way of teaching geeks economics. --Revalin 165YUuQUWhBz3d27iXKxRiazQnjEtJNG9g
|
|
|
P4man
|
|
September 14, 2011, 08:00:58 PM |
|
> I agree. This is why you shoulda went with AMD, they're less power hungry.

You are kidding, right? Not that it would matter a lot in this case, but that doesn't make you any more right. I also think you missed that the PSU is rated for 110A max combined on 12V.
|
|
|
|
FalconFour
|
|
September 14, 2011, 08:21:34 PM |
|
Yeah, I'm with the "you're kidding, right?" crowd on the AMD comment - AMD CPUs are power-hungry whores compared to Intel's offerings. Intel was a good choice here.

But now that you posted the label... SILVERSTONE?! Seriously? Mid-low grade garbage in this application - good for pretty much anything except being fully utilized. Get a Thermaltake, an Antec, or - I hear good stuff about a company called PC Power & Cooling (though I've never seen their products in person) - who knows what else. At 1500 watts, the PSU has to be VERY professionally designed; trying to regulate and process that much power and stay smooth requires some super-serious engineering that a cheap Chinese piece of crap is not going to be able to stand up to. Yeah, I'd say your PSU is the weakest link... go return that piece of crap and get a serious PSU. I'd suggest running it by us here first.
|
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
|
|
|
P4man
|
|
September 14, 2011, 08:57:16 PM |
|
> But now that you posted the label... SILVERSTONE?! Seriously? Mid-low grade garbage in this application -

If you look up any serious reviews of this PSU, it actually does very well; it's getting top grades in all the reviews I've seen, including [H] and their torture tests. Here is the conclusion: http://www.hardocp.com/article/2010/09/01/silverstone_strider_st1500_1500w_power_supply_review/9

I really don't think it's a bad PSU, but you can't expect a PSU to deliver more than it's rated for. To run this many power-hungry cards, he needs a second PSU; "1500"W (which is actually 1300W where it matters) isn't enough for his config. Ditch a card or buy a second PSU for the GPUs.
|
|
|
|
|