"Military class" components are mostly marketing garbage. The MOSFETs on the 870-G45 are some of the lowest-quality parts around. The only good news is that if you're running a low-power Sempron processor, this won't really be a problem as far as I know. But those boards are notoriously junky. The caps might be better, but I suspect they aren't and it's all just marketing hype.
The military just uses Sun Netra, Sun Fire, and various HP and Dell servers. They certainly aren't buying MSI motherboards. The 'milspec' marketing term has been around forever, and it rarely means the military uses the product, or even that the product meets 'military specification'.
|
|
|
Certainly I'm biased, but...
Trying to run more than a couple of ANY type of card in an enclosed case is trouble at best. 6990s, being the double-GPU beasts that they are, are going to be even worse.
Taking the sides off and running large fans pointed inside is really the only option, but at that point are you really running an enclosed case anymore? You'd be better off with an open-frame rig.
The warmest card in my open-frame rig is running at 78C, and the only reason that card is that hot is because it's a 6970 and I don't feel like cranking the fans up to 80% or higher. Too damn loud. They just run hot no matter what you do.
The 12 5870s in my rig are all in the mid-60s C. It's all airflow.
I have 15 cards running in a 32" x 20" by 20" space, and the only reason the card density can be that high is because the hot air is dumped outside and all the cards have enough spacing to get fresh/cool intake air.
That just won't happen in an enclosed case.
Sounds to me like the place you bought your rig from is just slapping the Bitcoin name on a HAF X case to make a buck, without doing any real research or testing.
There's no question that an open-case solution is best; we're simply discussing options for a closed case. Two Delta 240 CFM 120mm case fans will move enough air to cool those cards in an enclosed case. They are super loud, though, so that's the trade-off.
|
|
|
If you guys are talking about running more than two cards in a rig, then 6990s are a waste of time: they fail at power, cooling, price, and density efficiency.
|
|
|
Linux is a pain to set up and I've done it twice but
hahahahahah oh man, you are killing me! LinuxCoin takes 10-15 minutes to install in persistent mode from the live CD. Admin time is far, far less than on Windows as well. I think you may forget that a lot of miners are also sysadmins. If you took a couple of hours to learn how to use it, you might be surprised to find it's more flexible, stable, cheaper, and easier to use than wasting your time click-click-clicking around with the mouse.
|
|
|
Expected length of the name: 8 Expected length of the password: 8 - 12
Seriously?!
I won't even register now, and I would suggest no one else does either.
If that's your reason for not joining, then mine elsewhere. It's not hard to create an 8-char username and an 8-12-char password. wtf.
|
|
|
What a rip-off. Their case with no cards is $1099?! My 4-card rigs cost $394 before cards, and that's case/CPU/MB/RAM/disk/fans/PSU.
|
|
|
I've come to realize that GPU temps are dependent on a huge array of factors - it's difficult to optimize all of them while retaining performance.
I have 4x 6950s; they were all running ~85-90C in a case with exceptional airflow. I now have them in open air with a ridiculous ventilation setup pushing cold air from an adjacent room directly onto the cards, and I'm getting 75-85C now.
If your cards are the 'two fan' variety, the only 'intake' is the fans themselves. They simply expel hot air everywhere, which generally rolls around and gets sucked back into the fans. This is a fatal flaw of these cooler types. Blowing a box fan, etc. directly at the cards will make no difference, and may in fact increase temperatures, as you're fighting against the exhaust of the cooler fans.
What flags are you using to run your miner? I've found a drastic difference between -f10 and -f30 on my cards with poclbm, for example: with -f10, my GPU runs at 99% load and temps rise at least 5C; with -f30, I run at 98% load and cooler. I lose about 5 MHash/s per card at -f30 (the default).
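To put that 5 MHash/s in perspective, here's a quick back-of-envelope sketch. The ~330 MH/s per-card baseline is my assumption; plug in your own rate:

```python
# Rough cost of running poclbm at -f30 instead of -f10.
# base_mhash is an assumed per-card rate; adjust for your own cards.
base_mhash = 330.0   # assumed hashrate per card at -f10
loss_mhash = 5.0     # observed loss per card at -f30

loss_pct = 100.0 * loss_mhash / base_mhash
print("-f30 costs about %.1f%% hashrate per card" % loss_pct)
```

That's roughly a 1.5% hashrate hit in exchange for ~5C cooler cards, which is usually a good trade on a hot rig.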
MSI's website blows and I can't see pics of the card for some reason, so I can't offer more specific suggestions.
For a closed-shroud card with a turbine fan (reference cooler), intake is at the rear of the card and exhaust goes out the back of your case - these are very easy to direct air into.
Dual-fan or non-reference designs with big honkin' heatpipes, etc, are harder as you need to try to prevent the expelled hot air from being recirculated through the card.
If you've re-applied the thermal paste properly (very thin layer spread over the surface of the GPU) and you're getting 83C at full load, you're actually not doing too badly. These cards can usually take the pain, and don't thermal-throttle until 100C+.
I'd do more experimenting before spending a bunch of money on aftermarket coolers. Don't get discouraged, it's really not easy to find the right balance.
Great advice there. I've been having similar issues with the non-reference cards, and the only solution I've found (when running more than two cards per box) is this rather insane Delta fan that pushes 240 CFM. Luckily my server rack isn't in my living room anymore. http://www.newegg.com/Product/Product.aspx?Item=N82E16835213001
|
|
|
Is there a JSON API? I've got all of my 3 GHash/s in your pool and would like to be able to monitor it without logging in.
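In case a JSON endpoint does get added, a monitor could be as simple as the sketch below. The field names and payload here are hypothetical - they'd need to match whatever the pool actually exposes:

```python
import json

# Hypothetical stats payload; real field names depend on the pool's API.
SAMPLE = '{"username": "dan", "hashrate_mhash": 3000.0, "valid_shares": 12345}'

def hashrate_mhash(payload):
    """Pull the reported hashrate (MH/s) out of a stats JSON string."""
    return float(json.loads(payload)["hashrate_mhash"])

rate = hashrate_mhash(SAMPLE)
# Alert if the pool reports less than the ~3 GHash we expect to see.
print("OK" if rate >= 3000.0 else "ALERT: hashrate dropped")
```

Wire the fetch into cron and pipe the ALERT line to mail, and you never have to log in to the web UI at all.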
|
|
|
Feature Request: Tabular output. It would be nice to get most of the information MiningMonitor has in tabular form so that we can do our own analyses in Excel, etc.
I am thinking:
Date, TotalShares, StaleShares, BTCEarned, AverageHash, Difficulty (at noon), and MtGox price (at noon)
Or something like that.
Thoughts?
I prefer CSV, but yeah, it would be the tits to have a data export.
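For what it's worth, producing that kind of CSV is trivial with Python's csv module. The columns follow the proposal above; the two rows are invented sample data:

```python
import csv
import io

# Columns from the proposal above; the rows are made-up sample numbers.
HEADER = ["Date", "TotalShares", "StaleShares", "BTCEarned",
          "AverageHash", "Difficulty", "MtGox"]
rows = [
    ("2011-07-01", 41230, 312, 1.0421, 2980.5, 1379192.0, 15.40),
    ("2011-07-02", 40877, 298, 1.0110, 2955.1, 1379192.0, 15.12),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(HEADER)
writer.writerows(rows)
print(buf.getvalue())  # paste-ready for Excel or a spreadsheet import
```

Same data, one row per day - Excel, gnuplot, or anything else can take it from there.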
|
|
|
2) Try and keep the room extra cool, perhaps put a fridge in the mining room with the door open
Refrigerators are not magical cold-making boxes; they're heat exchangers. Any heat they remove from their contents is expelled through the coils on the back of the fridge. Given that this is also less than 100% efficient, an open-doored refrigerator adds more heat to a room than it removes. Honestly, people, L2Physics.

Funny how it keeps my beers cool then, smartass!

You are the best troll I've read all week!
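The energy balance is easy to sketch; the 150 W and COP of 2.5 below are illustrative numbers, not measurements of any real fridge:

```python
# With the door open, the cold side and the coils are in the same room,
# so the only net effect is the compressor's work dumped as heat.
heat_removed_w = 150.0               # heat pulled from the cold side
cop = 2.5                            # assumed coefficient of performance
compressor_w = heat_removed_w / cop  # electrical work driving the pump
heat_expelled_w = heat_removed_w + compressor_w

net_to_room_w = heat_expelled_w - heat_removed_w
print("net heating of the room: %.0f W" % net_to_room_w)  # always positive
```

Whatever COP you assume, the net is always the compressor's power draw, added to the room as heat.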
|
|
|
Sure. On MM, other pools (such as bitclockers) support having multiple workers within each pool. This allows you to aggregate all of the workers from the pool together. I wouldn't mind being able to create an "Eligius Pool" and then add several wallet addresses as workers inside that pool. Right now, each worker is its own pool.

OK, I see what you mean. The problem with that is that the API key for Eligius is the worker address. I'm sure Pete can create a workaround for that issue.
|
|
|
Excellent! I am glad to see that added.
For consistency with the other pools, would it make more sense to have the Eligius API key (BTC address) as a worker property, not the pool's? It would be nice if I could make an "Eligius Pool" (without any API information) and then add several workers (BTC addresses). One of the major drawbacks of Eligius is that it does not support multiple workers. If MiningMonitor could add multiple workers to the Eligius pool, I could keep an eye on aggregated stats.
Dan
I think it would be as simple as having multiple wallet addresses, one per worker, and naming the pools accordingly, so that you would have N Eligius pools for however many workers you are using. Then you would get an email about whichever pool is offline, as each Eligius pool is only capable of one worker/wallet address.

I was thinking the Eligius pools could have sub-workers for each wallet address. That appears to be how the other pools work.

Not sure how that would work. Can you explain in more detail?
|
|
|
MiningMonitor is only 1 BTC/month. Pete has added so many new features in the last two weeks that it is fully worth the cost. The proxy + MiningMonitor has solved all of my uptime and notification issues. For example, if one of my 12 GPUs goes down (or all of them, because a pool is offline) at 10am and I'm at the office and don't get home until 6pm, then it definitely makes sense to pay 1 BTC/month to know about it rather than losing 8 hours of mining time. Even if I were to SSH into my home LAN (which I do anyway) to fix things, that's a whole bunch of micromanagement during the work day when I'm supposed to be doing things at work. Handling the miners should almost never happen - they should be online 24x7, and if the primary pool is down then the secondary and tertiary pools should be available. If my outgoing internet connection on the LAN goes down, that's another story - and once I get my secondary Cisco router online, the internet will automagically fail over to my secondary provider. Miners must be active 24x7!

I think you missed the point. When MiningMonitor notifies you of a downed miner, you still have to manually reset it. What the script does is detect the downed miner and restart it without telling you. MiningMonitor: miner down -> MiningMonitor notification -> you reset -> miner. Scripts: miner down -> script detects -> script resets -> miner. No middleman, no money spent. Anyway, that's how I view it. If MiningMonitor works for your setup, then by all means use it. I'm not here to make you change your setup; I'm just stating the way I see it. Yes indeed, 24x7 mining ftw!

I understand what SmartCoin does - but since I have actually spent time with both SmartCoin and BTC-mining-proxy, I can say from *my* experience that the proxy accomplishes the purpose much more easily and cleanly than SmartCoin. I can switch pools for all 12 GPUs in one click of a web interface with BTC Mining Proxy, whereas with SmartCoin I have to SSH into my bastion host, then SSH into the rigs, then get into the smartcoin interface, then issue a couple of commands. The proxy has a protected web interface that I can access much more easily. I'm not trying to bad-mouth smartcoin - it's a valid project and I'm glad it's being developed. I just don't think it works as well as the combination of MiningMonitor (trending, graphs, and email/SMS notification) and the proxy (pool management, worker-management web interface, dashboard to see all 12 GPUs' state on one screen). If smartcoin works for you, that's great. It frustrated me, and I didn't like the way it manages workers/pools/profiles/etc.
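Whether 1 BTC/month is "worth it" reduces to a simple break-even calculation; the 0.5 BTC/day yield below is an assumption - substitute your rig's actual earnings:

```python
# Break-even downtime for a 1 BTC/month monitoring fee.
fee_btc_month = 1.0
daily_yield_btc = 0.5             # assumed healthy-rig earnings per day
hourly_yield_btc = daily_yield_btc / 24.0

breakeven_hours = fee_btc_month / hourly_yield_btc
print("fee pays off if it saves > %.0f h of downtime per month" % breakeven_hours)
```

At that assumed yield, catching two full workdays of outages per month already covers the fee.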
|
|
|
It just keeps getting better!
|
|
|
Headless automation is great. Thanks for the write up.
From personal experience, it's way easier to manage pool H/A with Bitcoin Mining Proxy than with Smartcoin or some homegrown scripts that watch load. I have 4 boxes with 3 cards each and I'm using MiningMonitor to notify me of downed workers - having to SSH into a box to see Smartcoin information is a waste of my time. I have a separate web/db server running the proxy app so it doesn't get affected by rigs that need to reboot or have work done to them. YMMV.
You have your point, but this is designed to keep the GPUs online, not the connection to the pool alive. If the pool goes down, then no matter how many restarts, the pool isn't going to come back up. And MiningMonitor is kinda expensive. Isn't it more profitable to let the worker stay down until you get home than to pay MiningMonitor's fees?
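The "miner down -> script detects -> script resets" loop can be sketched in a few lines; MINER_CMD is a placeholder for whatever miner invocation you actually run:

```python
import subprocess
import time

MINER_CMD = ["python", "poclbm.py"]  # placeholder: your real miner command

def should_restart(proc):
    """poll() returns the exit code once the process has died."""
    return proc.poll() is not None

def watchdog(max_restarts=None):
    """Relaunch the miner whenever it exits, with a short back-off."""
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        proc = subprocess.Popen(MINER_CMD)
        proc.wait()                  # block until the miner process dies
        restarts += 1
        time.sleep(10)               # brief pause before restarting
```

This keeps the GPUs busy after a crash, but as noted it can't help when the pool itself is down - that's what backup pools are for.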
|
|
|
It's not a pic of tits, but it's almost as nice. Here's the current state of my rack. Monitor on left; boxes listed top to bottom:
Ultra40 #1 (2x 6870 (MSI and ASUS))
Ultra40 #2 (2x 6870 (MSI and ASUS))
Quad0 box (currently 2x 6950 MSI, room for two more single-slot 5770s). Running a Sapphire Pure-Black MB with 4x PCIe slots.
Sun X4600-M2, not running
Sun X2100 #1, not running, but maybe it will provide H/A for the next box...
Sun X2100 #2, running BTC-Mining-Proxy, apache/mysql, and monitoring apps
Tres0 (2x 6870 MSI Hawk, 1x XFX 5770), room for one more single-slot 5770. Running a Sapphire Pure-Black MB with 4x PCIe slots.
Tres1 (2x 6870 MSI Hawk, 1x XFX 5770), room for one more single-slot 5770. Running a Sapphire Pure-Black MB with 4x PCIe slots.
APC SmartUPS 1500VA
I'm planning on removing the Ultra40 boxes and replacing them with duplicates of the custom builds (the tres0 and tres1 boxes). And of course I'll be removing the X4600-M2 box, since it's taking up space and I'm not utilizing it... so that and the rest of the empty space will be filled with more boxes like tres0. I have more Delta fans coming this week that will push 240 CFM each (120mm), and those are going into quad0, tres0, and tres1. You like?
|
|
|
Switch your power supply from 120V to 240V. Most power supplies are ~2% more efficient at the higher voltage.

Hmm, can you elaborate on this? Is this something you can do without having a special 240V outlet put in, or can it be run over your normal lines?

Um, you shouldn't be switching the PSU to a different voltage if your electric lines aren't actually running that voltage. I.e., if your circuit is 110/120V, you can't just put the PSU on 208/240V to save energy. Furthermore, if your circuit is 240V and you put the PSU on 120V, you'll fry the PSU immediately. Finally, most PSUs these days are "auto-switching," which means they adjust to whatever voltage you plug in - so there's no switching it manually anyway.
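The ~2% figure translates into a modest but real saving. The wattage and efficiency numbers below are assumptions for illustration, not measurements:

```python
# Wall-power saving from an assumed 85% -> 87% PSU efficiency bump.
draw_120v_w = 1500.0                 # assumed wall draw at 120V
eff_120, eff_240 = 0.85, 0.87        # assumed efficiencies at each voltage

dc_load_w = draw_120v_w * eff_120    # DC power the rig actually needs
draw_240v_w = dc_load_w / eff_240    # wall draw for the same load at 240V

saved_kwh_month = (draw_120v_w - draw_240v_w) * 24 * 30 / 1000.0
print("~%.0f kWh saved per month" % saved_kwh_month)
```

A couple of dozen kWh a month per big rig: not life-changing, but free money if you already have a 240V circuit available.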
|
|
|
Yeah, the CPUs will use more electricity than you'd ever earn in BTC to cover the cost. If electricity were free, along with cooling, then sure... why not.
|
|
|
|