sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
January 05, 2016, 06:58:58 AM |
|
Don't get me wrong, the Gigabyte GTX 970 G1 were fantastic cards. I love them: power density is awesome, they are extremely stable, beautiful (I know, I'm a nerd, can't help it), but when it comes down to power efficiency, it's like 40% more than the 750 Ti (including the rig overhead!).
On AMD you probably undervolted your cards to get them to use less power. On Nvidia you change the TDP of the card. You can do it in software with the nvidia-smi tool, or modify the BIOS (voids the warranty).
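Something like this (the 120 W value is only an example, check the allowed range for your card first; on Linux the set command needs root):

nvidia-smi -q -d POWER -i 0   - shows the current, default, min and max power limit of GPU 0
nvidia-smi -pl 120 -i 0       - sets GPU 0's power limit to 120 W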
|
|
|
|
bensam1231
Legendary
Offline
Activity: 1764
Merit: 1024
|
|
January 05, 2016, 10:04:56 AM |
|
guess you forgot to read (as usual) what you are quoting... if it isn't profitable to release then it isn't released, and the fast code may be eventually used elsewhere (other coins, other algorithms) to keep the edge... hence no hurry as far as I am concerned
And you missed the part where, if you don't release anything, you're neither profiting nor helping anyone else... as you've done in the last six months. There is no 'ROI' on code. It's not the same thing as hardware. You can sell as many copies as you want and the demand for it is only reduced by the desire to want it, so if there is a competing product or they can get it for free. But since there are 'no' products available, you aren't really accomplishing anything. You can sell as many copies as you want until you sell to one jackass, and then it's worthless.
You can disprove an idiot's argument, but you can't change his opinion...
Yuh... And selling one copy is still greater than 0. You also assume that no one else would purchase his miner after it becomes 'free'. SP's miner is technically free and there are plenty of people that purchase it for whatever they want (donations). Ever heard of the Humble Bundle and video game pirating? This is all just an excuse so you don't need to sell to people you don't like, though.
Same statement can be said about some of you tards.
ROI on developed software: developing takes time, and time is money (I could do something else and get paid for it). So yes, there is "ROI". I surely don't expect to ROI on my open-source software, and am very far from it. Different story with private works.
Entirely correct... But after initial development it's about selling it for as much as possible, either through bulk sales or a large number of small sales. It doesn't cost more to sell it to more people.
|
I buy private Nvidia miners. Send information and/or inquiries to my PM box.
|
|
|
tbearhere
Legendary
Offline
Activity: 3206
Merit: 1003
|
|
January 05, 2016, 10:49:23 AM Last edit: January 05, 2016, 11:02:36 AM by tbearhere |
|
Off topic, I know... but with Cryptsy being down, does anyone know the best exchange to cut a check in the U.S.A.? I need to pay the electric bill. Thx.
EDIT: Correction ... with Cryptsy having temporary slow-withdrawal problems.
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
January 05, 2016, 10:53:44 AM |
|
Cryptsy is not down, but here are some other markets you can try.

Bitcoin Markets - USD Main Markets
#    Source              Pair     Volume (24h)      Price      Volume (%)   Updated
1    OkCoin Intl.        BTC/USD  $ 6,198,600       $ 430.91   16.73 %      Recently
2    Bitfinex            BTC/USD  $ 5,951,300       $ 431.60   16.06 %      Recently
3    Coinbase Exchange   BTC/USD  $ 2,794,330       $ 433.25   7.54 %       Recently
4    BTC-E               BTC/USD  $ 2,560,510       $ 430.92   6.91 %       Recently
5    Bitstamp            BTC/USD  $ 2,432,920       $ 432.01   6.57 %       Recently
6    Kraken              BTC/EUR  $ 2,103,850       $ 434.37   5.68 %       Recently
7    BTCBOX              BTC/JPY  $ 1,747,110       $ 434.11   4.72 %       Recently
8    Gatecoin            BTC/HKD  $ 1,398,870       $ 432.06   3.78 %       Recently
9    Gatecoin            BTC/USD  $ 1,370,330       $ 431.73   3.70 %       Recently
10   LakeBTC             BTC/USD  $ 1,266,490       $ 430.51   3.42 %       Recently
11   Gatecoin            BTC/EUR  $ 1,123,760       $ 433.69   3.03 %       Recently
12   BIT-X               BTC/GBP  $ 987,884         $ 433.54   2.67 %       Recently
13   Coincheck           BTC/JPY  $ 711,021         $ 431.38   1.92 %       Recently
14   Bitcoin Indonesia   BTC/IDR  $ 429,616         $ 426.47   1.16 %       Recently
15   CEX.IO              BTC/USD  $ 296,234         $ 435.88   0.80 %       Recently
16   HitBTC              BTC/USD  $ 260,046         $ 438.94   0.70 %       Recently
17   Bitonic             BTC/EUR  $ 218,885         $ 433.45   0.59 %       Recently
18   Livecoin            BTC/USD  $ 216,673         $ 430.60   0.58 %       Recently
19   BTC38               BTC/CNY  $ 213,477         $ 433.86   0.58 %       Recently
20   BX Thailand         BTC/THB  $ 180,543         $ 433.04   0.49 %       Recently
21   HitBTC              BTC/EUR  $ 165,721         $ 449.90   0.45 %       Recently
22   Exmo                BTC/RUB  $ 145,887         $ 426.36   0.39 %       Recently
23   BitBay              BTC/PLN  $ 142,513         $ 430.86   0.38 %       Recently
24   Loyalbit            BTC/USD  $ 133,455         $ 431.00   0.36 %       Recently
25   Justcoin            BTC/USD  $ 132,383         $ 437.25   0.36 %       Recently
26   The Rock Trading    BTC/EUR  $ 126,516         $ 440.35   0.34 %       Recently
27   BTC-E               BTC/RUR  $ 94,018          $ 414.46   0.25 %       Recently
28   BitcoinToYou        BTC/BRL  $ 93,332          $ 440.48   0.25 %       Recently
29   Bitex.la            BTC/USD  $ 76,149          $ 432.79   0.21 %       Recently
30   Bittylicious        BTC/GBP  $ 71,973          $ 469.23   0.19 %       Recently
31   Livecoin            BTC/EUR  $ 67,971          $ 435.32   0.18 %       Recently
32   BTCGreece           BTC/EUR  $ 67,203          $ 435.65   0.18 %       Recently
33   Livecoin            BTC/RUR  $ 56,077          $ 419.53   0.15 %       Recently
34   Exmo                BTC/USD  $ 50,714          $ 432.00   0.14 %       Recently
35   CoinMate            BTC/EUR  $ 43,359          $ 434.40   0.12 %       Recently
36   Exmo                BTC/EUR  $ 23,826          $ 435.89   0.06 %       Recently
37   247exchange         BTC/USD  $ 23,373          $ 435.34   0.06 %       Recently
38   CleverCoin          BTC/EUR  $ 18,806          $ 435.99   0.05 %       Recently
39   Kraken              BTC/USD  $ 15,809          $ 429.97   0.04 %       Recently
40   LiteBit.eu          BTC/EUR  $ 15,515          $ 434.11   0.04 %       Recently
41   CEX.IO              BTC/EUR  $ 13,291          $ 439.74   0.04 %       Recently
42   BTER                BTC/CNY  $ 11,145          $ 431.60   0.03 %       Recently
43   The Rock Trading    BTC/USD  $ 10,978          $ 430.94   0.03 %       Recently
44   MonetaGo            BTC/JPY  $ 10,974          $ 237.37   0.03 %       Recently
45   MonetaGo            BTC/MUR  $ 10,888          $ 430.43   0.03 %       Recently
46   CAVirtex            BTC/CAD  $ 9,088           $ 445.79   0.02 %       Recently
47   Negocie Coins       BTC/BRL  $ 7,281           $ 446.95   0.02 %       Recently
48   OKCoin.cn           BTC/CNY  $ 537,827,000 *   $ 434.50   0.00 %       Recently
49   Huobi               BTC/CNY  $ 485,809,000 *   $ 434.45   0.00 %       Recently
50   BtcTrade            BTC/CNY  $ 61,215,000 *    $ 434.95   0.00 %       Recently
51   BTCChina            BTC/CNY  $ 18,269,400 *    $ 434.26   0.00 %       Recently
52   BTC100              BTC/CNY  $ 17,240,500 *    $ 430.44   0.00 %       Recently
53   Quoine              BTC/JPY  $ 1,308,870 *     $ 430.39   0.00 %       Recently
54   itBit               BTC/USD  $ 692,035 *       $ 431.69   0.00 %       Recently
55   Yunbi               BTC/CNY  $ 430,134 *       $ 434.88   0.00 %       Recently
56   Quoine              BTC/USD  $ 383,421 *       $ 430.00   0.00 %       Recently
57   Quoine              BTC/IDR  $ 325,984 *       $ 434.37   0.00 %       Recently
58   Zaif                BTC/JPY  $ 239,986 *       $ 432.54   0.00 %       Recently
59   itBit               BTC/EUR  $ 230,572 *       $ 434.99   0.00 %       Recently
60   Quoine              BTC/AUD  $ 71,755 *        $ 432.67   0.00 %       Recently
61   Quoine              BTC/INR  $ 13,136 *        $ 432.74   0.00 %       Recently
62   Quoine              BTC/SGD  $ 13,024 *        $ 431.58   0.00 %       Recently
|
|
|
|
tbearhere
Legendary
Offline
Activity: 3206
Merit: 1003
|
|
January 05, 2016, 10:58:28 AM |
|
Thx sp, Bitfinex looks good.
|
|
|
|
theotherme
Member
Offline
Activity: 81
Merit: 10
|
|
January 05, 2016, 11:00:30 AM |
|
guess you forgot to read (as usual) what you are quoting... if it isn't profitable to release then it isn't released, and the fast code may be eventually used elsewhere (other coins, other algorithms) to keep the edge... hence no hurry as far as I am concerned
... Same statement can be said about some of you tards.
It's spelled "devs", D E V S, while "tards" is rather spelled "bensam123", which can also be spelled "troll". But thanks anyway.
PS: and you wonder why we don't want to sell to you
|
|
|
|
bathrobehero
Legendary
Offline
Activity: 2002
Merit: 1051
ICO? Not even once.
|
|
January 05, 2016, 11:01:02 AM |
|
Another reason is temperature: the GTX 970 works at a very high temperature (75°C with 25°C ambient, even with the three coolers, insane!). I don't like that; as an electronics tech, I know heat kills rigs badly! It's only a matter of time!
Don't get me wrong, the Gigabyte GTX 970 G1 were fantastic cards. I love them: power density is awesome, they are extremely stable, beautiful (I know, I'm a nerd, can't help it), but when it comes down to power efficiency, it's like 40% more than the 750 Ti (including the rig overhead!).
There's nowhere near a 40% difference in efficiency between a 750 Ti and a 970 because they are both Maxwell cards. The only way you'd get such a huge difference is if you use a memory bandwidth/latency-bound algo (old lyra2), but for most algos a 970 is about 2.7-3.1x faster than a 750 Ti, which reflects their power consumption.
I have GV-N970WF3OC 970 cards, which are virtually the same as the G1 ones, and they never ever go above 60°C with an 8 cm gap between them. The single-fan 970 minis, on the other hand, would go 75°C+ if I let them, but I use a 70°C temp target on them. I've been planning for a long time to measure the hashrate, power consumption and temperature for all relevant algos, for all the different types of cards I have, with different power targets; I think I'll get to it later this week and share the results.
Also, mining kills hard drives like crazy, unless you use SSDs, which are expensive here.
Not unless you use terrible HDDs to begin with, like the WD Green series, which keep parking the head after 8 seconds of being idle and are only rated for 300,000 of these load cycles. 60 GB SSDs are not expensive anymore (and hold their value better), and I'm still below 1 TB of total writes on them after a year of constant use running several wallets.
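The logging part will probably be something like this rough, untested sketch (field names as listed by nvidia-smi's --help-query-gpu; file name and interval are just placeholders):

# sample power draw, temperature and clocks of every GPU every 10 seconds while the miner runs
while true; do
    nvidia-smi --query-gpu=timestamp,index,power.draw,temperature.gpu,clocks.sm --format=csv,noheader >> gpu_log.csv
    sleep 10
done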
|
Not your keys, not your coins!
|
|
|
tbearhere
Legendary
Offline
Activity: 3206
Merit: 1003
|
|
January 05, 2016, 11:04:12 AM |
|
It spells "devs" D E V S while tards is rather spelled "bensam123" which can also be spelled "troll" but thanks anyway
|
|
|
|
thefix
Legendary
Offline
Activity: 1049
Merit: 1001
|
|
January 05, 2016, 03:21:31 PM |
|
60 GB SSDs are not expensive anymore (and hold their value better), and I'm still below 1 TB of total writes on them after a year of constant use running several wallets.
You can also use the free version of RAMDisk to keep some of the reads/writes off of your SSD if you want to optimize things even more: http://www.radeonramdisk.com/software_downloads.php
|
|
|
|
bathrobehero
Legendary
Offline
Activity: 2002
Merit: 1051
ICO? Not even once.
|
|
January 05, 2016, 03:58:32 PM |
|
You can also use the free version of RAMDisk to keep some of the reads/writes off of your SSD if you want to optimize things even more.
True, but then I'd have to buy more than 4 GB of RAM per rig. I'm not worried though; SSDs are not that fragile anymore. The OS SSD in my main rig (Samsung 840 Pro) is still only at 12% wear (0 reallocated sectors) after 15 TB of writes and 666 days of uptime, with all kinds of caching and indexing enabled for maximum speed. It will be obsolete way before it dies on me due to wear.
|
Not your keys, not your coins!
|
|
|
thefix
Legendary
Offline
Activity: 1049
Merit: 1001
|
|
January 05, 2016, 04:20:52 PM |
|
The OS SSD in my main rig (Samsung 840 Pro) is still only at 12% wear (0 reallocated sectors) after 15 TB of writes and 666 days of uptime ... It will be obsolete way before it dies on me due to wear.
With a quality SSD like that, it makes plenty of sense to run things the way you are. I tend to use less expensive ($25-$30) 60 GB drives with 8 GB of RAM in my systems, cloned to a backup thumb drive with Clonezilla. I have had systems running for over a year with no issues, but a better-quality SSD might be a better choice in my future. I have yet to experiment with a ramdisk-type solution on my Linux rig.
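For what it's worth, this is the kind of thing I'd try first, as a rough, untested sketch (the mount point, size, pool URL and wallet are just placeholders):

# create a small RAM-backed filesystem and point the miner's log output at it
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=256M tmpfs /mnt/ramdisk
# run ccminer with its log written to RAM instead of the SSD
./ccminer -a lyra2v2 -o stratum+tcp://pool.example:4444 -u WALLET -p x >> /mnt/ramdisk/ccminer.log 2>&1
# to make the mount permanent, an /etc/fstab entry like this should do it:
# tmpfs  /mnt/ramdisk  tmpfs  size=256M  0  0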
|
|
|
|
induktor
|
|
January 05, 2016, 08:02:29 PM |
|
On AMD you probably undervolted your cards to get them to use less power. On Nvidia you change the TDP of the card. You can do it in software with the nvidia-smi tool, or modify the BIOS (voids the warranty).
Yes, I modded the BIOS myself: 0.8 V, stable at 860 MHz core, 1250 MHz mem (Sapphire 7950 Dual-X OC).
Are you saying that I can run under Linux, before loading ccminer, an nvidia-smi command that will change the TDP of every card in real time??? WOW!!! I thought the only way was modifying the BIOS! Do you have any link / tutorial or hint on how to? I will get right on it!
I decided not to modify the BIOS because if I fuck something up, there is no backup. On the AMD 7950 there is a dual-BIOS switch: if you fuck up your card (which I did more than 100 times when researching this), just move the switch to position 2, power cycle, move the switch back to 1, reflash, power cycle and good to go! But Nvidia has no switch! GPUs in my country are EXTREMELY COSTLY, and there is no such thing as a warranty here; if you fuck it up, it's gone, one card to the bin basket.
My speeds:
Gigabyte 750 Ti (dual fan, 6-pin connector, model GV-N75TOC-2GI), I got 16 of those:
4300 kH/s in Lyra2v2
6000 kH/s in Quark
EVGA 750 Ti ACX FTW (only two of these, and it seems to be the real deal, but I found out after purchasing the other 16):
4900 kH/s in Lyra2v2
6800 kH/s in Quark
I tested all your releases from 67 up to 78; the fastest one is 74 by FAR, especially in Lyra2v2.
|
BTC addr: 1vTGnFgaM2WJjswwmbj6N2AQBWcHfimSc
|
|
|
induktor
|
|
January 05, 2016, 08:21:38 PM |
|
bathrobehero, what is your regular ambient temperature? I am in a very hot area; that could explain why my cards were working at 75°C and yours at 60. I had 12 cm spacing between them and a 3000 RPM fan between every card to prevent one card from heating up the other one. Still, in Quark especially, it was impossible to get the temperature below 70°C; in Lyra2v2, yes, they were usually at 60, probably, I don't remember exactly.
True, WD Greens die fast! For offline storage they are OK, but not for much else, even if you modify the parking time (I usually do, to 30 or 60 seconds). I always use Black or Red drives, which have a better warranty and last longer too (one of the few companies that offers a true warranty in my country is WD, and it is excellent!). Still, mining kills a lot of hard drives: until I switched to USB flash, I killed about 10 or 11 WD Black, Red and Seagate drives beyond repair. Keep in mind that it is the mix of 24/7 operation, a LOT of power outages (here power is extremely unstable, we have short brownouts at least 10 times per day) and at least one power outage a week, one out of four of which may last for days (the reason why I spend a ridiculous amount of money on inverter generators, smart UPSes, solar panels, power regulators, inverters; I even have a low-voltage line across the apartment for the essentials, battery powered). Add to all that high temperatures and high humidity and you get... HELL!! hehe
About the SSD: a mining OS does not write a lot; just moving the logs and temps to a ramdisk is enough, and a crappy Kingston V300 will last at least a couple of years easily. Of course someone mentioned that the 840 Pro still works; of course, it is one of the best consumer SSDs on the market! I have one in my workstation and it is fantastic and very fast! But at almost twice the cost of the Kingston, it is worth it for your workstation but not for a miner, IMHO. I had an excellent experience booting off a USB flash drive; almost all of my stability and disk problems went away since I boot off USB flash, so I will keep doing it hehe. (I know I am a stubborn sob)
|
BTC addr: 1vTGnFgaM2WJjswwmbj6N2AQBWcHfimSc
|
|
|
bathrobehero
Legendary
Offline
Activity: 2002
Merit: 1051
ICO? Not even once.
|
|
January 05, 2016, 09:17:22 PM |
|
I decided not to modify the BIOS because if I fuck something up, there is no backup ... but Nvidia has no switch!
You can recover basically any screwed-up BIOS in DOS with nvflash; you just need to plug your monitor into another GPU (onboard). You can even shut down your computer midway through flashing a card and still be able to reflash it on the next boot (rough sketch at the end of this post).
bathrobehero, what is your regular ambient temperature? I am in a very hot area; that could explain why my cards were working at 75°C and yours at 60 ...
Sorry, I meant to write 70°C, not 60, as the max temp for the 970 OC cards (I have a 60°C temp limit on the 750 Tis). The ambient temp right now in the room where the rigs are is 21°C and the hottest 970 OC card is 62°C (80-85% fan). If I remove the temp target on the minis, they climb to 74-77°C. I have no idea about the humidity.
Power here also isn't the best, but it's nowhere near as bad as yours, so I guess that's what's killing your hard drives. The two WD Greens I have and used to mine Burst for over a year (stopped about 6 months ago) have no issues so far and are at 800 load cycles (5-minute parking time). I don't store anything important on them, though.
Yes, the 840 Pro is great, but mostly because it's fast; it still uses TLC NAND, while the crappy V300s, which I have in all the mining rigs, have MLC NAND, which should be way more durable. I always had terrible experiences with USB drives, though; maybe I just used really crappy ones.
Just to be slightly on topic: ccMiner SPMOD Release CUDA 6.5 vs CUDA 7.5 Performance Comparison
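From memory, the recovery flow from DOS or a boot stick looks roughly like this (untested sketch; exact flag names vary between nvflash versions, so check nvflash --help first):

# back up the card's current BIOS before touching anything
nvflash --index=0 --save backup.rom
# flash a modified BIOS (-6 is commonly used to override subsystem ID mismatch warnings)
nvflash --index=0 -6 modded.rom
# if a flash goes bad, boot with video on another GPU (onboard) and reflash the saved copy
nvflash --index=0 backup.rom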
|
Not your keys, not your coins!
|
|
|
dominuspro
|
|
January 05, 2016, 09:20:43 PM |
|
Are you saying that I can run under Linux, before loading ccminer, an nvidia-smi command that will change the TDP of every card in real time??? WOW!!! I thought the only way was modifying the BIOS! Do you have any link / tutorial or hint on how to? I will get right on it!
I just tried it in Windows and it works. I'm pretty sure it should also work in Linux, but I cannot test it because I don't have a ready Linux install anymore. Setting the core and mem clocks (+ changing to P0) works exactly the same way on Linux and Windows. Tested on 960, 970 and 980. These are the commands:

nvidia-smi -q -d POWER -i 0   - shows the actual setting and the possible setting limits
nvidia-smi -pl 180 -i 0       - sets the power limit of GPU 0 to 180 W

The successful result:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Windows\system32>cd /

C:\>cd "Program Files"

C:\Program Files>cd "NVIDIA Corporation"

C:\Program Files\NVIDIA Corporation>cd NVSMI

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -q -d POWER -i 0

==============NVSMI LOG==============

Timestamp                           : Tue Jan 05 22:05:19 2016
Driver Version                      : 359.06

Attached GPUs                       : 5
GPU 0000:01:00.0
    Power Readings
        Power Management            : Supported
        Power Draw                  : 13.98 W
        Power Limit                 : 196.15 W
        Default Power Limit         : 163.46 W
        Enforced Power Limit        : 196.15 W
        Min Power Limit             : 100.00 W
        Max Power Limit             : 196.15 W
    Power Samples
        Duration                    : N/A
        Number of Samples           : N/A
        Max                         : N/A
        Min                         : N/A
        Avg                         : N/A

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -pl 180 -i 0
Power limit for GPU 0000:01:00.0 was set to 180.00 W from 196.15 W.
All done.

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -q -d POWER -i 0

==============NVSMI LOG==============

Timestamp                           : Tue Jan 05 22:05:40 2016
Driver Version                      : 359.06

Attached GPUs                       : 5
GPU 0000:01:00.0
    Power Readings
        Power Management            : Supported
        Power Draw                  : 13.74 W
        Power Limit                 : 180.00 W
        Default Power Limit         : 163.46 W
        Enforced Power Limit        : 180.00 W
        Min Power Limit             : 100.00 W
        Max Power Limit             : 196.15 W
    Power Samples
        Duration                    : N/A
        Number of Samples           : N/A
        Max                         : N/A
        Min                         : N/A
        Avg                         : N/A

C:\Program Files\NVIDIA Corporation\NVSMI>
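If it really works the same way on Linux, the whole thing could go into a startup script, something like this (untested since I don't have a Linux install; GPU indices, the 180 W value, pool and wallet are placeholders, and changing the power limit usually needs root):

# set persistence mode and a power limit on every card, then start the miner
for i in 0 1 2 3 4; do
    sudo nvidia-smi -pm 1 -i $i     # persistence mode: keep the driver initialized between runs
    sudo nvidia-smi -pl 180 -i $i   # cap the card at 180 W (must be within its min/max power limits)
done
./ccminer -a lyra2v2 -o stratum+tcp://pool.example:4444 -u WALLET -p x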
|
|
|
|
tbearhere
Legendary
Offline
Activity: 3206
Merit: 1003
|
|
January 05, 2016, 10:59:54 PM Last edit: January 05, 2016, 11:11:32 PM by tbearhere |
|
Wow, thx. Did anyone try this on a 980 Ti?
PS: Cryptsy is off the market now.
|
|
|
|
chrysophylax
Legendary
Offline
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---
|
|
January 05, 2016, 11:15:41 PM |
|
Wow, thx. Did anyone try this on a 980 Ti?
almost finished my mate's machine - having issues with the last card and the nvidia drivers accepting it - but i can test this on his system the way it is while i still have it ... will do that later this afternoon / tonight if i can get the chance - unless someone beats me to it ... which is more likely the case ... #crysx
|
|
|
|
tbearhere
Legendary
Offline
Activity: 3206
Merit: 1003
|
|
January 05, 2016, 11:29:32 PM |
|
crysx Are you talking about the 6 980ti's on one rig?
|
|
|
|
chrysophylax
Legendary
Offline
Activity: 2912
Merit: 1091
--- ChainWorks Industries ---
|
|
January 05, 2016, 11:38:00 PM |
|
crysx Are you talking about the 6 980ti's on one rig?
yup ... so far only 5 x gigabyte 980ti g1 / extreme will work at any one time - and the 6th kills the drivers for some reason ... the system is 4 x gigabyte 980ti g1 + 2 x gigabyte 980ti extreme ... it doesn't matter which card you remove from the collection - it will only work with 5 cards - and as soon as the 6th card goes in, the driver refuses to load ... #crysx
|
|
|
|
joblo
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
January 06, 2016, 12:30:11 AM |
|
I'm pretty sure it should also work in Linux, but I cannot test it because I don't have a ready Linux install anymore.
It works on Linux. You might also want to set persistence mode:
nvidia-smi -pm 1 -i 0
|
|
|
|
|