That's what I thought too after playing with Nvidias. In fact, I was running these AMDs for a long time at 1600-1700 mem, thinking it automatically gave me more hash.
Nvidia with a 256-bit memory bus and 3.5-4GB is memory limited, AMD with 384-bit and 3GB suffers from the DAG file size, and AMD with 512-bit and 8GB is begging for a core overclock.
Maxwell cores are typically overclocked/boosted from the factory, so there may not be as much room left anyway compared to AMD's, but the power needed to get there is still the differentiator. I don't see 390's with less than 8GB, or for less than $300+ each... but at $12+/ETH, ROI is short for anything these days.
|
|
|
http://www.businessinsider.com/what-is-blockchain-2016-3
The big elephants are aware of, and are talking about/testing, blockchains, smart contracts, PoS, etc. for eliminating their back offices. One can only speculate whether any existing digital currency will be part of that future; they likely want to, and will, create their own chains. It only goes to validate the usefulness of Ethereum over other digital currencies, in that these mechanisms have already been thought out and planned into it from the beginning. Those who don't run a bank or brokerage may want access to the same capabilities, and Ethereum appears to be the only game in town that matches. FWIW.
|
|
|
It is better to buy the R9 390. You can get more hash per rig, so the rig overhead is reduced for the same total hash.
You mean the R9 380(X)? I don't see any 390's for less than $300, probably because they're all 8GB, and 4GB 380's are $190 and up. (Not looking at the used market, of course...)
|
|
|
Can you speculate why the 390X is not faster than the 390 at the same frequency? It has more cores, 2816 vs. 2560.
I have no idea; a 10% difference should be easy to see. Anyone with both a 390 and a 390X? Memory speed may have as much or more to do with it, as well as bandwidth, since the DAG is hashed purely in GPU memory. Don't know about AMD, but GTX responds much better to maxing the memory clock than to any bump in core clock.
|
|
|
Hello,
I know you've probably come across this before, but I'm having an odd issue. I have multiple computers, each running several 7970's. I use Windows Remote Desktop to swap between the different rigs. What is weird is that if I start ethminer within Remote Desktop, it only sees one card; I then have to reboot the computer, hook a monitor up to it, and load ethminer from that console. Have any of you seen this or had it happen? There seems to be an odd glitch when I try to run ethminer through Remote Desktop. All computers are running Win 10 Pro.
Best regards, d57heinz
If you're using batch scripts, try launching them as administrator (paths to the exe's have to be full). Or try launching cmd as admin, cd to the directory, then run the batch file. I do the same thing over Remote Desktop and don't have the problem you're seeing (however, I'm running Nvidia cards)...
|
|
|
ETH mining started last summer.
Cudaminer by cbuchner1 started somewhere in early 2014.
No shit, thanks for the history we already knew. I was talking about my dialog on the subject they were referring to, which seemed to get everyone in AMD-land upset for some reason. Genoil's CUDA ethminer fork only started working well enough a few months ago, unless you know better.
|
|
|
I don't know why you guys are comparing power when clearly ETH is so profitable that power shouldn't be an issue currently.
You are right, it isn't an issue anymore. But if someone suddenly finds a way to cut power costs 30-40% without losing performance on an 18-month-old platform... Any idea when this started? I first posted here on January 30 and mentioned it (bad, bad me). ETH was about $2.25 then, so profitability with GTX was much better than with AMD when electric cost was a consideration; many people pay much higher rates, which made it unprofitable with AMD. So there.
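For anyone wanting to sanity-check that kind of claim, here is a minimal back-of-the-envelope sketch; every number in it (the per-MH yield, the ETH price, the electric rate, the wattages) is a hypothetical placeholder, not a figure from this thread.

```python
# Minimal profitability sketch: revenue from hashing minus electricity cost.
# All parameter values below are hypothetical placeholders.

def daily_profit_usd(hashrate_mh, wall_watts, eth_per_mh_day, eth_price_usd, usd_per_kwh):
    revenue = hashrate_mh * eth_per_mh_day * eth_price_usd
    electricity = (wall_watts / 1000.0) * 24 * usd_per_kwh
    return revenue - electricity

# Two rigs with the same hash rate but different wall draw: the power gap only
# matters when the electric rate and coin price make electricity a real cost.
print(daily_profit_usd(40, 330, 0.03, 2.25, 0.15))  # efficient rig
print(daily_profit_usd(40, 600, 0.03, 2.25, 0.15))  # power-hungry rig
```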
|
|
|
2 x EVGA, TDP limited, take maybe 280W, and the rest of the rig the last 50W.
Completely ignoring PSU (in)efficiency... lol. That power has to be accounted for somewhere, as I've done in somewhat crude, horseshoes-close fashion; you just can't throw all of that blame onto the GPUs. Find a mythical 99.9% efficient PSU and then there would be no wiggle room in your assertion.
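To spell out the point being made: the wall reading always includes the PSU's conversion loss. A minimal sketch, assuming a simple constant-efficiency PSU model and using the 280W + 50W DC load quoted above (real efficiency varies with load, so these efficiencies are illustrative only):

```python
# Wall draw implied by a given DC load at different PSU efficiencies.
def wall_watts(dc_load_watts, efficiency):
    return dc_load_watts / efficiency

dc_load = 280 + 50  # quoted GPU draw plus the rest of the rig
for eff in (0.80, 0.90, 0.999):
    print(f"{eff:.1%} efficient PSU -> {wall_watts(dc_load, eff):.0f} W at the wall")
```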
|
|
|
Copy-paste from the Nvidia link you posted earlier:
Note: The below specifications represent this GPU as incorporated into NVIDIA's reference graphics card design. Clock specifications apply while gaming with medium to full GPU utilization. Graphics card specifications may vary by Add-in-card manufacturer. Please refer to the Add-in-card manufacturers' website for actual shipping specifications.
You don't have reference design card.
You have a card made by what Nvidia calls add-in-card manufacturer.
Your card does not have the TDP of the reference design card. In your case the card has a TDP of 170W because EVGA chose to put that kind of BIOS on your card. EVGA also chose to let you boost your TDP 10% to 187W. EVGA also chose that your card takes 75W from PCIe power connector 1, 75W from connector 2, and the rest from the PCIe slot.
So somehow you're saying the power being measured by the software is disconnected from the stated TDP (145W), even though it's represented as %TDP and requires that number to calculate against. And that the percentage would not change if the TDP were somehow set higher, like the 187W you want. Or, somehow the TDP on these 970 cards can miraculously go to 128% of reference TDP, something no 970 I've ever seen can do. 980's can barely come close to that, because they also have a significantly different power-handling architecture, which I'm sure you also know. You still have to deal with the observed power at the wall and make the numbers fit without going over; I've done the best I can to do just that. If you want to add more to the GPU load, you'll just have to take the same amount away from system power, and there isn't much there to play with if you want to make any sense.
EDIT: Here ya go, the miraculous gamebox that gets 42MH/s on a mere 330W; see if you can make the numbers work to your liking:
ASRock Extreme6
Intel i5 4690K Devil's Canyon (oc'd in BIOS to 4GHz) w/ CoolerMaster D92
8GB GSkill DDR3 2400
WD Black 1TB SATA
Corsair CX850M 80 Bronze
2x EVGA 3975-KR SSC ACX2.0+ (SLI bridge connected/enabled)
P0 state for mining, 3800 memclock, +50 coreclock
Fractal case w/ 3 fans
What card do you have? Maybe you have a reference card that is very limited by default, and with the TDP limitation it is even more limited, because otherwise 90W is not possible with the hashrate you are getting.
Like I said, an EVGA SSC ACX2.0+ 3975-KR; the numbers still add up, in either the stock-clock or the o/c situation. I'm begging anyone to come up with the numbers they believe are right that match my observed loads at the wall, which I've also confirmed with another watt meter just to make sure.
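To make the %TDP argument concrete, here is a tiny sketch of how the monitoring readout maps to watts against the 145W reference figure cited in this exchange; the 187W line just shows what fraction of reference that limit would imply.

```python
# The %TDP readout is a fraction of the card's reference TDP, so implied card
# power is simply that fraction times 145 W (the GTX 970 reference figure).
REFERENCE_TDP_W = 145

def implied_watts(pct_tdp):
    return pct_tdp / 100 * REFERENCE_TDP_W

print(round(implied_watts(75)))        # ~109 W, one card in the o/c case
print(round(implied_watts(80)))        # ~116 W, the other card
print(round(187 / REFERENCE_TDP_W, 2)) # ~1.29: a 187 W cap would be ~129% of reference
```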
|
|
|
Much of the original point I tried to make has evaporated in light of ETH > .015 BTC (.017? .019!?! .02!!!); electric costs should be a non-consideration for most. That said:
But that extract fan power consumption is due to the mining as well, so we have to take that into account.
100% GPU TDP on a 970 = 145W. Do the math, break it down, here are the numbers:

Overclocked, maintaining <70C core temp (high fan profile):
gpu1 (incl. fans on card): 108W (75% TDP)
gpu2 (incl. fans on card): 115W (80% TDP)
4 case and 2 cpu fans, hdd, mobo/cpu: 50W (case fans ramped to 60%)
Total so far: 273W
PSU loss: 55W (80% efficiency; 0.2 * 273 = 55W)
273 + 55 = 328W, or basically what I've monitored on the UPS
42MH/s, 7.8W/MH (total system), 5.3W/MH (gpus only)

Stock, maintaining 75C core temps, stock fan profile, both @ 65% TDP:
gpu1: 90W
gpu2: 90W
case fans + hdd + mobo + cpu: 30W (silent pc mode)
Total so far: 210W
PSU loss: 42W
Total system: 264W
36MH/s, 7.0W/MH (total system), 5.2W/MH (gpus only)

Something I'm going to try is disabling Windows Aero. Even though it's a headless system and it's not doing anything, some have mentioned it stealing GPU power all the time regardless of use, therefore decreasing hash. I've seen weirder things, especially on Winblows. God, I've got to roll Mint or something. I've seen the "my 280x will do xxMH on yyyW with a zzzV undervolt" posts before, and I still have no clear idea, because so many things play a factor in pushing the limit. I've seen people today who own multiple rigs, running mixes of 7950 and 280x, undervolted/clocked/tweaked/whatever, drawing over 3KW to get 350MH/s (9W/MH!), and they're happy with that. To each their own; ETH's at .02 BTC, electric ain't much of a matter no mo. I think I'm done with this particular thread.
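For anyone who wants to rerun that accounting, here is a small sketch that just plugs the post's numbers back in, using the same crude 20%-of-DC-load PSU-loss convention; the outputs land close to, though not exactly on, the post's rounded figures.

```python
# Rig-level power accounting, reusing the figures from the post above.
# PSU loss follows the post's crude convention of 20% of the DC load.

def rig_summary(gpu_watts, rest_of_rig_w, hashrate_mh, psu_loss_frac=0.20):
    dc_load = sum(gpu_watts) + rest_of_rig_w
    wall = dc_load * (1 + psu_loss_frac)
    # (wall watts, W/MH whole system, W/MH GPUs only)
    return round(wall), round(wall / hashrate_mh, 1), round(sum(gpu_watts) / hashrate_mh, 1)

print(rig_summary([108, 115], 50, 42))  # overclocked case -> (328, 7.8, 5.3)
print(rig_summary([90, 90], 30, 36))    # stock case       -> (252, 7.0, 5.0)
```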
|
|
|
The idle power is 25W, the full load power is 320W, so simplistically, we think the mining uses about 295W, or 147W each card.
Er, no. The system minus the GPUs is not at 25W when mining; the PSU is operating at a much higher load, there are always inefficiencies (most PSUs are only 80-85% efficient), and I can't monitor the PSU input/output by itself, only calculate it. Also, there are case fans ramped up adding to system load; each draws at least 5-10W running at 60% or so. Add it all up and the non-GPU-related draw is much higher; I put it at about 60W.
I think the efficiency is similar; the difference is very small. The 7970 is a 3-4 year old card.
7970: 20MH / 140W = 0.142. 970: 20.75 / 147 = 0.141.
Bad math; it's 20.75/115 = 0.18, but I'd rather do it the other way (W/MH). We're only looking at card power here, but you're including system power in your 970 figure and don't appear to be doing the same for your 7970... Let's keep our numbers in perspective. I'm not advocating buying GTX cards over AMD, for obvious reasons; I'm just saying that a state-of-the-art gaming PC can mine pretty well and efficiently.
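To make the apples-to-apples point explicit, here is a tiny sketch of the two conventions (MH/W and W/MH) using the numbers quoted in this exchange: 147W is the per-card share of system power, while 115W is the card-only figure from the %TDP monitoring.

```python
# Efficiency both ways, with the card-only vs. system-derived wattage split out.
def mh_per_watt(mh, watts):
    return mh / watts

def watts_per_mh(mh, watts):
    return watts / mh

print(round(mh_per_watt(20.75, 147), 3))   # ~0.141 MH/W using system-derived watts
print(round(mh_per_watt(20.75, 115), 3))   # ~0.180 MH/W using card-only watts
print(round(watts_per_mh(20.75, 115), 1))  # ~5.5 W/MH, card-only
```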
|
|
|
Checked the UPS readout against some known power draws (different wattage light bulbs, etc.) and it checks out just fine, so that's not the question. System idle readout is about 20-25W, all case/GPU fans low or off. There are 10 fans total: 3 case, 2 CPU, one in the PSU, and the four on the pair of GPUs. I can't assume the GPU fan power is included in the %TDP figure shown by monitoring, which is important because I'm running them at ~60% to keep things under 70C. Nevertheless, with mem clock @ 3800 (P0 state +300), core +50 (base boost = 1440, +50 = 1490, though both cards do not run at this rate for whatever reason; I've seen some explanations), and GPU fans set to 60% @ 65C, the %TDP is 75% for one card and 80% for the other = 112W avg, so let's call it 225W for the pair. Load reported by the UPS shows 310-320W. If the fans are drawing additional power above that (not being included in %TDP as assumed), then all the case fans, the rest of the system, and PSU efficiency (80%, Corsair 850W) would likely make up the difference of 90W or so. Reported hash rate is 41.5MH/s (it fluctuates between 40 and 43, as is typical). I'm not about to go further in diagnosing this; it's not that important to me and I don't have the equipment/software to pick apart every component, so there are some assumptions here. I'd still like someone to explain how 41.5MH/s @ 320W total is "impossible" with this system.
|
|
|
No offense, mate! I must be wrong about your power consumption. Some 970's have a 225W or 250W BIOS from the factory. Nvflash and Maxwell BIOS Tweaker are your friends if you really want to know what your card is eating. Just flash a no-limit BIOS and post your wattage after that.
There is no point in going so far as to mess with the card's BIOS. I'm not a professional gaming overclocker, but we're not playing video games here, where losing a few pixels doesn't matter; submitting good data is all that matters. Accelerating Ethereum mining has less to do with pushing the architecture balls-to-the-wall, sweating high wattage/temps for nothing, producing bad nonces and getting no submits. These cards run mem at 3700MHz+ with no errors, keep temps in the mid 60's (read: longevity), and produce 21MH on very low wattage.
|
|
|
I cannot believe that his 970 consumes only 90W; there is no way this is correct...
It is absolutely correct. Fact is, I cannot believe you have a 970 that can consume more than 145W, the max TDP of any GTX 970 model, regardless of brand. Again: http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-970/specifications
I think you're wrong, because my 970 power is 200W with OC; it's impossible to be 115W or even 90W, lol, that's too low. Something is surely wrong on your end.
What brand/model of 970 are you running that shows it using 200W? Or are you quoting your total system: card/mobo/etc.? Perhaps tonight, if I get a chance, I'll post some pictures/screenshots, since those are the only way some kids can believe anything. Then we'll see what's impossible or not. Or you could go over to the Ethereum forums and ping anyone else who runs Nvidia kit about their power profiles.
|
|
|
It is possible to break the 200W barrier with a 970, but you need an 8+6 or 8+8 pin model for that. 6+6 pin and 8-pin models are limited lower. The Gigabyte G1 is one example of a card that is normally never power limited.
I still think bluebox has a slightly overoptimistic UPS. It is possible to gain nice power savings without losing too much performance with these cards, but those numbers are too low. I might be wrong, though.
Bluebox, try a mem speed of 4000MHz or even higher; you should be really close to 22MH.
970's of any brand aren't spec'd to pull more than 145W max (nor are mine), likely even with clocks and fans maxed, so I don't see how you could "break 200W": http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-970/specifications
The number of pins/connectors used doesn't change the amount of power the card requires in operation; it only changes how power is drawn from the PSU. I don't have an "overly optimistic UPS" (APC Back-UPS 1500); it reads watts, volts, and amperage on a continuous basis better than a Kill-A-Watt meter. Sheesh, I'm an electrical engineer and operate a 1700-core HPC cluster in my day job... Like I said, the mobo, PSU, drives, etc. draw about 70W during operation, and the cards can each be monitored for %TDP in software (Afterburner, Precision X, etc.). The numbers add up to what the UPS is reading out; I have no doubt it's correct, particularly when modifying clocks and observing the power changes. 4000MHz is too unstable; the miner reports bad nonce values from the GPU. Backing down to 3700 solved all issues.
|
|
|
I was looking into buying a GTX 970; is the 90W figure for power draw something you actually measured (e.g. with a Kill-A-Watt), or just the stated power draw from the manufacturer?
Earlier in this thread I mentioned using a UPS that has a wattage readout. You can get individual card power draw with software like Afterburner, Precision X, etc. Using Afterburner I o/c'd the pair of 970's to ~1440MHz GPU, ~3700MHz mem; wattage is ~115W each, GPU temps 65-70C, total system at the UPS is 330W, getting 43MH/s. These are EVGA 3975-KR SSC ACX2.0+ cards. At factory o/c settings they run 90W each @ 1440 GPU / 3000 mem, total system @ 270W, and get 38MH/s. Not sure the extra o/c is worth it...
|
|
|
Well, I need to update my numbers. I just applied the nvidia-smi tweak mentioned at http://cryptomining-blog.com/7341-how-to-squeeze-some-extra-performance-mining-ethereum-on-nvidia/ and the 970's are doing 21MH/s each (EVGA 3975-KR SSC's). That puts the watt/MH number at 4.2, besting an undervolted 280X handily while matching its hash rate. Now if they just weren't more expensive than old Radeons, there'd be nothing left to complain about... But I suppose at .015 BTC/ETH no one's worrying about power these days. The dual 970's I have average 36+ MH/s at 90W/card on stock clocks, generating ~1.2 ETH/day. Quite frankly, total system power should be the mark to judge by; mine takes 260W (gamer's i5 Devil's Canyon board). By the numbers above:
R9 290X: 180W/31MH = 5.8W/MH (or 140/28 = 5W/MH undervolted, if correct, but what's the point?)
GTX 970: 90/18 = 5W/MH at stock clocks on a factory super-clocked board
I'll say it again: watt-per-hash, the AMD and GTX cards are pretty much equal now thanks to the devs' work on CUDA mining (Genoil!). Most GTX 970's are still retailing for what they did over a year ago; used ones are getting 80% of retail, give or take a few. If you can get 80% of current retail for a 290X later this year, good on ya; to each their own.
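A quick sketch of that watt-per-hash comparison, using only the card wattages and hash rates quoted in the post above (the post's 4.2 figure for the tweaked 970 differs slightly from 90/21 only because of rounding):

```python
# Watts-per-megahash for the cards compared in the post above.
cards = {
    "R9 290X (stock)":            (180, 31),  # (card watts, MH/s)
    "R9 290X (undervolted)":      (140, 28),
    "GTX 970 (stock clocks)":     (90, 18),
    "GTX 970 (nvidia-smi tweak)": (90, 21),
}

for name, (watts, mh) in cards.items():
    print(f"{name}: {watts / mh:.1f} W/MH")
```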
|
|
|
What is the point of refining the target as the structure progresses? Surely that means your charts are meaningless.
But you don't seem to understand, charts inherently and constantly evolve their "predictions", therefore they — and the chartologists who create them — are always right. If I thought it was worth the time, I'd look up those chart predictions from a couple months ago to see how they predicted today's — or even last month's — price rise. You might wonder, how did the rest of us predict it without charts? By understanding its value and just mining the damned thing.
|
|
|
Ether will be dumped as soon as possible.
As long as it's making higher lows as time goes by (as it has), who gives a crap? It seems apparent that there is a rising price floor under this thing for whatever reason, and it also seems a great deal of the flow on Kraken and Poloniex is BTC->ETH.
|
|
|
|