The 8-pin on the GPU and the 8-pin PCIe power cable are different... and the GTX 970 only needs 2x 6-pin for power, but this one has 1x 6-pin and 1x 8-pin. Which layout is correct? I don't know.
Don't confuse the 2x4-pin CPU power connector with the GPU 8-pin. The PSU has a combined 6+2 cable; just use the 6. The CPU cable is a 4+4 and needs to be connected to the motherboard.
|
|
|
WHERE IS MY PAYMENT ...........
I haven't received a payment since before the system went down Monday. The coins are all exchanged; they just need to be paid out.
You mean the exchange?
I mean the coins I was mining before the crash have all been exchanged for BTC and show up in my balance. HP just needs to fix their autopayout now.
Ok, I see, thx.
EDIT: I think I know who owns HP... not sure, but if I'm right he won't respond for a week or two. And he has 35 rigs going, so he won't care. If he's mining at HP with them, maybe he'll notice his missing payouts.
|
|
|
I ran across this. What is wrong with this picture? It seems wrong. Look at the 8-pin. My 970 is still on the shelf... it may be the PSU. I give up, you tell me.
|
|
|
The trend is for future GPUs to consume less, so they will end up with only one 8-pin, and then a 6-pin; it's only a matter of time.
In a few generations you will need zero 6- or 8-pin connectors for a mainstream GPU, though not for high end cards like the 980 Ti or 390X.
People are greedy; they want high performance cards, and those will use more power. Maybe the limit is around 400W. Standardizing 8+8-pin connectors would give 375W, but I doubt that will happen. Nvidia will never release a GTX 990 even though they could probably stay within the 300W limit. A 1080 Ti or 1090, or even a 1090 Ti, probably wouldn't require 8+8 if they ever come to pass. The trend has been toward lower power, even at the top end, for several years.
|
|
|
If you have more than 6 cards, you need powered risers. A 750 Ti draws 50-60W depending on what you mine, so a 16-card rig will only draw around 1000W. The BIOS might need to be cracked to support 16 cards, and more than 8 cards in Windows is a pain. Has anyone tried 5x 7990 in a single rig (10 GPUs)? You probably need Linux, expensive PCIe splitters, etc.
Powered risers are useless if you have an ASRock H81 with two Molex; I always used standard ribbon risers and they work fine for every card out there, and they also cost less than powered risers.
I have to disagree. Although the mobo can supply the power, cheap risers sometimes can't. At least with powered risers you're not trying to push 75W through a ribbon wire. It also reduces the overall power going through the mobo. The cost difference is trivial, no more than .05 BTC with risers on all 6 cards. If using two PSUs (not my thing) I believe you need powered risers to isolate them.
Average wattage per card today, especially with new Nvidia, is very low: they consume 150W on average. As I said, they will use the 8-pin if not enough current comes from the x1 slot; I don't think you need to provide 75W from the slot, lol. Future GPUs at 10nm or less will consume so little that not even a 6- or 8-pin will be required, only the riser.
High end graphics cards will still need 2x 8-pin to draw a total of 375W, as they are more powerful cards.
To my knowledge 2x 8-pin is not standard and, IMO, won't be needed, as power draw is dropping even as performance increases. Top end GPUs will be engineered to require no more than 300W (slot+6+8).
For the AMD cards, the Sapphire 280X uses 2x 8-pin, the 7990 uses 2x 8-pin, and the R9 Fury also uses 2x 8-pin.
The trend is for future GPUs to consume less, so they will end up with only one 8-pin, and then a 6-pin; it's only a matter of time. In a few generations you will need zero 6- or 8-pin connectors for a mainstream GPU, though not for high end cards like the 980 Ti or 390X.
Agree mostly, but I think top end GPUs will continue to push performance within the power ceiling until one GPU can do 4K@120 or more. But the days of triple-slot and dual-GPU cards that can heat a small house are over. The 750 Ti set a new standard with no extra power connector. The 730, although Kepler, is also a trend setter as a half-length, half-height card.
The R9 Nano is another card that packs a lot of performance into a small, energy-efficient package. Pascal will set a new standard. AMD, well, I hope they can at least keep up, to keep Nvidia honest.
|
|
|
However - the 6-card issue is ONLY with the 980 Ti cards... it seems even the AMD 280X OC / 7970 cards (one and the same) never had that issue: 6 cards and no problem...
I'm sure it's a BIOS issue, as I've seen 8 GPUs in a single system run by an expensive Supermicro motherboard designed (with a large, capable BIOS) for such things in HPC servers...
But I will research your suggestion, as it could also be an issue with the way the 6th 980 Ti card is allocated in the BIOS/motherboard...
tanx ...
#crysx
I like puzzles like this. Have you had 6 cards working in the same rig as the one currently failing with 6x 980 Ti? A rig that previously worked would have the correct BIOS settings already. I'll jump ahead assuming you have already proven your HW and SW and know where I'm leading: it may be the number and model of card that is the problem. 980 Tis have 6 GB of memory, so that is 36 GB to support. Just speculating; I don't know if adding RAM or swap would help. If you're not yet fully invested in the 980 Ti you may want to wait a week for the 1080.
I believe this is the 980 Ti problem ( V 6th card ). (But that's for the miner. And if the OS does not see the 6th card, a jumped riser might help.) Any time I cross a power-of-2 boundary I get scared. 5 cards keeps it within 32 GiB.
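The memory speculation above is just arithmetic, and can be sketched as a quick check. This is purely illustrative (the function names are invented, and nothing here reflects any real BIOS or driver behavior); it only shows why 6 cards cross the 32 GB power-of-2 boundary while 5 do not:

```python
# Illustrative sketch of the VRAM math in the post above.
# Card counts and the 6 GB per-card size come from the thread;
# the 32 GB boundary is the power-of-2 limit the poster is wary of.

def total_vram_gb(num_cards, gb_per_card):
    """Combined video memory the platform must address, in GB."""
    return num_cards * gb_per_card

def crosses_32(num_cards, gb_per_card):
    """True if combined VRAM exceeds a 32 GB power-of-2 boundary."""
    return total_vram_gb(num_cards, gb_per_card) > 32

print(total_vram_gb(6, 6), crosses_32(6, 6))  # 36 True  (6x 980 Ti)
print(total_vram_gb(5, 6), crosses_32(5, 6))  # 30 False (5 cards stays under)
```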
|
|
|
In another thread there is discussion about getting 6x 980 Ti working. It seems only 5 work. Has anyone gotten a 6x 980 Ti rig working, or is it too big? The speculation is that it's the amount of memory across all cards: 6 x 6 GB = 36 GB.
|
|
|
Thanks for the links, I'll read up.
|
|
|
Out of desperation I did "upgrade" to Win 10 to ensure that Win 7 wasn't the issue with the 6 GPUs, and Win 10 did not fix the issue I was seeing with the Nvidia cards. I can say that I have this exact same mobo, CPU, memory, SSD, PSU combo working fine with 6 AMD GPUs in a different rig, so I know this combo is capable of running 6 GPUs. So I am not sure what the real issue is... I thought it might be driver issues. The other rig I spoke of is running Linux, so there is a different OS in the mix.
When I discovered one of the GPUs wasn't being seen by the Nvidia software I pulled it out of the mix, and I could finally get the system to boot reliably and behave normally, but when running ccminer it would still fail and crash saying it cannot validate to CPU.
So if you're hoping that Win 10 is the magic bullet like I was, I wouldn't hold my breath. The only reason I went this route with Windows on this rig was the intention to buy SP_'s private release. Otherwise I would have stuck with Linux. So now if I run into any Windows-related issues with the AMD cards I will just reconfigure the rig with Linux and call it a day.
I hope someone is able to figure out what the issue is. Nvidia cards are more efficient than the AMD cards and I would love to have a rig that takes me in a more efficient direction for the long haul.
Windows works and the drivers work, so it's time to move on. Start from the beginning and try to get one card stable when mining. If you can't get any card working with different combinations of slots and risers, you have a bad mobo or PSU. If you can get one card working, try a second card and repeat the process with two cards. Make note of which combinations of slots, GPUs, and risers are known to work. If you run into problems along the way, swap the suspect card and riser with a known-good combo from another slot, then observe whether the problem changes in any way. Is the known-good card/riser working in its new slot? Is the suspect card/riser still not working in the slot you know is good? Keep doing that until you've isolated a suspect card/riser combo, a suspect mobo slot, or just the number of cards. If it's the number of cards, it could be the mobo or PSU; even if it's rated high enough it may not be working properly. If you've isolated it to a PCIe slot, it's definitely the mobo, and it's up to you what you want to do about it. If you've isolated it to a GPU/riser combo, then swap the riser with another card to isolate the problem. And you're done.
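The swap-and-isolate procedure above can be sketched as a small simulation. Everything here is invented for illustration: `works()` stands in for an actual boot-and-mine test, and the hidden faulty riser is just a stand-in for whatever is really broken:

```python
# Simulation of the isolation procedure from the post above.
# A real run would replace works() with physically testing the rig.

FAULTY_RISER = 3  # hidden fault, for the simulation only

def works(slot, card, riser):
    """Simulated test: the rig mines stably unless the faulty riser is used."""
    return riser != FAULTY_RISER

def isolate(slots, cards, risers):
    """Swap suspect parts against a known-good combo until the fault is pinned down."""
    # Step 1: find one known-good slot/card/riser combination.
    good = None
    for s, c, r in zip(slots, cards, risers):
        if works(s, c, r):
            good = (s, c, r)
            break
    if good is None:
        return "mobo or PSU"  # nothing works at all
    gs, gc, gr = good
    # Step 2: test each suspect part against the known-good combo.
    for s in slots:
        if not works(s, gc, gr):
            return f"slot {s}"
    for c in cards:
        if not works(gs, c, gr):
            return f"card {c}"
    for r in risers:
        if not works(gs, gc, r):
            return f"riser {r}"
    return "number of cards (mobo or PSU limit)"

print(isolate([0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]))  # riser 3
```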
|
|
|
If you have more than 6 cards, you need powered risers. A 750 Ti draws 50-60W depending on what you mine, so a 16-card rig will only draw around 1000W. The BIOS might need to be cracked to support 16 cards, and more than 8 cards in Windows is a pain. Has anyone tried 5x 7990 in a single rig (10 GPUs)? You probably need Linux, expensive PCIe splitters, etc.
Powered risers are useless if you have an ASRock H81 with two Molex; I always used standard ribbon risers and they work fine for every card out there, and they also cost less than powered risers.
Bad advice @ "if you have more than 6 cards, you need powered risers". Always use a powered riser on any card that takes more than what a x1 PCIe slot/ribbon (75W) plus its pins provide. Manufacturers expect these cards to be used in full x16 slots (150W) and as such place the appropriate x4/x6/x8 pins on them. That extra 75W has to be made up elsewhere, and if you're using anything more powerful than an R9 270 / GTX 950, there's a good chance that extra power is gonna come through the pins / PCIe slot. Expect fire. And sparks. Lots of sparks. Powered risers are usually not even more expensive than unpowered risers. Be safe: this is your investment at stake here. An unrelated FYI: the one thing you do want to make sure of is that you do not overload any of your 5V lines if you hook your risers up to them (Molex / SATA connector, etc.).
Agree completely with powered risers, but I think you're a little confused about power from the bus. The bus power is supplied by the part of the PCIe slot that is common regardless of the number of lanes; a x16 connector can't carry any more power than x1. What is different with x1 slots is that they are intended only for 25W (x16 is 75W). Unless the mobo is designed for high power to the x1 slots (e.g. H81), you're asking for trouble without powered risers in those slots. Either way the maximum power from the slot connector is 75W. If the GPU needs more, it will have 6- or 8-pin auxiliary connectors to supply an additional 75W or 150W respectively. If your PSU doesn't have enough connectors you can combine 2 Molex or 2 SATA power connectors into one 6-pin 75W connector for the GPU. If your GPU takes an 8-pin or 2x 6-pin, you will need 4 Molex or 4 SATA power connectors.
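The budget laid out in that last reply (slot 75W, each 6-pin 75W, each 8-pin 150W) can be written down as a quick calculator. This is just a sketch of the arithmetic from the thread; the function name is invented and it's not any real tool:

```python
# In-spec power budget per the figures quoted in the thread:
# slot edge connector 75W, 6-pin aux 75W, 8-pin aux 150W.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def max_board_power(six_pins=0, eight_pins=0, slot_watts=SLOT_W):
    """Maximum in-spec draw for a card with the given connector layout."""
    return slot_watts + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(max_board_power())                           # bare slot:      75 W
print(max_board_power(six_pins=1))                 # slot + 6:      150 W
print(max_board_power(six_pins=1, eight_pins=1))   # slot + 6 + 8:  300 W
print(max_board_power(eight_pins=2))               # slot + 8 + 8:  375 W
```

This reproduces the numbers the posters trade back and forth: 300W for the slot+6+8 "top end" layout and 375W for the 2x 8-pin cards like the 7990.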
|
|
|
I'm looking for the developer of the original hodlminer who designed the caching optimization.
hodlminer performance on an i5 is not very good due to the smaller cache size. hodlminer, both the original and the Wolf0 version, is optimized for the i7 cache. I've tried changing the slice size but only get rejects. I would like to communicate with the original author to try to tweak the slice size for better performance with the i5 cache, but I don't know who that is. Anyone?
I do, but there's no point, simply because I HAVE changed the slice size - you're still fucked. The whole thing needs to be run through AES-256-CBC - obviously, if you can fit the whole slice in cache, this gets a lot better.
Thanks. By fucked do you mean the smaller slice broke it, or did it just not improve it on an i5?
I wasn't... er... targeting an i5 when I did it, let's say. My point is, you *technically* can make it smaller, but all you'd be doing is shuffling the shit around - it's still ALL gonna be needed for the AES transformation. And that ENTIRE transformation must be finished before you can do the next iteration of the loop. Because of this, it's always gonna be better if it fits in some kind of faster storage.
Agreed, but the performance difference between i5 and i7 is significantly greater than the difference in cache size. Wouldn't a slice size tuned for an i5 perform better on an i5 than a slice size tuned for an i7?
You're not getting it - no matter how you play with it, the same size shit has to be loaded. It's not gonna change a thing - the i5 is likely already packing its cache as best it can.
So you're saying the same applies to the i7, and setting the slice size to match the cache makes no difference. I get that now. From looking at the code it appeared some cache tweaking was being performed to speed things up. Things like preloading the cache can help if you have other things to do while waiting for the cache to load; I've done that kind of stuff in the past and assumed something similar was being done with hodlminer. It would have been interesting to see it done from C/C++ with the x86 instruction set, neither of which is where my relevant experience lies.
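Wolf0's point can be sketched in a few lines: however you slice the data, the AES pass still touches the whole dataset, so slicing only changes whether each piece fits in cache, not the total work. All sizes below are illustrative (the real slice and dataset sizes are whatever hodlminer is built with), and the functions are invented for the sketch:

```python
# Illustrative sketch: slicing does not shrink the total AES workload.
MIB = 1024 * 1024

def total_aes_bytes(dataset_bytes, slice_bytes):
    """Bytes the AES-256-CBC pass must touch: the whole dataset, however sliced."""
    slices = -(-dataset_bytes // slice_bytes)  # ceiling division
    return slices * slice_bytes                # >= dataset_bytes either way

def slice_fits_l3(slice_bytes, l3_bytes):
    """Whether a single slice can live entirely in last-level cache."""
    return slice_bytes <= l3_bytes

dataset = 1024 * MIB  # illustrative dataset size
# Halving the slice size changes nothing about the total work...
print(total_aes_bytes(dataset, 8 * MIB) == total_aes_bytes(dataset, 4 * MIB))  # True
# ...it only affects whether one slice fits in, say, a 6 MiB (i5-class) L3.
print(slice_fits_l3(4 * MIB, 6 * MIB))  # True
```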
|
|
|
Found another block last night. Solo mining. Two quad core CPUs (with hyperthreading).
Was that mining with the wallet or cpuminer-hmq1725? If wallet mining works with cpuminer-hmq1725 it should also work with cpuminer-opt, or I can get it to work.
|
|
|
hello community,
newbie questions regarding mining:
(running: Windows 10, Intel core i5-4460s @ 2.90GHz, 8 GB ram)
1. got cpuminer-multi-rel1.1
2. changed config:
   {
     "_comment1" : "Any long-format command line argument",
     "_comment2" : "may be used in this JSON configuration file",
     "api-bind" : "127.0.0.1:22447",
     "url" : "127.0.0.1:22440",
     "user" : "UserIsSameAsEspersConfig",
     "pass" : "PassIsSameAsEspersConfig",
     "algo" : "quark",
     "threads" : 0,
     "cpu-priority" : 0,
     "cpu-affinity" : -1,
     "benchmark" : false,
     "debug" : false,
     "protocol" : false,
     "quiet" : false
   }
3. cpuminer-multi-rel1.1 runs without errors and shows something like:
   quark block 178668, diff 91.06
   CPU #3: 95.81 kH/s
   CPU #2: 93.75 kH/s
   CPU #0: 91.83 kH/s
   CPU #1: 91.87 kH/s
Q1: Is cpuminer-multi-rel1.1 really mining, or just running without errors? I got zero blocks running it all night.
Q2: Is kH/s in the miner the same as kHash/s in the Espers wallet?
Q3: I get about 45 kHash/s in the wallet. Does cpuminer-multi-rel1.1 mine ~8x as fast? Am I understanding this correctly?
Q4: Do you mine with the Espers wallet at the same time as running the miner? Or does that produce conflicts, with the result that neither is mining?
any help would be appreciated
greetz
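The ~8x figure in Q3 checks out from the miner output quoted above; the per-thread rates sum to roughly 373 kH/s against the wallet's ~45 kHash/s. A quick sketch of that arithmetic:

```python
# Per-thread rates from the cpuminer output quoted in the post,
# compared against the ~45 kHash/s the wallet reports.
thread_rates_khs = [95.81, 93.75, 91.83, 91.87]
wallet_khs = 45.0

miner_total = sum(thread_rates_khs)
print(round(miner_total, 2))               # 373.26 kH/s total
print(round(miner_total / wallet_khs, 1))  # 8.3, i.e. ~8x the wallet
```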
Hi. First off, the algo is based on quark but it's hmq1725, so replace quark with hmq1725 in lowercase. And solo mining is getting harder because the difficulty is rising. Maybe you should enable debug as a test drive to see if everything is running ok. I was running it from a batch file; this is the command line I used: cpuminer-amd -a hmq1725 -o 127.0.0.1:22440 -u username -p password -D
You can't use both miners (wallet + miner). Only use cpuminer-multi-rel1.1 since it's faster. Good luck.
@eye-drop (or anyone else that can/wants to help): first, thx for taking the time to help. I replaced quark with hmq1725, and nothing; cpuminer-multi-rel1.1 doesn't even start. greetz
You need a miner with hmq1725 support; cpuminer-multi doesn't have it yet. There are two miners to my knowledge. Suprnova created cpuminer-hmq1725, Windows binaries available here: https://www.suprnova.cc/downloads/cpuminer-hmq1725.zip Then there's cpuminer-opt, which runs on Linux or a Linux VM and is 57% faster than cpuminer-hmq1725, link in my sig.
First, of course, thx for your time and help. Not knowing/running Linux (I did install Linux in a VM once, but that's about it), I downloaded cpuminer-hmq1725. Now what? Clicking cpuminer.exe: it doesn't start. greetz
Click? What's that?
|
|
|
Then there's cpuminer-opt, which runs on Linux or a Linux VM and is 57% faster than cpuminer-hmq1725, link in my sig.
Sorry, I haven't set up your miner yet. I downloaded Debian 8.4 last night and am going to set up a VM with it and go from there. It may take me a couple of days or so because of "real life" stuff.
Thanks, and no problem.
|
|
|
|