Lyra2v2 has increased from 0.7 GH/s to 19.88 GH/s today at NiceHash. Mona effect?
The biggest Mona pool has 18 GH/s of Lyra2REv2 hashrate...
|
|
|
Just google it. You'll find dozens of tutorials.
|
|
|
I knew that, but I prefer Guano Apes style ))) "Lord of the Boards" comes to mind, which I find amazing!
|
|
|
-scryptr, perhaps give it a test like this: rpcallowip=192.168.*.* in the wallet's config file; it might help you. This worked for me across four different routers I have inline. All my main stuff is out in the shop "out back of the house", but I have also mined from 3 or 4 laptops inside the house while the rigs mined in the shop. I have 3 wireless routers plus 1 main router plus a cable modem.
It's worth noting that some coin wallets don't allow the use of asterisks with rpcallowip in the config file and refuse to start with it.
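For reference, a minimal sketch of the relevant daemon .conf entries (the subnet, user, and password shown are placeholders; adjust to your LAN). Wallets that reject the wildcard form usually accept CIDR notation covering the same range:

```ini
# Older wallets: wildcard form
rpcallowip=192.168.*.*

# Wallets that refuse asterisks: equivalent CIDR form
rpcallowip=192.168.0.0/16

# RPC credentials are still required for a remote miner to connect
server=1
rpcuser=miner
rpcpassword=change_me
```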
|
|
|
Come on guys, maxcoin is nothing but a fucking joke for a long while now. In fact, I'm shocked how some twitter hype and dozens of empty promises on this forum kept the price of this coin as high as it is. Nothing ever happens with this coin development-wise.
In all honesty, it would take so little effort to skyrocket the price, but the management of this coin (if there's any left at this point) must be completely and utterly incompetent.
|
|
|
Mazafaka ICO. Lol, this 9M ICO / 1M PoW split. Nice.
|
|
|
You've made a fake Ethercoin, all kinds of ShitcoinDARKs, Shitcoinlites and ShitcoinDOGEs, even combining shitcoins like DogeShitcoinDARK. You've made shitcoins in every color: Whitecoin, Orangecoin, Purplecoin.
What's the hold-up here? You know some exchange will add it, you know people will mine it, and it might even pump.
You can even call yourself Gavin Andreesen and do it for the lulz.
Why would you make a shitcoin based on something that's already shit?
|
|
|
From left to right they are labeled PCIE 6, 5, 4, 3, 2, 1, but that's not the same order your miner uses.
In software like MSI Afterburner or EVGA Precision the order (from left to right) is 6 5 4 3 1 2, so the x16 slot is the first.
But in (NVIDIA) mining software the -d device order is completely different. For AMD I have no idea.
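One way to see which physical slot maps to which -d index (NVIDIA only; requires nvidia-smi): CUDA normally enumerates devices "fastest first", but it can be told to use PCI bus order, which usually matches the slot labels. A sketch, assuming the miner uses standard CUDA device enumeration:

```shell
# Show the PCI bus ID for each GPU as the driver sees it
nvidia-smi --query-gpu=index,pci.bus_id,name --format=csv

# Force CUDA apps (e.g. ccminer) to enumerate devices in PCI bus order,
# so -d indices line up with the physical slots
CUDA_DEVICE_ORDER=PCI_BUS_ID ccminer -d 0 ...
```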
|
|
|
So memory bandwidth is the limiting factor here? Mining uses the P2 state for memory (3000 MHz), so would overclocking the memory / forcing the P0 state gain a lot?
On the other hand, I have tried with a GTX 980 Ti (>+50% bandwidth) and I see only a very small gain in ethminer's CUDA path...
As far as I know memory access (latency) is the limiting factor, not bandwidth. Unfortunately, tweaking memory timings is difficult though. "Time to submit a share" doesn't depend on diff, nor does time to compute. Higher diff just means "less likely to find a share, but with more value", so statistically speaking there is no disadvantage in higher diff.
I know what I missed now and yeah, it shouldn't be a problem on modern pools.
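If you want to verify which P-state your card actually sits in while mining, nvidia-smi can show it (assumes an NVIDIA driver; the application-clock command is a sketch only, since many consumer GeForce boards reject it):

```shell
# Current performance state and clocks while the miner runs
nvidia-smi --query-gpu=index,pstate,clocks.mem,clocks.gr --format=csv

# On cards that allow it, pin application clocks (memory,graphics in MHz);
# most consumer GeForce cards will refuse this and report it as unsupported
nvidia-smi -ac 3505,1455
```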
|
|
|
(I wouldn't go for EVGA either because they are probably more expensive and you're mostly buying the brand... probably the case with Corsair too tbh, and they probably just rebrand Seasonic PSUs.) Yeah, actually go for Seasonic, they build their own PSUs... Generally I'd agree with that statement, but the PSU I mentioned is off the charts regarding performance per dollar: https://www.techpowerup.com/reviews/EVGA/SuperNOVA_G2_1300/9.html I have terrible experience with lower-capacity (<850W) EVGA PSUs though.
|
|
|
And with more and more optimizations for different algos, the power consumption will also increase, so I personally like to use actual power consumption figures rather than the advertised specs.
Then the EVGA wouldn't be enough either at 1300W. We'd have to go down to 5 GPUs, or up to 1500W PSUs, which are significantly less cost-efficient. I live in Central EU and there are 6 retailers selling the EVGA PSU in my country and only one of them selling the Cooler Master V1200, so I have to assume there must be some webshops in France too selling the G2 for a reasonable price (260€).
The main online retailers (LDLC, materiel.net) simply don't list EVGA PSUs at all. After a bit of digging, I found a smaller one that's not too shady and sells it for 230€. Looks great! It even lists a Super Flower Leadex 1600W 80+ Gold at 290€... Now, the specs say 2 cables are 2*8-pin and 4 cables are 1*8-pin, so you would still need adapters... Yeah, I forgot about that. The G2 1300W has 6 x 8-pin and 2 x 6-pin, so that's a limiting factor, but I have plenty of spare PSU cables lying around, mostly from the 750 Ti rigs. I could run 5 Gigabyte GTX 970s, which each require one 8-pin + one 6-pin. I wouldn't do that though; I personally use 4 x 970 and 2 x 750 Ti, which comes down to about 900-1080W at the wall depending on the algo, a healthy 69-83% load. Beyond that I think it's risky to run 24/7, and the efficiency also drops off.
|
|
|
Most 970s have a 250W TDP, so 6 of them (1500W) is way too much. Also, 6-pin power connectors can output 75W and 6+2-pin connectors can (should) output 150W, so having 12 of them is somewhat pointless since you can't utilize all of them (1800W).

I use EVGA SuperNOVA G2 1300W PSUs and I think they are better than what I have read about the Cooler Master V1200. The G2 1300W is cheaper, has 3 years more warranty (10 instead of 7) and has lower ripple on the 12V rail. It's "just" Gold, not Platinum, but the difference in efficiency under a standard mining load (70-80%) is negligible:
https://www.techpowerup.com/reviews/EVGA/SuperNOVA_G2_1300/6.html
https://www.techpowerup.com/reviews/CoolerMaster/V1200/6.html
Apparently they are louder than the V1200, but I never heard any of them even under heavy load, or at least they are much quieter than other components like the GPUs. Look into it and decide for yourself.

TDP is not real-world consumption: my Gigabyte GTX 970 G1 Gaming (insane overclock out of the box: 1430 MHz) pulls 180W from the wall mining Quark, 145W mining Lyra2v2. If you really wanted to stay on the safe side, you could always replace 1-2 cards with 950s...

The EVGA looks nice too. But EVGA is a very US-centric company: it seems impossible to find in France. The only one I could find was on amazon.fr for 438€, so that's a no-go for me... Any alternatives?

While TDP does not mean real-world consumption, it does correlate with it pretty well; with a high OC (1520 MHz) a 970 Windforce OC model pulls 197W mining Quark, but try mining something like Groestl, which will pull 220W even at stock speeds. With synthetic stress tests you can get even closer to the TDP figure (source). And with more and more optimizations for different algos the power consumption will also increase, so I personally like to use actual power consumption figures rather than the advertised specs.
I live in Central EU and there are 6 retailers selling the EVGA PSU in my country and only one of them selling the Cooler Master V1200, so I have to assume there must be some webshops in France too selling the G2 for a reasonable price (260€).
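The connector and headroom arithmetic above can be sketched quickly (numbers taken from the posts: 75W per 6-pin, 150W per 8-pin, 75W from the PCIe slot, ~180W measured wall draw per 970 mining Quark; the CPU/motherboard overhead figure is an assumption):

```python
def connector_capacity(n_6pin: int, n_8pin: int, pcie_slot_w: int = 75) -> int:
    """Max wattage deliverable to one GPU: slot power plus its power cables."""
    return pcie_slot_w + n_6pin * 75 + n_8pin * 150

def psu_load(n_gpus: int, watts_per_gpu: int, overhead_w: int = 100) -> int:
    """Total system draw: GPUs plus an assumed CPU/motherboard overhead."""
    return n_gpus * watts_per_gpu + overhead_w

# A GTX 970 with one 6-pin and one 8-pin can draw up to 300W on paper...
cap = connector_capacity(1, 1)       # 75 + 75 + 150 = 300
# ...but measured wall draw mining Quark was ~180W per card:
load = psu_load(6, 180)              # 1180W for six cards plus overhead
utilization = load / 1300            # ~0.91 on a 1300W PSU -- very tight
```

Which is why the posts settle on 4-5 cards per 1300W unit rather than 6: running near 90%+ load around the clock leaves no headroom for algo-dependent spikes.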
|
|
|
I would advise setting up each rig as its own NiceHash worker, and preferably using fixed share difficulty, adjusted to about 1 share per minute on each rig (needs experimentation until the best value is found).
Why would you want such a low share rate? With 1 share/minute you're going to have tons of blocks to which you won't contribute any shares in time. Especially with low-block-target coins like Sharkcoin with its 20-second target: you will only contribute one huge share for every ~3rd block, and for the rest, your work just gets locally discarded because there's a new block the miner has to start working on from scratch. You're probably still going to get paid as if you had submitted shares for every block (depending on the pool's settings), but the discarded work has to decrease the pool's overall efficiency. If that's not the case, what am I missing?

As someone else corrected already, it is just the starting diff, so the only use on NiceHash is to avoid the slow difficulty ramp-up when you first connect. Personally, I tend to aim for highish share difficulty also on pools that allow a fixed/permanent setting, as over time I've built the impression that pools do not credit shares linearly with share diff. Which makes sense, really: a miner sending a gazillion low-diff shares takes up a lot more pool resources than a miner with identical hashrate sending just a few high-diff shares. In any case, I'm not sure it matters much.

Talking about fixed diff only (as vardiff has other issues), and not about spamming the pool with tiny shares (>1 share/sec), the goal should be to avoid working on shares that take too long to submit before a new block appears on the network, because in that case the work is discarded and the miner starts the next block from scratch. I think it matters a lot, and with a slow share rate you're going to see a very spiky hashrate graph, probably lower payouts, and worse overall pool efficiency. Again, unless I'm missing something.
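The "one share for every ~3rd block" figure checks out with a quick back-of-the-envelope model (hypothetical numbers from the post: 60 s mean share interval, 20 s block target). If share finds behave as a Poisson process, the fraction of blocks receiving at least one share is 1 − e^(−20/60) ≈ 28%, i.e. roughly every third or fourth block:

```python
import math
import random

def fraction_of_blocks_with_a_share(mean_share_s: float, block_s: float,
                                    n_blocks: int = 100_000,
                                    seed: int = 1) -> float:
    """Simulate exponential inter-share times over many fixed-length block
    windows and count what fraction of blocks receive at least one share."""
    rng = random.Random(seed)
    horizon = n_blocks * block_s
    t = 0.0
    blocks_hit = set()
    while True:
        t += rng.expovariate(1.0 / mean_share_s)  # next share find time
        if t >= horizon:
            break
        blocks_hit.add(int(t // block_s))         # which block it lands in
    return len(blocks_hit) / n_blocks

analytic = 1.0 - math.exp(-20.0 / 60.0)              # ~0.283
simulated = fraction_of_blocks_with_a_share(60.0, 20.0)
```

Both come out near 0.28, so at this diff roughly 72% of blocks get no contribution from the rig at all, which is the spiky-graph scenario described above.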
|
|
|
The unique idea is that it's a 6-week coin?
No, rapidly decreasing supply, up to 20% per week. Lower supply = higher price. Why would anyone buy it if it has no use? It's just an admitted ponzi-coin.
|
|
|
Are there more details about this alleged Ctrl+C issue? It's not clear to me what you're doing, let alone what you expect.
Ctrl+C doesn't do anything but signal some loops in the control logic to break. Which loops depends on whether it's the first or second time you've pressed it. It doesn't actually touch the GPUs at all.
I think you're either observing the intended behavior or a driver bug.
If you press Ctrl+C once, then see GPUs drop to power-saving, that's expected behavior: you've told the GPU worker threads to stop after they're done with their current work. So as the threads complete, you will see GPUs drop to low-power modes, since they have nothing to do. The app is just waiting for the rest of the threads to finish before exiting.
If you're running ccminer, killing it with Ctrl+C, then starting it again and seeing low performance until a reboot, that sounds like a driver bug to me.
Do you or sp_ (or anyone else) know the fastest and most reliable way to exit ccminer without waiting for the threads to finish and without ccminer crashing? Whatever I tried in my fork, it either hangs until all the threads have finished their work, which takes somewhere between a few seconds and a minute (probably depending on diff), or ccminer crashes, which would be fine except it sometimes crashes the driver and puts a card or two in a different P-state (405 MHz).
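Not ccminer's actual code, but the first-Ctrl+C behavior described above (set a flag, let workers finish their current work unit, then join) can be sketched generically like this (thread count and sleep interval are arbitrary stand-ins for GPU work):

```python
import signal
import threading
import time

stop_event = threading.Event()

def worker() -> None:
    # Stand-in for a GPU work loop: the stop flag is checked between
    # work units, instead of the thread being killed mid-kernel.
    while not stop_event.is_set():
        time.sleep(0.01)  # one "batch" of hashing work

def handle_sigint(signum, frame) -> None:
    # First Ctrl+C: request a graceful stop; each worker exits after
    # finishing its current iteration.
    stop_event.set()

def run_demo(n_threads: int = 4) -> bool:
    signal.signal(signal.SIGINT, handle_sigint)
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    stop_event.set()  # simulate the Ctrl+C for this demo
    for t in threads:
        t.join(timeout=2.0)
    return not any(t.is_alive() for t in threads)
```

The trade-off raised in the question remains: any harder exit (killing threads or the process outright) skips this drain step, which is exactly when drivers tend to be left in a bad state.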
|
|
|
As much as I appreciate your research on the topic you linked, I highly disagree with many things. I won't get into all the details, only that I think PoS is flawed and way overrated, and while PoW isn't perfect for sure, it's probably the best solution and it's safe to say it's here to stay. Regarding XT, it was a very hostile fork proposition and I honestly believe there was no way it could have succeeded. At the end of the day Bitcoin develops very slowly, and the people yelling about how the blocksize limit is holding the coin back from going mainstream are only a very small part of the whole Bitcoin economy. Sure, the blocksize limit should be increased in the future, but today, and probably even in the next couple of years, it's nowhere near as much of a threat as people perceive it to be. We might need to educate people to increase their transaction fees, but it's really not something to shout over.
|
|
|
That's beautiful... If only a program like MultiMiner were compatible with ccminer... Last I checked it wasn't, because ccminer doesn't have an API, so other programs can't hook into it.
Wouldn't the first batch file continually start up lots of instances of ccminer, since it's a loop without a timer? Does it check whether an instance is already running?
By default a batch file waits until the previous command has ended, or in this case until the application closes. You would need to put the start command in front of ccminer.exe to launch it without waiting for it to end, and that would probably lock your PC within seconds.
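In other words, the blocking behavior is what makes the restart loop safe. A minimal sketch of such a batch file (the algo, pool URL, and wallet are placeholders):

```bat
@echo off
rem Blocking restart loop: the batch waits for ccminer.exe to exit
rem (or crash) before looping, so only one instance ever runs.
rem Using "start ccminer.exe" here instead would return immediately
rem and spawn a new instance on every pass.
:loop
ccminer.exe -a lyra2v2 -o stratum+tcp://pool.example:4444 -u wallet -p x
goto loop
```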
|
|
|
|