I'm fine with 5%, but the reasoning for this increase is interesting. Is just-dice now 'tainted' in the eyes of those who deemed it too shady because dooglus was willing to go with just 1% fee? If so, increasing to 5% shouldn't convince them now. Maybe it'll convince those who hear about just-dice later and don't know it started at 1% (which they would have deemed shady). Of course, someone could take it as their personal quest to make everyone aware that the fee was 1% in the beginning, and was only increased to make people less likely to think it's a scam.
|
|
|
Bit of a newb question, I'm sure, but how does one get 'getinfo' output from cgminer?
Use the stats API command.
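A minimal sketch of querying that API from Python, assuming cgminer was started with `--api-listen` (the API listens on TCP port 4028 by default). The reply parsing follows cgminer's plain-text API format (records separated by '|', key=value fields separated by ','); host, port, and the sample values in the comments are assumptions for illustration:

```python
import socket

def api_command(command, host="127.0.0.1", port=4028):
    """Send a plain-text command to cgminer's API and return the raw reply.

    Assumes cgminer was started with --api-listen (default port 4028).
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode())
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    # cgminer terminates the reply with a NUL byte
    return reply.rstrip(b"\x00").decode()

def parse_reply(reply):
    """Parse cgminer's reply: records separated by '|',
    key=value fields separated by ','."""
    records = []
    for record in filter(None, reply.split("|")):
        fields = {}
        for field in record.split(","):
            key, _, value = field.partition("=")
            fields[key] = value
        records.append(fields)
    return records

# Against a live miner you would then do something like:
#   for rec in parse_reply(api_command("stats")):
#       print(rec)
```

The same port also accepts other commands such as "summary" and "devs" in the same format.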
|
|
|
Sorry if I missed it already, but can someone tell me the change in power consumption, measured at the wall, of the 5GH/s Jalapeno before and after the firmware re-flash?
In my case, ~32W to ~46W going from 6.4GH/s to 8GH/s, so a ~44% power increase for a ~25% hash rate increase. N.B. my Jalapeno is one of the first few, of course.

Is it now using a higher voltage when it's overclocked?
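A quick sanity check of those percentages, using the wall figures from the post (the efficiency numbers are just derived from them):

```python
# Power draw and efficiency before and after the firmware re-flash,
# figures as reported in the post above.
watts_before, watts_after = 32.0, 46.0
gh_before, gh_after = 6.4, 8.0

power_increase = (watts_after / watts_before - 1) * 100   # ~43.8%
hash_increase = (gh_after / gh_before - 1) * 100          # ~25%
eff_before = watts_before / gh_before                     # ~5.00 W per GH/s
eff_after = watts_after / gh_after                        # ~5.75 W per GH/s

print(f"power +{power_increase:.0f}%, hashrate +{hash_increase:.0f}%")
print(f"efficiency {eff_before:.2f} -> {eff_after:.2f} W per GH/s")
```

So the overclock costs roughly 15% in efficiency (W per GH/s) for the extra hashrate.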
|
|
|
In the OP cgminer screenshot, I can see the Jalapeno is running at 36C / 3.07V. My 5.8GH/s Jalapeno is usually around 56C / 3.8V. It has the older heatpipe heatsink, but isn't that voltage of 3.8V quite high? It does have two active chips, though, so at least it's not a single-chip version. cgminer stats output is saying "PROCESSOR 3: 201 MHz PROCESSOR 7: 194 MHz ENGINES: 30 FREQUENCY: 189 MHz"
Just wondering if my unit could take this overclocking, since the voltage is quite high already. The running voltage & temps (both before & after) would be useful information to include in any report of successful OC.
|
|
|
Yep, it's down. Curiously, the tab just loads forever; I'm not getting any error message in Firefox.
|
|
|
I copied all my *coin blockchains from Debian Linux to FreeBSD. All clients were properly shut down on Linux. None of the newly compiled clients would work on FreeBSD because of database errors, until I removed the copied blockchains and started from scratch. Should the blockchain files be transferable between different operating systems? Can anyone explain why I had to do this?
|
|
|
I'm pretty sure CoinLab's baseline for $2.50 per GH/s per day assumes no electricity costs. With current BTC price of ~$103, it's almost there, ~$2.7. After the next difficulty change it might make sense to mine at CoinLab again.
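A back-of-the-envelope sketch of where that ~$2.7 figure comes from. The ~19.3M difficulty (a value that comes up later in this thread) and the 25 BTC block reward are assumptions on my part:

```python
# Expected mining revenue for 1 GH/s, ignoring fees and electricity.
# Assumptions: difficulty ~19.3M, 25 BTC block reward, BTC at ~$103.
difficulty = 19_300_000
block_reward = 25.0
btc_price = 103.0
hashrate = 1e9            # 1 GH/s in hashes per second

# Finding a block at difficulty d takes on average d * 2^32 hashes.
btc_per_day = hashrate * 86400 * block_reward / (difficulty * 2**32)
usd_per_day = btc_per_day * btc_price
print(f"~${usd_per_day:.2f} per GH/s per day")
```

Which lands right around the $2.7 the post mentions, a touch above CoinLab's $2.50 baseline.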
|
|
|
My user ID is 2137. 2 BTC invested
|
|
|
ckolivas / kano - Can you tell me which is the better stat to measure by? Is it MHS or Work Utility?
From your readme: WU: The Work Utility defined as the number of diff1 shares work / minute (accepted or rejected).
On some units at 375 MHz overclock I get a higher work utility but lower MHS than I do at 350 MHz (and on other units I get what I feel is good scaling). I am not sure how work utility is calculated or if it's just an estimate. My theory was that if Work Utility provided a better stat, then I would target that vs MHS. I just have so much variance, so I was trying to figure it out. Any help appreciated!
Thanks!
On Avalon: WU is accepted + rejected + nonce errors. MHs is accepted + rejected. So you're better off with MHs for that particular hardware and how it measures those values.

Is it the same for BFL ASICs, or are HW errors included in their hashrate? I thought they were included in the hashrate for all devices; it's quite confusing if they are included for some but not for others.
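For comparing the two stats, it helps that WU and hashrate are directly convertible: per the README quote above, WU is difficulty-1 shares per minute, and each diff-1 share represents on average 2^32 hashes. A small sketch of the conversion:

```python
def wu_to_mhs(wu):
    """Convert Work Utility (diff-1 shares per minute) to MH/s.

    A difficulty-1 share represents on average 2^32 hashes, so
    WU shares/min = WU * 2^32 / 60 hashes per second.
    """
    return wu * 2**32 / 60 / 1e6

# e.g. a WU of 60 corresponds to 2^32 hashes/s, i.e. ~4295 MH/s
```

On hardware where WU counts HW errors but MHs does not, `wu_to_mhs(WU)` will come out higher than the displayed MHs; the gap is roughly the error rate.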
|
|
|
You know I'm on strat because we HAVE discussed this.. that is why you 'retired' some servers.
I get quite a few emails and requests for support; it's helpful if you answer questions in a civil tone with the information requested so we can move towards a solution. If you're not willing to do this, I suggest you mine elsewhere.

I have been civil, and this is a poor attempt at deflection. There is more than 20GH/s I can point at your pool, but if I have to continually adjust my costings and monitor the setup so I don't lose money (something you are being paid to manage), then I can just as easily point it at Elgius. You provide a chargeable service; perhaps it would be conducive to your pool's health if you started to act like it, or perhaps it is YOU that should not be in this business. But I will take your suggestion... if your system can hold together long enough for me to meet the minimum target for payout. To that end I have thrown 4GH/s at your pool to see how long it can go without serious issues, but the pool has already started to 'pull' downwards. Elgius shows the same allocation as >4.4GH/s.

That yellowed-out image might have meaningful numbers to you, but you can't really expect others to decipher it easily, especially since even the column headers are yellowed out. Looks like output from some custom script to me. It's a rather common problem that people expect others to be able to understand some output snippet just because they understand it themselves. I'm doing that all the time myself.
|
|
|
I see only one other user who has noticed the same bug that I'm experiencing. Maybe it's because there should be no reason to reload the page. Or is this really working correctly for everyone else? https://bitcointalk.org/index.php?topic=43514.msg1453386;topicseen#msg1453386

After enabling Group by Price and reloading the page, the checkbox functionality is reversed and I have to uncheck it to enable grouping again. This doesn't happen if I hit enter on the address bar, or close the tab and reopen the URL; only when pressing F5 or the reload button on the address bar. I'm not seeing any errors reported by Firefox on this page, using version 21.

BTW, what's the difference between reloading using F5 (or the reload button) compared to hitting enter on the address bar? I didn't know there was any difference, but clearly there is, judging from the behavior of this bug. I guess it's because web pages can modify what's actually reloaded when using a refresh, while "going to a URL" always reloads completely.
|
|
|
A little while ago the difficulty went up to 19.3M.
I'm running two instances of CGMiner 3.2.1 and they both reported going to 19.3M. About 10 minutes later one of them went back to 15.6M. And now, about 15 minutes after that, it went back to 19.3M.
Any idea what just happened here? Has anyone else ever seen something like this before? Thanks, Sam
Can't imagine what happened there unless you have multiple pools that were disagreeing about what the current block is.

I'm seeing the same thing on multiple rigs, which have all been restarted after the difficulty change. One was started only two hours ago, and it's already flipped between 15.6M and 19.3M three times. I have six pools configured: BitMinter, Bitparking, Ozcoin, Slush, Deepbit & 50BTC. Running a git version of 3.2.2 compiled today. I'll try enabling debug output on some rig and pastebin the relevant parts.
|
|
|
I updated sources from git and compiled a new version. Ironically, doing so resulted in this:
Error: Error loading wallet.dat: Wallet requires newer version of Onecoin
|
|
|
Hey, this is the 5th largest pool. I don't think we are doomed yet
|
|
|
From the third line of the cgminer README (https://github.com/ckolivas/cgminer/blob/master/README): "Do not use on multiple block chains at the same time!". This notice was added in version 2.0.6. I've seen complaints of random crashes and shares not being submitted here. The reason could simply be that cgminer is not designed to work this way.

We'd need some clarification on what specifically that warning is about. Technically this isn't using cgminer on multiple block chains at the same time...

Here's what ckolivas had to say about it on IRC: "cgminer keeps a database of blocks so changing chains means the database is wrong". I have no idea what consequences that would have. Maybe I misread the problems people were having with cgminer here. I was considering a similar chain-hopping method myself, but decided to go with restarting cgminer instead, because of the multiple chains warning.
|
|
|
Somehow my Jalapeno hashrate dropped from 5.75GH/s to about 5.55GH/s. snip
I don't know, but when I first got mine it worked at 6GH/s for a couple of days and then spontaneously "dropped a gear" after a move and reconnect, and it's been 5.7GH/s ever since. I've never been able to regain that .3 either.

I let the Jalapeno idle for 10 minutes and changed the USB header it was connected to. Now it's hashing at 5.75GH/s again. Thanks to KNK at the BFL forums for suggesting this! But since the Jalapeno already slowed down once, I'm not so confident anymore that it will keep hashing at 5.75GH/s indefinitely without problems.
|
|
|
Somehow my Jalapeno hashrate dropped from 5.75GH/s to about 5.55GH/s. It had been mining stably at full speed for over a week previously. The drop happened after I briefly ran the Jalapeno with BitMinter client during some maintenance, but that might be purely coincidental. BitMinter client was also showing over 5.7GH/s during the short time I tried it. I've tried running the same cgminer binary that produced 5.75GH/s, and the newest version from git, but the Jalapeno just won't hash faster than 5.55GH/s now. Even the temps are about 5C lower now. Any ideas how the hashrate would suddenly plummet like that permanently?

EDIT: I moved this question to the BFL forums ( https://forums.butterflylabs.com/jalapeno-single-sc-support/3136-jalapeno-hashrate-dropped-200mh-s.html). If anyone has any insight, that might be a better place to answer, so people searching through Google etc. would actually find it.
|
|
|
Sounds like you're running a trading bot and are angry because you're encountering weird bugs. Guess what, normal users might not be affected enough for it to matter, certainly not as much as you're trying to argue. Let's take my case for example: I've executed 1317 transactions on Bitstamp, totaling $1276.26 in fees. If Bitstamp really added an extra $0.01 to all those transactions, they've stolen $13.17. That's 1% of the fees I've paid. I can see this being a problem if you are a spammer of 0.01 BTC trades. BTW, who is the 'spammy fuck' now? EDIT: I can see the rounding problem in my fees easily, it just doesn't add up to very much in the end. Only ~20 of my 1317 transactions were smaller than 2 dollars.

Basically, if you are stealing small enough it's OK to do it! Just so the public knows, you fuckin idiot. You are too stupid to be affected, can't even do the math properly (yes, your calc. is wrong), or you are a scammer yourself (judging from your criteria of doing business), or both.

Please enlighten me on how that calculation is wrong. In my mind, it goes like this: 1317 transactions, a maximum of $0.01 stolen from each. That totals $13.17. It's hard to get 0.01*1317 wrong. Of course, not every transaction has had the maximum $0.01 stolen from it, so the total would be less than $13.17. So my transaction fees have been anywhere from 0.5% (naively expecting that $0.005 would be the average stolen fee) to 1% (13.17/(13.17+1276.26)) higher than expected. Yes, that might be called stealing. Do I care about it enough to whine endlessly on forums? No. Have you considered that maybe your dickish attitude is why you aren't getting any help with your problems? You seem to have a crusade against Bitstamp.
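For anyone who wants to check the arithmetic, here it is spelled out (all numbers taken from the post; "worst case" assumes every single fee was rounded up a full cent, which is an upper bound, not a claim about what actually happened):

```python
# Upper bound on the fee-rounding overcharge, figures from the post.
n_tx = 1317
fees_paid = 1276.26          # total fees actually paid, USD

worst_case = 0.01 * n_tx     # $13.17 if every fee was rounded up a full cent
avg_case = 0.005 * n_tx      # ~$6.59 if the average overcharge was half a cent

worst_share = worst_case / (worst_case + fees_paid)   # ~1.0% of fees
avg_share = avg_case / (avg_case + fees_paid)         # ~0.5% of fees
print(f"overcharge between {avg_share:.2%} and {worst_share:.2%} of fees paid")
```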
|
|
|
|