What's the best way to stack these? Currently I have mine on their sides, sitting on top of each other. Should I take off the cases, or are they fine with them on? I don't plan on OC'ing ATM.
W/O a custom rig, on their sides like you have them is the best way. Double-check temps to make sure the cases aren't causing any airflow restrictions. I put my S1s vertically, on simple 2x4s, with the inlet on the bottom and the outlet on top. I group a set (whatever I can fit on one PSU) together, spaced as wide apart as I can while still keeping them in the airflow of a box fan. M
|
|
|
What if you buy $80,000 worth of S3s? I'd have around 66 TH/s of hashing power. Since I don't pay for electricity, I'd get my ROI in about 71 days and make about $300k in one year...? Why isn't everyone doing this.. Lol
Even if you were able to get enough free electricity and cooling for that much hashpower, the numbers still don't add up. At a 35% difficulty increase per month, in a year you'll be about $55k in the black. M
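M's arithmetic can be sanity-checked with a quick geometric-series sketch. All inputs below are hypothetical, backed out of the posts above (the 71-day ROI claim and the 35%/month difficulty growth), not real figures:

```python
# Sketch: constant hashrate with difficulty rising 35% per month means
# mining revenue shrinks by the same factor each month. Inputs are guesses
# inferred from the posts, not actual market data.
hardware_cost = 80_000.0
first_month_revenue = hardware_cost * 30 / 71   # ~$33.8k in month 0, per the 71-day ROI claim
difficulty_growth = 1.35                        # +35% per month

revenue = sum(first_month_revenue / difficulty_growth**m for m in range(12))
profit = revenue - hardware_cost
print(f"12-month profit: ${profit:,.0f}")
```

Even with free power, the geometric decay caps total revenue far below the naive "flat 12 months" figure, which is roughly how you land in the tens of thousands rather than $300k.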
|
|
|
8.8.8.8 and 8.8.4.4 are Google's DNS servers. Nearly always up and reliable.
$comment++; I've always used 8.8.8.8 on my mining gear with zero problems. Easy to remember and always up. It's also Google, which means every lookup you do is stored in a database somewhere. I recommend OpenDNS: 208.67.222.222 and 208.67.220.220 (www.opendns.com). M
|
|
|
We just switched mining to p2pool from BTCGuild. No general problems over there, but I have been reading about p2pool and think it may be a better choice in the long term for many reasons.
I'm considering building our own pool node since we have the computing resources to do it in a proper datacenter setting. I have a couple main questions about resource needs, if anyone who's running pools currently could chime in with real-world experiences I'd appreciate it.
First off, can I run a "private" node, inside the firewall, without opening any ports inbound? Just something that is part of the network that we could point our miners to for our own use for now. Being able to do this would inform where we run the node; keeping it inside would help the latency between the miners and the node. Eventually I could see us opening it up for others if there's enough demand for a local node (Northern NJ, which seems to have decent coverage already, I think).
You should at least forward the p2pool port. That'll get you more peer connections and make you more likely to win the share race; same for the Bitcoin port. But technically, no, you don't need to. M
|
|
|
I'm a bit disappointed by my S3s on p2pool. After 1 day of mining:
The top one got a lot of hardware errors, so I just run it at stock; the bottom one overclocked quite well to 250. They started off well on p2pool, but the reported hashrate is pretty bad on both. Note: the node is not close to me, but that shouldn't make this much difference; my old S1s did fine with it.
EDIT: anyone know what "discarded" shows? Mine seems pretty high.
[img]https://i.imgur.com/tCkQfJN.jpg[/img]
Your error rates look great, both well under 1%. Have you tried another pool to see how they perform? jonnybravo0311 seemed to get the best results running them at stock clocks.... Edit: Ant error rate calculator: http://www.coincadence.com/antminer-s1-hardware-error/ (it was built for the S1, but works the same for the S3...) So far, yes, I've seen the best results at stock clocks. One thing I have noticed is that one of my S3s slowly degrades in performance. It hashes at 440 GH/s for a while... then slows down to 430 and 420. Rebooting it sometimes helped, but a lot of times on reboot, I'd notice one of the ASICs would show a status of "-" instead of "o". Sounds like S2 behavior. M
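The error-rate math behind a calculator like the one linked above can be sketched as follows. The key fact is that each diff-1 nonce represents 2^32 hashes on average, so you can estimate the total nonces found from the hashrate and uptime; the inputs below are hypothetical:

```python
# Hedged sketch of the Antminer HW error-rate calculation: error rate is
# hardware errors as a fraction of all diff-1 nonces found. Numbers are
# made up for illustration (an S3 at ~441 GH/s running for one day).
def hw_error_rate_pct(hashrate_ghs: float, uptime_s: float, hw_errors: int) -> float:
    nonces = hashrate_ghs * 1e9 * uptime_s / 2**32   # expected diff-1 nonces
    return 100.0 * hw_errors / nonces

rate = hw_error_rate_pct(441.0, 86_400, 15_000)
print(f"{rate:.3f}%")   # well under the 1% threshold mentioned above
```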
|
|
|
From what I've seen of the S3s, they're virtually identical to the S1s. I suspect they'll work as-is as "S1s" in M's Ant Monitor. Anyone tried one yet?
M
|
|
|
Just pushed out v4.0. This fixes the problem with hash rate overflows on BTC Guild, and potentially other pools, with large numbers of hashes. Download link: MPoolMonitor40.zip M
|
|
|
Ordered on the 1st; it arrived on the 17th. Here is an Antminer S3 actually running p2pool, and it's doing much better than expected.
Very good, although that's about what you'd expect, since it's a local node. Not really. My S2s lose ~10% of their hashrate on p2pool, local node or not. This is good news indeed for p2pool. M
|
|
|
There is a fix provided by kano (https://github.com/kanoi/cgminer-binaries/tree/master/AntS2) that addresses the p2pool issue with the S2. It's not 100%, and the problem with it is that he built it on the S2 firmware that doesn't want to hold your settings after a reboot. What I have done to improve the average hashrate by 5%: upgrade the S2 to the latest firmware so it will retain pool settings, then SSH into the S2, cd /etc/init.d, vi cgminer.sh, go to the bottom of the file, and edit the PARAMS= line. I set the queue to 0 and added this before the end comment: --scan-time 1 --expiry 15. For some reason they ship with the queue at 8192, which causes a ton of work to be discarded. Before this fix I had over 500,000 discarded shares daily; now I'm down to several thousand in 24 hours. Hope this helps someone out there :D 73's Doug d57heinz Kano's S2 update makes S2s completely unusable with p2pool and Eligius. I tried it back when it came out. I tried changing the queue as well, and it didn't make any difference for me. M
|
|
|
Congrats. I'm just waiting until they open up for sale again; I missed the 1st batch since I didn't see the p2pool question answered, but they got it to work. They did? I thought they were still working on this.. Link? Work, or work properly? S2s work, as long as you don't mind throwing away 10% of your hashrate. M
|
|
|
I think I understand your proposal, but how do you verify shares in a proposed "round", their order, and their validity for that round without including some data from the (immediately) previous share?
In other words, if I understand you right, miners would be able to throw shares in during a round with the only proof being shares from the previous round, and not including shares already in the current round... So the question becomes: how do you verify the validity of the existing shares in the current round? How can you stop a single miner from withholding round shares until just before the end of a round, and then pushing out a longer or larger round just before the round ends and taking the whole round?
There is still a chain; it's just not a chain as we know it today. The current round is based upon the last Bitcoin block + the payout data for all the work submitted up to that point. That work comes from the share chain, as it does today. So the current round is STILL based upon all prior work, and can still be verified as such. Hence each share submitted for that round can be verified, as it has to be based upon that round, which is based upon all the prior work. Withholding shares won't accomplish anything, as far as I can tell. Remember, a round lasts until the next Bitcoin block is found on the Bitcoin chain, nothing else. It's not x number of shares, it's not x minutes; it's until the next Bitcoin block is found. Maybe I didn't clarify that it's the next Bitcoin block found on the main Bitcoin chain, NOT the p2pool chain? M
|
|
|
Awesome work windpath. Can I ask, what is the end game for hosting a 0% p2pool node? Obviously it costs you money. One goal is to support a decentralized Bitcoin, I'm sure, but what else are your goals? I'm running a 0% fee node because it costs me a pittance to run, and I can contribute that pittance to keeping the network going. M
|
|
|
The entire idea of the p2pool share chain is that no one can forge hash power and everyone can check who is mining, like in the Bitcoin block chain. I don't see HOW you want to collect and VERIFY data collected by miners across the entire POOL. The only way I see is some kind of MOPS node, which is NOT sending LP/WR signals on every share and is collecting shares at share diff 256 or something. That way the node payout can be split between the node's miners, but without fast LP/WR signals the node's efficiency will be compromised.
Shares would still be distributed and verified. Those that are invalid would be rejected, as they are today. The difference is, instead of each share being built upon the prior one, each share is built upon the prior work round, and the prior work round is based upon the prior Bitcoin block + shares in chain prior to that block. I wasn't looking to compromise the trust no one principle. M
|
|
|
3 - Share verification has to change, as it's no longer a chain, but a sequence of rings connected by Bitcoin blocks. I'm not seeing how this is a show stopper, as each node should be able to verify the share's legitimacy and confirm the current job we're working on is legit.
This is the kicker, as it would require an entirely new trustless proof-of-work system... Right now each p2pool share is built on the existing shares in the chain, and includes the previous shares' payouts to each miner. Without a blockchain-like system, the proof of work for p2pool (and the trustless payouts to miners for previous shares) breaks. I'm not seeing how it's not trustless. Each node can verify the blockchain, and how it was derived. Each share can be verified as being a legitimate share for that round of work, and each round of work can be verified as coming from the prior round + shares. Am I missing something? M
|
|
|
I was thinking about the whole S2/S3 problem with p2pool. Do we know if other ASICs have the same problem, they can't respond quick enough to the 30s or less "drop everything you're doing and start over" requests? Assuming so, is this a possible solution?
The basic requirement is changing p2pool to stop having to send those job restart requests every 30 seconds. We know if we change the share time to more than 30s, share difficulty increases accordingly.
What if p2pool only sends the job restart request when a new Bitcoin block is found, roughly every 10 minutes?
For that to work, it'd require a fundamental change in the p2pool "alt" share chain.
Instead of it being a chain, make a sequence of rings. Each ring is composed of shares submitted by miners built upon the last Bitcoin block and the payout data of shares in the p2pool altchain. They are _not_ built upon the last share submitted.
This has a few immediate consequences:
1 - When a p2pool block is found, payment is based entirely upon shares submitted up to the last Bitcoin block. Every share submitted after that gets bundled into the next "ring". That means payments are roughly 10 mins behind your shares, which I feel is of no consequence.
2 - Share orphans will drop significantly, since each share is no longer attempting to build upon the prior share, but upon the prior job p2pool submitted. Again, that job is based upon the last Bitcoin block + the payout data of the p2pool altchain up to that last Bitcoin block.
3 - Share verification has to change, as it's no longer a chain, but a sequence of rings connected by Bitcoin blocks. I'm not seeing how this is a show stopper, as each node should be able to verify the share's legitimacy and confirm the current job we're working on is legit.
4 - Job restarts are submitted on average once every 10 minutes, according to the Bitcoin chain.
5 - We're talking about a fork of the p2pool chain.
6 - Alt chain share size can not be arbitrarily set, presumably lower, to solve the "low hash rate miner eligibility" problem. However, the lower it gets, the bigger the rings get for each round, which means a longer share chain. We could keep less than 3 days' worth to compensate if need be. (I would argue it'd be better to lose some private nodes that don't have the bandwidth and get more miners than to have more, smaller nodes but a less secure "chain".) Lastly, it could be based on the number of shares submitted per 10 minutes, so the difficulty could still adjust each round if need be.
Thoughts?
M
EDIT: Added point 6.
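The "rings" idea above can be sketched as a data structure. This is purely illustrative (all names are mine, not from any p2pool code): each ring anchors to a Bitcoin block and a payout snapshot instead of to the previous share, so shares within a ring don't depend on each other's ordering.

```python
# Illustrative sketch of the proposed ring structure: a share commits to the
# ring's Bitcoin-block anchor (which itself commits to all prior work via the
# payout snapshot), not to the previous share. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Share:
    miner: str
    anchor_btc_block: str            # must match the ring's anchor to be valid

@dataclass
class Ring:
    anchor_btc_block: str            # Bitcoin block that opened this round
    payout_snapshot: dict            # miner address -> accrued payout at round start
    shares: list = field(default_factory=list)

    def accept(self, share: Share) -> bool:
        """A share is valid for this round iff it commits to the same anchor."""
        if share.anchor_btc_block != self.anchor_btc_block:
            return False
        self.shares.append(share)
        return True
```

Under this sketch, a late or out-of-order share can't orphan another share in the same ring; it's either anchored to the current round or rejected outright.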
|
|
|
Can I mine to an address which isn't in my wallet with the command below? -a BITCOIN_ADDRESS : Mine to this address instead of requesting one from Bitcoin. Kindly, MZ As long as it's not an exchange address, yes. M
|
|
|
I think there's an error somewhere in the hashrate calculation on Coincadence. The same data is available on p2pool.info (check the 3rd tab, "Active Users"). It shows the first address clocking in at almost 3 TH/s, and the second one at 1.5 TH/s. M EDIT: One more thing to remember. It took me the longest time to figure out how the Bitcoin protocol knows the network hashrate. I believe the answer is: it doesn't. Instead, it makes an educated guess. If X blocks are being solved in Y time at difficulty Z, then the network hashrate is approximately H. Obviously with larger numbers that value is more accurate, but it's still an estimate. P2pool is no different. If a given address has X shares on the chain in Y time (which I believe is fixed at ~3 days), with the known share difficulties at the time of each share, then the hashrate for that address is approximately H. Since luck will cause the number of shares to vary, the estimated hashrate for that user will vary. Apply that logic to the entire sharechain, and you can see why the pool hashrate appears to jump all over the place. It isn't really jumping that much; what you're seeing is the effect of variance (i.e. luck) on the sharechain.
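The "educated guess" M describes can be sketched in a few lines. A difficulty-D block takes D * 2^32 hashes on average, so solving N blocks in T seconds implies a certain average hashrate; the numbers below are hypothetical:

```python
# Sketch of the hashrate estimate: neither Bitcoin nor p2pool "knows" the
# network hashrate; it's inferred from how fast blocks (or shares) arrive.
def estimated_hashrate(blocks_found: int, difficulty: float, elapsed_s: float) -> float:
    """Average hashes/second implied by solving `blocks_found` blocks."""
    return blocks_found * difficulty * 2**32 / elapsed_s

# Hypothetical: 144 blocks in one day at difficulty 2e10.
hps = estimated_hashrate(144, 2e10, 86_400)
print(f"{hps / 1e15:.0f} PH/s")
```

Because block arrivals are random, the same true hashrate produces a different block count each day, which is exactly the variance M points to.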
|
|
|
There are all sorts of things that can affect expected payout. One is luck. Another is how long that user has been here. If both users are past the 3-day PPLNS period, then the numbers are probably caused by luck, but could also be an error on the website. M Another possibility is that the "smaller" miner is actually spread across multiple nodes with the same address. I do that with rigs pointed at my personal node in the US, or ones in the EU or Asia, depending on lowest latency and distance. Failover nodes factor in too. So 1 TH may not actually be just 1 TH, but instead just 1 TH on Coincadence. I'm pretty sure Coincadence is looking at p2pool stats, which means it's everything for that address, not just what's on that node. M
|
|
|
Yea, I considered diff changes when writing the collector code (before adding the p2pool stats page...). Right now it bases luck on the current diff; it would not be hard to add historical diffs and store future ones, back to the beginning of the luck calculations...
I assume the 30 day average diff would be most accurate?
I don't think so. You have to use the difficulty at each block, just like you have to use the pool hash rate at each block. Also, how best to calculate diff changes when a block spans 2 diffs? Is it the average of the 2 diffs, or the diff when the block was found?
It would be when the block is found. I think if you were to average, it'd have to be a weighted average: if you had 24 hours at one difficulty and 12 hours at the next, the 24 hours would "weigh" more in the average. While I think it's not technically 100% correct (Organ, are you here?), it's a lot easier just to use the difficulty at the time the block was solved.
- average solve time: when is this calculated from?
That's how long, on average, it should take for the block to be solved, at the pool hashrate at that time and the current difficulty. Something like: network_difficulty * 2^32 / pool_hashrate. So if you know how long it should take (in seconds), and you know how long it really did take (in seconds), then it's simple division to see if your luck was good or bad: AvgSolveTime / ActualSolveTime. If both are equal, you're at exactly 100% luck. If AvgSolveTime is higher, you're above 100% luck; if it's lower, you're below 100% luck. To span across blocks, you divide the sum of one by the other: SELECT sum(AvgSolveTime) / sum(ActualSolveTime) Disclaimer: as far as I know my numbers are right, and they jive with the more limited info Eligius shows. M
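The luck formula spelled out above can be sketched directly; the difficulty, hashrate, and solve-time inputs below are hypothetical:

```python
# Hedged sketch of the luck calculation described above:
# expected solve time = difficulty * 2^32 / pool_hashrate, and luck over
# multiple blocks = sum(expected times) / sum(actual times).
def avg_solve_time(difficulty: float, pool_hashrate: float) -> float:
    """Expected seconds to solve one block at this difficulty and hashrate."""
    return difficulty * 2**32 / pool_hashrate

# Two hypothetical blocks, each with the difficulty/hashrate at the time:
expected = [avg_solve_time(2e10, 1.0e15), avg_solve_time(2e10, 1.2e15)]
actual = [80_000.0, 75_000.0]        # seconds each block really took
luck = sum(expected) / sum(actual)   # > 1.0 means better than 100% luck
print(f"luck: {luck:.1%}")
```

Using the per-block difficulty and hashrate, as M insists, matters because a single averaged difficulty would skew the expected times whenever the pool's hashrate or the network difficulty moved during the window.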
|
|
|
Any chance of a mobile app for Android?
Slim to none anytime soon. Sorry. M
|
|
|
|