It's over, the Eternal Block is over! Now that the servers have been stress tested by a 7 million share block, let's double our old record and go for 16m! Just kidding, don't hurt me T_T.
|
|
|
Patch was just deployed on US West (which has currently taken over every server except for useast and uscentral) to fix the stale/invalid issue that was happening during the 1.8 TH/sec stress test. Giving it about an hour to settle in, then I'll be pointing East/Central at it again temporarily to determine if the problem is truly behind us now.
|
|
|
I'm getting about 1.5% stale shares. Why is it so high? For comparison, Deepbit gives me about 0.4% stales
As stated in my post directly above yours: I was running a stress test on US West to see what kind of speed it could handle. It turns out it can easily handle the entire pool's worth of speed right now; the issue was that it was tossing out stales (no idles, though). I've found out why it was tossing stales, and will attempt another test tonight. The goal is to consolidate the servers into one load-balanced location to reduce the overhead of syncing servers all over the world, and to give us one location behind a provider that understands DDoS and will help us with perimeter-level filtering if we are attacked again. Cutting out some of the monthly costs is another plus.
|
|
|
Just concluded a stress test on US West (pointed all DNS entries to it for about 45 minutes). 1.8 TH/sec and no idles reported in the chat room during the test. We did find that occasionally a long poll would be met with multiple stales (sometimes as many as 5 or 6), so I'm looking into that before merging any servers into it.
|
|
|
I only mined 0.97 BTC in the last 24 hours with 2.3 GHash. What's the problem? That's like 50% of what I should be mining.
2.3 GH/s at this difficulty, assuming 0% luck influence and no idle time, works out to 1.48 BTC per 24 hours. Current luck is -30% over 24 hours. Bad luck + idle time on the servers = lower payouts in the last 24 hours. NOTHING CAN BE DONE ABOUT 24 HOUR LUCK. EVERY POOL HAS BAD LUCK STREAKS, EVERY POOL HAS GOOD LUCK STREAKS. Luck is not the WHOLE issue, but it's the majority of the issue. Idle time is the other side, and that has been fixed. US West has been running at over 1.2 TH/sec, and has not had a single idle in the 4 hours since re-installing the load balancer.
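For anyone who wants to check that math themselves, here's a minimal sketch of the expected-earnings calculation. The difficulty constant is an assumption (roughly the figure around the time of this post); substitute the current value from the stats page:

```python
# Minimal sketch: expected daily payout at a given hashrate, assuming
# 0% luck influence and no idle time. DIFFICULTY is an assumed value.
DIFFICULTY = 1_563_028     # assumed network difficulty at time of post
BLOCK_REWARD = 50          # BTC per block in 2011
HASHES_PER_DIFF1 = 2**32   # expected hashes per difficulty-1 share

def expected_btc_per_day(hashrate_hps: float) -> float:
    hashes_per_day = hashrate_hps * 86_400
    blocks_per_day = hashes_per_day / (DIFFICULTY * HASHES_PER_DIFF1)
    return blocks_per_day * BLOCK_REWARD

print(f"{expected_btc_per_day(2.3e9):.2f} BTC/day")  # -> 1.48 BTC/day
```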
|
|
|
Sorry eleuthria, I have to switch pools for a while (at least until you get this all worked out). My normal payout in 12 hrs is 1.3 BTC, and in the last 12 hrs I got 0.65 BTC. This has been happening since you brought the new server online. I understand there can be bad luck, but I'm thinking these really long rounds (so many of them) are a result of miners switching pools when problems are detected, thus 2 THash/s is really not 2 THash/s. With me donating, it just does not make sense to stay and mine until the bugs are worked out. Normally I am one to wait it out, but something feels not right here this time.
Good luck, I hope you are able to get it under control.
Luck in the past 24 hours: 2,222,279 shares (-29.7%). That doesn't include our current round, which will drag it down even more. For those keeping up with other pools, right now BTC Mine has had a streak pushing them into +90% luck. Luck is just that: LUCK. The server issues are _NOT_ affecting our luck; they are affecting our hash rate, which means less speed is being utilized to normalize the effects of luck (although you can NEVER eliminate luck, and +/- 30% is still very possible even at 3-4 TH/sec). The new server has now been spitting out shares for the past 3 hours without a single idle, except for the two seconds it took to re-enable long polling.
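If you're wondering where a number like -29.7% comes from: assuming that stat is the average shares per round over the window, luck is just the expected shares per block (the difficulty) measured against the actual shares submitted. A rough sketch, with the difficulty as an assumed value:

```python
# Rough sketch of a round-luck calculation. DIFFICULTY is assumed;
# positive means the round was solved in fewer shares than expected.
DIFFICULTY = 1_563_028

def round_luck(shares_in_round: int) -> float:
    return DIFFICULTY / shares_in_round - 1

print(f"{round_luck(2_222_279):+.1%}")  # -> -29.7%
```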
|
|
|
you realize you're allowed to sleep, right?
I get a few hours. During those few hours the servers team up and break on me; that way I have an urgent reason to wake up instead of hitting snooze. Long Polling is back on US West, so the stale rates should drop back down to < 1%.
|
|
|
US West is reachable again (for now). I did a complete reset on our pfSense load balancer. Sometime around 3 AM it completely locked up and stayed that way until I woke up at 5:15. 100% CPU lockups are a pain because they mean no logging can even occur to give some pointers at the cause.
It continued locking up roughly every 10 minutes after a reboot. This, plus the strange long polling related lockups, was reason to reset the VM completely.
This new server is a completely different beast from our others, and the technical problems it's having simply can't be recreated in a local environment, due to IP-based load balancing on tens of thousands of connections. The load balancer had some issues being set up initially due to a different network environment than my original test server, so perhaps this reset got rid of some poor configuration done on my end during the initial setup.
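For those curious what IP-based load balancing means in this context: conceptually, each miner's source IP deterministically maps to one backend, so all of that miner's connections land on the same pushpool instance. A toy sketch of the idea (pfSense's actual algorithm differs, and the backend names here are made up):

```python
# Toy sketch of IP-hash ("sticky") balancing: the same source IP always
# maps to the same backend. Not pfSense's actual implementation.
import hashlib

BACKENDS = ["pushpool-1", "pushpool-2", "pushpool-3", "pushpool-4"]

def pick_backend(src_ip: str) -> str:
    digest = hashlib.sha1(src_ip.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

print(pick_backend("203.0.113.7"))  # deterministic for a given source IP
```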
|
|
|
us.btcguild.com is temporarily not using Long Polling while I debug the issue it's causing with our load balancer. It was spiking CPU usage to 100% on the LB, which caused the servers to essentially vanish from the internet for a short time. This is why we had two invalids today: the servers weren't able to push the new block to the network due to being cut off from the outside world temporarily.
Working to get it back up as soon as possible. Having long polling disabled doesn't break anything, but it can give you 0-3 stales when a new block is found on the network.
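For anyone unfamiliar with what long polling buys you: the miner parks one HTTP request on the pool's long-poll URL, and the server only answers it when a new block appears, so the miner can discard stale work immediately instead of finding out on its next getwork. A rough client-side sketch (the URL and path here are made up for illustration):

```python
# Rough sketch of a getwork long-poll client. The URL and path are
# assumptions for illustration, not the pool's actual endpoints.
import requests

POOL_LP_URL = "http://us.btcguild.com:8332/LP"  # hypothetical long-poll URL

def wait_for_new_block(auth) -> dict:
    # The server holds this request open until a new block is found,
    # then responds with fresh work for the new chain tip.
    resp = requests.get(POOL_LP_URL, auth=auth, timeout=None)
    return resp.json()["result"]
```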
|
|
|
2011-07-10 17:23:40: Listener for "5850-a2": 10/07/2011 17:23:40, Problems communicating with bitcoin RPC 0 2
2011-07-10 17:23:46: Listener for "5850-a2": 10/07/2011 17:23:46, Problems communicating with bitcoin RPC 1 2
2011-07-10 17:23:52: Listener for "5850-a2": 10/07/2011 17:23:52, Problems communicating with bitcoin RPC 2 2
2011-07-10 17:23:58: Listener for "5850-a2": 10/07/2011 17:23:58, Problems communicating with bitcoin RPC 3 2
Getting a bunch of these when connecting to us.btcguild.com, and my miner falls back to my backup pool.
I had to do a complete reset on us.btcguild.com to fix a problem with the load balancer, and at the same time put in a patch that issues keep-alive packets on the Long Poll connections from pushpool. Monitoring it all very closely right now to make sure it's working.
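The keep-alive idea, roughly: long-poll connections can sit idle for minutes at a time, and an idle TCP connection may be silently dropped by a load balancer or NAT along the way. A sketch of the concept using plain TCP keep-alive (the actual pushpool patch may do this at the application layer instead):

```python
# Sketch: enable TCP keep-alive probes on a long-poll socket so idle
# connections aren't silently dropped. Tuning options are Linux-specific.
import socket

def enable_keepalive(sock: socket.socket) -> None:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # first probe after 60s idle
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)  # then every 15s
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)     # give up after 4 misses
```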
|
|
|
Would I get better performance by connecting directly to uswest.btcguild.com, instead of us.btcguild.com? I get very low ping on uswest compared to useast.
us.btcguild.com is the same as uswest.btcguild.com. The US West (/US) load balancer is having some issues; I'm working on a solution right now.
|
|
|
What is the difference between btcguild.com and btc-guild.com? Maybe someone is trying to phish your site?
JavaScript has been added to the site (the only piece I plan to -ever- add) to break out of the frames being used by that scam.
|
|
|
Just applied a fix to the US (/US West) server that should kill off the idles that were popping up roughly once per minute.
|
|
|
WTF? UScentral Overloaded ===> 853.18 GH/s
Work queue empty, miner idles and disconnects.. yay
I've been tweaking the DNS entries to get the split more even. US East and Central idles have stopped for the last hour or so. The new server is almost ready, but I'm making sure the other servers will sync with it when they complete a round. This server is a completely different beast from the others, so I have to be very careful before deploying it live, since it is essentially 4+ servers at once.

UPDATE: Two blocks finished, which was enough to give confidence that the new server will properly sync with the other servers. The new server is now online. Any DNS entry which is not explicitly used has been pointed to it: us., uswest., eu., nl., nl1., nl2., de2., and guiminer. all use the new server. Once it has proven itself under moderate load, I will include the generic btcguild.com address, and slowly make plans on migrating US Central into it.
|
|
|
US East is showing a similar bottleneck to the original EU (DE) stress test of about 1 TH/sec.
I've modified some of the DNS entries to try to spread some of US East's load onto the other servers. The new server may come online in the next few hours if all goes well.
|
|
|
There was a brief (~10 minute) window on US East where it was tossing out idles fairly frequently. It required a quick restart (~5 seconds).
Been smooth since then [although it will now explode after I've said that].
|
|
|
I'm still curious why running 7 VMs on very powerful hardware is better than running one main OS with one bitcoind and one pushpool, each of which would of course get 7x the resources. It's not like it's spreading out the risk; it's all on one piece of hardware with one network connection, right?
Pushpool and bitcoind cannot fully utilize multi-core systems. Parts of them are threaded, but certain aspects can only utilize one thread, and those are the primary sources of miner idles. Additionally, the volume of connections hitting the servers can cause issues with the Linux TCP stack, although a lot of tweaking has been done to eliminate that bottleneck.

By splitting a single server into the following components:
- Load Balancer (pfSense)
- MySQL Server
- Pushpool Servers

we work around the programs' inability to FULLY utilize multiple cores by giving each instance a smaller load on virtual cores. It also reduces the total number of connections each pushpool has to loop through when pushing out long polls, which is part of the reason you'll occasionally see a 2-3 second miner idle when a long poll is being pushed out (a toy model of this cost follows below). The overhead of XenServer is minimal. It also means that if a pushpool instance freezes (which hasn't happened recently), the people connected to it will simply be routed to a different instance of pushpool: a few seconds of downtime instead of a few hours.
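To make that long-poll cost concrete, here's a toy model (not pushpool's actual code) of why pushing a long poll is O(n) in the connections a single instance holds, and why splitting those connections across instances shrinks the idle window:

```python
# Toy model: broadcasting a long poll costs time proportional to the
# number of parked connections on one instance. All values are made up.
import time

class Conn:
    def send(self, work: bytes) -> None:
        time.sleep(0.0001)  # pretend each write costs ~0.1 ms

def push_long_poll(clients: list, new_work: bytes) -> float:
    start = time.monotonic()
    for conn in clients:     # O(n) in connections held by THIS instance
        conn.send(new_work)
    return time.monotonic() - start

one_instance = [Conn()] * 20_000   # all miners on a single pushpool
split = [Conn()] * 5_000           # same miners split across 4 instances
print(f"{push_long_poll(one_instance, b'work'):.1f}s idle window")  # ~2.0s
print(f"{push_long_poll(split, b'work'):.1f}s idle window")         # ~0.5s
```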
|
|
|
Sorry for the lack of updates this morning, woke up with a migraine that knocked me back to sleep until about 5 PM.
I'm hopeful that I can get the "super server" mentioned in the header online tonight. It is a priority over the API and Historical Stats, since the pool is growing back at a fairly quick rate, and this new server could likely support somewhere between 3 and 6 TH/sec (hard to know exactly how well it will scale, especially looking at US East's speed).
I got quite a bit of sleep, so I may be up late enough to get both done, although it'll be technically July 11th.
|
|
|
The Awknet server (which runs the website) is having some connectivity issues for some people. The pools are still working absolutely fine. Info on the My Account page may be showing incorrect stats due to sync issues during the connectivity troubles.
UPDATE: Seems to be back in order.
|
|
|
We actually have a number of people donating VERY high amounts, some even going as high as 100% for a few rounds. Thank you ALL very much for your support, I had no idea that many of you were serious about upping your donations once we came back online.
|
|
|