Bitcoin Forum
  Show Posts
261  Alternate cryptocurrencies / Announcements (Altcoins) / Re: 🌟[ANN]🌟[XUM]🌟LuminosityCoin🌟PoW/PoS🌟📈 Airdrop📈Android&WebWallet🌟 on: February 14, 2017, 08:25:51 PM
Here is a permanent node for XUM:

45.76.4.171
Thank you!
262  Alternate cryptocurrencies / Announcements (Altcoins) / Re: 🌟[ANN]🌟[XUM]🌟LuminosityCoin🌟PoW/PoS🌟📈 Airdrop📈Android&WebWallet🌟 on: February 14, 2017, 06:44:44 PM
Can anyone share a node, please?
263  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][SMF]SMURFCOIN / HYBRID on: January 31, 2017, 07:53:47 PM
Only solo? No pool?
264  Alternate cryptocurrencies / Announcements (Altcoins) / Re: MrsaCoin - MRSA - X13 - HYBRID on: January 24, 2017, 07:39:44 PM
Cryptopia done!
Next: YoBit!
265  Alternate cryptocurrencies / Mining (Altcoins) / Re: Open Source ZEC (ZCash) GPU Miner AMD & NVidia (up to 18.5 sol/s on RX480) on: October 24, 2016, 08:22:15 PM
Where do these 40 Sol/s per 290X card that Toomim is supposedly getting come from? Seems awfully high.
Secret sauce. We put it on our GPUs prior to cooking.

Btw, just a 290, not 290x.

(We wrote our own miner about a month ago. Our code is faster. No, you can't have it, sorry.)
266  Alternate cryptocurrencies / Mining (Altcoins) / Re: Open Source ZEC (ZCash) GPU Miner AMD & NVidia (up to 18.5 sol/s on RX480) on: October 24, 2016, 06:12:08 PM
The Gnoli miner and the Toomim miner do around 17.5 Sol/s on an RX 480 and 22 Sol/s on a 290X, respectively.
I don't know where you're getting your numbers from, but we are much faster than 22 Sol/s.
267  Alternate cryptocurrencies / Mining (Altcoins) / Re: Claymore's Dual Ethereum AMD+NVIDIA GPU Miner v7.2 (Windows/Linux) on: October 13, 2016, 09:38:00 AM
We're seeing an issue on our Linux-based, NFS-netbooted cluster with v7.2: after around 36 hours of operation, the miner starts doing a ridiculous number of "disk" accesses. This saturates our gigabit Ethernet (112 MiB/s) for about 5-10 minutes per rig.

As a workaround, we can restart the machines every 24 hours, and we'd like to do that with a cron job. What data should we send to the miner, and on which port, to make it restart?
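For concreteness, here's roughly what we have in mind on our side, assuming the remote management interface (-mport) is enabled and that the miner accepts a JSON restart request on that port. The method name "miner_restart" and the default port 3333 are assumptions on our part, not something we've confirmed for v7.2:

Code:
import json
import socket

MINER_HOST = "10.0.1.10"   # hypothetical rig address
MINER_PORT = 3333          # assumed default -mport management port

def restart_miner(host, port):
    # Connect to the miner's management port and send a restart request.
    # The "miner_restart" method name is an assumption; adjust if the miner
    # expects a different command on its management interface.
    request = json.dumps({"id": 0, "jsonrpc": "2.0", "method": "miner_restart"}) + "\n"
    conn = socket.create_connection((host, port), timeout=10)
    try:
        conn.sendall(request.encode())
    finally:
        conn.close()

if __name__ == "__main__":
    restart_miner(MINER_HOST, MINER_PORT)

A daily cron entry invoking a script like that would cover the restart; we just need confirmation of the actual command and port.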
268  Bitcoin / Pools / Re: What happened to P2Pool? on: August 28, 2016, 04:45:17 AM
I disagree with you regarding CPU load
Maybe it's just a problem I've had because I tend to run p2pool nodes with a lot of hashrate. I also always use SSDs, so I've never seen the HDD-to-SSD improvement.
269  Bitcoin / Pools / Re: What happened to P2Pool? on: August 27, 2016, 06:06:54 AM
What happened to P2Pool?

Three main reasons, as I see it:

1) Most Antminers (including the S4, S5, and S7, and possibly the S3) have problems with p2pool, and will lose hashrate and become unstable unless you use a special firmware. These problems were fixed with the S9, but during the earlier generations, p2pool lost a lot of hashrate.

2) As block sizes increased, the CPU load on p2pool nodes has increased as well. In order to get reasonable efficiency with p2pool, you now need a fairly fast CPU, like a 3+ GHz Sandy Bridge or faster. Network bandwidth and latency requirements have also increased.

3) There's no Chinese installation manual for p2pool.
270  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: August 04, 2016, 02:34:42 PM
Cool Beans! Speaking of enthusiasm...BLOCK!! Grin
Which block are you referring to? We found two.

Btw, windpath, http://minefast.coincadence.com/p2pool-stats.php is getting confused.
271  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 28, 2016, 02:28:30 PM
How do you see which node found the block?
This block was found by one of mine.

http://ml.toom.im:9334/static/share.html#000000000000000004ede06b3d98068574705bd40a494abaebbb6b883ad4875b

Note the "Peer first received from: null" message.
272  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: July 11, 2016, 06:14:55 AM
I've been out of the loop for a little while; what's been going on? I just started mining again. It's been going two weeks now and not one single BTC; last time it was every 4-5 days, unless I've mucked something up again.
P2pool is pretty small nowadays. At its current size, p2pool is expected to find one block every two weeks on average. Sometimes we'll get multiple blocks in a single week, and sometimes we'll go a month or longer between blocks. Unfortunately, you have to either be patient or switch pools.
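To put rough numbers on that: the expected time to a block is just the network difficulty times 2^32, divided by the pool hashrate. A quick sketch, with the difficulty and pool hashrate figures below being illustrative assumptions rather than exact current values:

Code:
# Rough expected time between p2pool blocks.
# Both numbers below are illustrative assumptions, not exact current values.
network_difficulty = 210e9      # assumed Bitcoin difficulty (~210 billion)
pool_hashrate = 750e12          # assumed p2pool hashrate in H/s (~750 TH/s)

expected_seconds = network_difficulty * 2**32 / pool_hashrate
print("Expected time per block: %.1f days" % (expected_seconds / 86400.0))

With numbers in that ballpark, it works out to roughly two weeks per block, which matches what we've been seeing.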
273  Economy / Computer hardware / Re: 3X S7's hosted at toomim on: July 07, 2016, 10:21:58 PM
I can confirm most of what squal1066 has said, with a few corrections:

1) Batch 1, has run sweet from day one, with PSU. Averaged 4.587 TH/s over 1 month.
2) Batch 8, runs nicely, slightly overclocked, with PSU. Averaged 5.024 TH/s over 1 month.
3) Batch 19, has one chip not reporting, nothing noticeable; does not come with a PSU, as I just bought a Platinum PSU for it for $110. Averaged 4.833 TH/s over 1 month.

What say you? $1200 for all? There's $160 of PSU there.

Hosting prices vary with the term you require: month-to-month it can be $95/kW/month, and 6 months is (I think) $80/kW/month. These units pull the normal 1.4-1.5 kW each.
Our monthly rate is currently $90/kW/month.

The PSUs included will work fine on 200V or above, but will only be enough to power 2/3 of an S7 if you only have 120V available. If you wish to run them on 120V, we can provide additional PSUs for $80 for a 90% efficient model or $110 for a 93.5% efficient 80+ Platinum model.

We metered #3 at 1.43 kW. We have not metered #1 or #2 recently, but can do so upon request.
274  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: June 30, 2016, 02:25:16 AM
- P2Pool.in: Downloaded Windows binary version 16.0, installs and runs fine. P2Pool Pool Rate: approx. 140 TH/s
- Github Master Branch: Downloaded and installed master branch code from Github P2Pool. Includes 16.0 update (I assume) P2Pool Pool Rate: approx. 900 TH/s.
I don't use Windows for my mining servers, so I have no familiarity with the p2pool.in binaries. The github master branch is the one true source for p2pool code, so I would recommend using that.

I don't see why the different software distributions would connect you to a different network with a different hashrate. It's possible that you just weren't able to acquire enough peers the first time you tried, and so you didn't get to see the whole network.
275  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: June 28, 2016, 11:49:42 PM
Looks like most of the stalling was in p2pool/data.py:439(generate_transaction):

Code:
         1158218988 function calls (1142867929 primitive calls) in 1003.809 seconds

   Ordered by: internal time
   List reduced from 3782 to 100 due to restriction <100>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
4082057/4082056  679.103    0.000  679.103    0.000 {method 'poll' of 'select.epoll' objects}
     6046   23.784    0.004   53.010    0.009 p2pool/data.py:439(generate_transaction)
 15805300   22.434    0.000   27.909    0.000 p2pool/util/pack.py:221(write)
 13644452   21.269    0.000   21.269    0.000 {_hashlib.openssl_sha256}
 13644451   14.189    0.000   14.189    0.000 {method 'digest' of 'HASH' objects}
   844076   13.670    0.000   19.192    0.000 p2pool/util/math.py:64(add_dicts)
  9104192   12.518    0.000   15.279    0.000 p2pool/util/pack.py:215(read)
Of about 15 minutes of total operation time, nearly 1 minute was spent inside generate_transaction or functions it called. I believe most of that time went to verifying one batch of 211 shares. This was running under pypy; when run under stock CPython, I think it takes much longer.

I also got this in my data/bitcoin/log file:

Code:
2016-06-28 10:25:46.131180 Processing 91 shares from 10.0.1.3:36464...
2016-06-28 10:29:38.855212 ... done processing 91 shares. New: 25 Have: 22532/~17280
2016-06-28 10:29:38.855714 Requesting parent share 01582c80 from 10.0.1.3:47276
2016-06-28 10:29:38.856757 > Watchdog timer went off at:

... boring stuff deleted ...

2016-06-28 10:29:38.858448 >   File "/home/p2pool/p2pool/p2pool/data.py", line 646, in check
2016-06-28 10:29:38.858476 >     share_info, gentx, other_tx_hashes2, get_share = self.generate_transaction(tracker, self.share_info['share_data'], self.header['bits'].target, self.share_info['timestamp'], self.share_info['bits'].target, self.contents['ref_merkle_link'], [(h, None) for h in other_tx_hashes], self.net, last_txout_nonce=self.contents['last_txout_nonce'])
2016-06-28 10:29:38.858513 >   File "/home/p2pool/p2pool/p2pool/data.py", line 491, in generate_transaction
2016-06-28 10:29:38.858541 >     65535*net.SPREAD*bitcoin_data.target_to_average_attempts(block_target),
2016-06-28 10:29:38.858568 >   File "/home/p2pool/p2pool/p2pool/util/memoize.py", line 28, in b
2016-06-28 10:29:38.858594 >     res = f(*args)
2016-06-28 10:29:38.858621 >   File "/home/p2pool/p2pool/p2pool/util/skiplist.py", line 44, in __call__
2016-06-28 10:29:38.858648 >     return self.finalize(sol_if, args)
2016-06-28 10:29:38.858674 >   File "/home/p2pool/p2pool/p2pool/data.py", line 739, in finalize
2016-06-28 10:29:38.858701 >     return math.add_dicts(*math.flatten_linked_list(weights_list)), total_weight, total_donation_weight
2016-06-28 10:29:38.858729 >   File "/home/p2pool/p2pool/p2pool/util/math.py", line 67, in add_dicts
2016-06-28 10:29:38.858760 >     for k, v in d.iteritems():
2016-06-28 10:29:38.858787 >   File "/home/p2pool/p2pool/p2pool/main.py", line 313, in <lambda>
2016-06-28 10:29:38.858814 >     sys.stderr.write, 'Watchdog timer went off at:\n' + ''.join(traceback.format_stack())
2016-06-28 10:29:38.883268 > ########################################
2016-06-28 10:29:38.883356 > >>> Warning: LOST CONTACT WITH BITCOIND for 3.9 minutes! Check that it isn't frozen or dead!
2016-06-28 10:29:38.883392 > ########################################
2016-06-28 10:29:38.883427 P2Pool: 17323 shares in chain (22532 verified/22532 total) Peers: 3 (2 incoming)
2016-06-28 10:29:38.883452  Local: 20604GH/s in last 10.0 minutes Local dead on arrival: ~1.0% (0-3%) Expected time to share: 35.6 minutes

That indicates that p2pool was working on subtasks inside generate_transaction at the moment that the watchdog timer went off. The watchdog is there to notice when something is taking too long and to spit out information on where it was stalled. Helpful in this case.

Specifically, it looks like the add_dicts function might be inefficient. The for k, v in d.iteritems() line looks like it could be part of an O(n^2) issue. I'll take a look at the context and see what I can find.
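To illustrate the kind of quadratic behavior I'm worried about (a simplified sketch, not the actual p2pool code): if the per-share weight dicts get merged by rebuilding the accumulated result each time, the total work grows with the square of the number of entries, whereas accumulating in place into a single dict stays roughly linear.

Code:
# Simplified sketch of the suspected O(n^2) pattern vs. a linear alternative.
# Illustrative only; this is not the actual p2pool add_dicts code.

def add_dicts_rebuilding(dicts):
    # Quadratic-ish: every merge copies everything accumulated so far.
    result = {}
    for d in dicts:
        merged = dict(result)
        for k, v in d.items():
            merged[k] = merged.get(k, 0) + v
        result = merged
    return result

def add_dicts_inplace(dicts):
    # Roughly linear in the total number of entries: accumulate into one dict.
    result = {}
    for d in dicts:
        for k, v in d.items():
            result[k] = result.get(k, 0) + v
    return result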

In the meantime, if your nodes are stalling, try running p2pool under pypy. It seems to help nodes catch up.

Instructions for setting up pypy to run p2pool can be found here. Note that pypy uses a lot more memory, and performance with pypy seems to degrade after a few days, so it's probably a good idea to only use pypy as a temporary measure.
276  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: June 28, 2016, 11:17:57 PM
I think what we're dealing with is not a single >= 1,000,000 byte share, but instead a single sharereq that lists multiple shares which total over 1,000,000 bytes in size. When you request shares, you request them in batches. Here's some of the code for that:

Code:
                print 'Requesting parent share %s from %s' % (p2pool_data.format_hash(share_hash), '%s:%i' % peer.addr)
                try:
                    shares = yield peer.get_shares(
                        hashes=[share_hash],
                        parents=random.randrange(500), # randomize parents so that we eventually get past a too large block of shares
                        stops=list(set(self.node.tracker.heads) | set(
                            self.node.tracker.get_nth_parent_hash(head, min(max(0, self.node.tracker.get_height_and_last(head)[0] - 1), 10)) for head in self.node.tracker.heads
                        ))[:100],
                    )

Note the "# randomize parents so that we eventually get past a too large block of shares" comment there. That looks to me like one heck of a hack. It seems that p2pool does not do anything intelligent to make sure that a bundle of shares does not exceed the limit, or to fail cleanly when it does. If I understand it correctly, this is resulting in repeated requests for too large a bundle of shares (which fail), followed eventually by a request for a large bundle that does not exceed the limit. This bundle then takes a while to process, causing the node to hang for a while and eventually lose connections to its peers. Maybe.

Or maybe there's another reason why the shares are taking so long. I'm trying pypy right now to see if that reduces the share processing time. Doesn't seem to help enough. Next step is to run cProfile and see what's taking so long.
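The cProfile step itself is nothing fancy; roughly this, assuming the node is started via run_p2pool.py (substitute pypy for python when profiling under pypy):

Code:
# Profile the node for a while, writing the results to disk:
#   python -m cProfile -o p2pool.prof run_p2pool.py <usual arguments>
# Then summarize the hot spots:
import pstats

stats = pstats.Stats("p2pool.prof")
stats.sort_stats("time")    # order by internal (self) time
stats.print_stats(100)      # show the top 100 entries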
277  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: June 28, 2016, 10:31:56 PM
That would make sense, except for one thing - p2pool shares only have a limited lifetime (less than two days?), and I've been seeing this message for much longer than that. Shouldn't the dud share eventually fall off the sharechain anyway? If that's correct, it means there's some scenario where oversized blocks can be created frequently.
I don't think it's oversize blocks. I think it's just blocks that are closer to the 1000000 byte limit than p2pool was designed to handle. IIRC, Bitcoin Classic did away with the "sanity limit" of 990,000 bytes (or something like that) and will create blocks up to 1,000,000 bytes exactly. So this might just be a bug from not allowing a few extra bytes for p2pool metadata with a 1,000,000 byte block.

I know that I recently changed my bitcoind settings to use a lower minrelaytxfee setting and a higher max

Or it could be something else.
278  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: June 28, 2016, 10:21:38 PM
Two of my three nodes are having performance issues. As far as I can tell, the third one (:9336) is serving up the shares that are tripping up the other two. I'm going to shut off my working node to see if that helps the two non-working nodes work. If it helps, I'll leave :9336 offline long enough for the other two nodes and the rest of the p2pool network to overtake it in the share chain and, hopefully, purge out the naughty shares.

Edit: nope, didn't help.
279  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: June 28, 2016, 10:10:24 PM
Note: accepting messages over 1000000 bytes constitutes a network hard fork. Don't do this at home without approval from forrestv. I'm just doing this for testing, and afterwards I will purge my share history on that node.
280  Bitcoin / Pools / Re: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: June 28, 2016, 10:02:48 PM
My guess is that what's happening is that someone mined a share that is very close to the 1 MB limit, like 999980 bytes or something like that, and there's enough p2pool metadata overhead to push the share message over 1000000 bytes, which is the per-message limit defined in p2pool/bitcoin/p2p:17. This is triggering a TooLong exception which prevents that share (and any subsequent shares) from being processed and added to the share chain. Later, the node notices that it's missing the share (which it hasn't blacklisted or marked as invalid), and tries to download it again. Rinse, repeat.
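For clarity, here's roughly the shape of the check I'm describing (a simplified illustration, not the actual code in p2pool/bitcoin/p2p): the receiver looks at the declared payload length of an incoming message and raises TooLong if it exceeds the limit, so the whole bundle of shares is dropped rather than processed.

Code:
# Simplified illustration of a per-message payload length check.
# Not the actual p2pool code; just the shape of the behavior described above.

MAX_PAYLOAD_LENGTH = 1000000  # the per-message limit in question

class TooLong(Exception):
    pass

def check_payload_length(length, limit=MAX_PAYLOAD_LENGTH):
    # A share bundle whose serialized size exceeds the limit is never
    # processed; the node later notices the missing shares and re-requests them.
    if length > limit:
        raise TooLong("payload of %i bytes exceeds limit of %i bytes" % (length, limit))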

A quick hack to get past this might be to increase the 1000000 in p2pool/bitcoin/p2p:17 to something bigger. As far as I can tell, the downside would be that people who mine excessively large blocks (e.g. with Bitcoin Unlimited) that would get rejected by the network might have their shares accepted by p2pool, which would amount to a sort of block withholding attack. I don't think that's too big of a concern, so I'm going to try raising the limit on one of my nodes and see if it resolves the issue.