Bitcoin Forum
April 27, 2024, 12:06:04 PM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
Author Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool  (Read 2591625 times)
jtoomim
Hero Member
*****
Offline Offline

Activity: 818
Merit: 1006


View Profile WWW
June 28, 2016, 10:02:48 PM
 #14621

My guess is that what's happening is that someone mined a share that is very close to the 1 MB limit, like 999980 bytes or something like that, and there's enough p2pool metadata overhead to push the share message over 1000000 bytes, which is the per-message limit defined in p2pool/bitcoin/p2p:17. This is triggering a TooLong exception which prevents that share (and any subsequent shares) from being processed and added to the share chain. Later, the node notices that it's missing the share (which it hasn't blacklisted or marked as invalid), and tries to download it again. Rinse, repeat.

A quick hack to get past this might be to increase the 1000000 in p2pool/bitcoin/p2p:17 to something bigger. As far as I can tell, raising it would mean that people who mine excessively large blocks (e.g. with Bitcoin Unlimited) that get rejected by the network might still have their shares accepted by p2pool, which would amount to a sort of block withholding attack. I don't think that's too big of a concern, so I'm going to try to raise the limit on one of my nodes and see if it resolves the issue.
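For illustration, here is a minimal sketch of the kind of length-prefixed framing check involved. Only the TooLong exception name and the 1000000-byte limit come from p2pool; frame_message and MAX_PAYLOAD are hypothetical names, not the project's actual code:

```python
import struct

# Mirrors the per-message limit discussed above (p2pool/bitcoin/p2p:17).
MAX_PAYLOAD = 1000000

class TooLong(Exception):
    pass

def frame_message(payload: bytes) -> bytes:
    """Prefix a payload with a 4-byte length, refusing oversized messages."""
    if len(payload) > MAX_PAYLOAD:
        raise TooLong('payload is %d bytes, limit is %d'
                      % (len(payload), MAX_PAYLOAD))
    return struct.pack('<I', len(payload)) + payload
```

A share serialized near 999,980 bytes plus metadata tips over this check, and the exception aborts processing of the whole message.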

Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power.
http://Toom.im
jtoomim
Hero Member
*****
Offline Offline

Activity: 818
Merit: 1006


View Profile WWW
June 28, 2016, 10:10:24 PM
 #14622

Note: accepting messages over 1000000 bytes constitutes a network hard fork. Don't do this at home without approval from forrestv. I'm just doing this for testing, and afterwards I will purge my share history on that node.

Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power.
http://Toom.im
KyrosKrane
Sr. Member
****
Offline Offline

Activity: 295
Merit: 250


View Profile WWW
June 28, 2016, 10:18:07 PM
 #14623

At the time when the giant amount of transactions go through BTC network, many nodes lose their connection with the daemon.

I understand and agree with the issue you've opened, but I'm curious about this statement. Did you mean "through the P2Pool network" rather than "BTC network"? The number of shares in the P2Pool sharechain has nothing to do with how many transactions are being processed by the bitcoind daemon at any given moment.

Tips and donations: 1KyrosREGDkNLp1rMd9wfVwfkXYHTd6j5U  |  BTC P2Pool node: p2pool.kyros.info:9332
KyrosKrane
Sr. Member
****
Offline Offline

Activity: 295
Merit: 250


View Profile WWW
June 28, 2016, 10:20:59 PM
 #14624

My guess is that what's happening is that someone mined a share that is very close to the 1 MB limit, like 999980 bytes or something like that, and there's enough p2pool metadata overhead to push the share message over 1000000 bytes, which is the per-message limit defined in p2pool/bitcoin/p2p:17. This is triggering a TooLong exception which prevents that share (and any subsequent shares) from being processed and added to the share chain. Later, the node notices that it's missing the share (which it hasn't blacklisted or marked as invalid), and tries to download it again. Rinse, repeat.

That would make sense, except for one thing: p2pool shares only have a limited lifetime (less than two days?), and I've been seeing this message for much longer than that. Shouldn't the dud share eventually fall off the sharechain anyway? If that's correct, it means there's some scenario where oversized shares can be created frequently.
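As a back-of-the-envelope check, assuming a 30-second target share period and a chain length of 17,280 shares (both figures are assumptions for this sketch), a share would actually stay in the chain for about six days, not two:

```python
# Rough share-lifetime arithmetic; SHARE_PERIOD and CHAIN_LENGTH are
# assumed values, not read from p2pool's configuration.
SHARE_PERIOD = 30        # seconds per share (assumed)
CHAIN_LENGTH = 17280     # shares kept in the chain (assumed)

lifetime_days = SHARE_PERIOD * CHAIN_LENGTH / 86400.0
print(lifetime_days)  # -> 6.0
```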

Tips and donations: 1KyrosREGDkNLp1rMd9wfVwfkXYHTd6j5U  |  BTC P2Pool node: p2pool.kyros.info:9332
jtoomim
Hero Member
*****
Offline Offline

Activity: 818
Merit: 1006


View Profile WWW
June 28, 2016, 10:21:38 PM
Last edit: June 28, 2016, 10:32:09 PM by jtoomim
 #14625

Two of my three nodes are having performance issues. As far as I can tell, the third one (:9336) is serving up the shares that are tripping up the other two. I'm going to shut off my working node to see if that helps the two non-working nodes work. If it helps, then I'll leave :9336 offline long enough for the other two nodes and the rest of the p2pool network to overtake it in the share chain and, hopefully, purge out the naughty shares.

Edit: nope, didn't help.

Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power.
http://Toom.im
jtoomim
Hero Member
*****
Offline Offline

Activity: 818
Merit: 1006


View Profile WWW
June 28, 2016, 10:31:56 PM
 #14626

That would make sense, except for one thing - p2pool shares only have a limited lifetime (less than two days?), and I've been seeing this message for much longer than that. Shouldn't the dud share eventually fall off the sharechain anyway? If that's correct, it means there's some scenario where oversized blocks can be created frequently.
I don't think it's oversize blocks. I think it's just blocks that are closer to the 1000000-byte limit than p2pool was designed to handle. IIRC, Bitcoin Classic did away with the "sanity limit" of 990,000 bytes (or something like that) and will create blocks of up to 1,000,000 bytes exactly. So this might just be a bug from not allowing a few extra bytes of p2pool metadata on top of a 1,000,000-byte block.

I know that I recently changed my bitcoind settings to use a lower minrelaytxfee setting and a higher max

Or it could be something else.

Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power.
http://Toom.im
jtoomim
Hero Member
*****
Offline Offline

Activity: 818
Merit: 1006


View Profile WWW
June 28, 2016, 11:17:57 PM
 #14627

I think what we're dealing with is not a single >= 1,000,000 byte share, but instead a single sharereq that lists multiple shares which total over 1,000,000 bytes in size. When you request shares, you request them in batches. Here's some of the code for that:

Code:
                print 'Requesting parent share %s from %s' % (p2pool_data.format_hash(share_hash), '%s:%i' % peer.addr)
                try:
                    shares = yield peer.get_shares(
                        hashes=[share_hash],
                        parents=random.randrange(500), # randomize parents so that we eventually get past a too large block of shares
                        stops=list(set(self.node.tracker.heads) | set(
                            self.node.tracker.get_nth_parent_hash(head, min(max(0, self.node.tracker.get_height_and_last(head)[0] - 1), 10)) for head in self.node.tracker.heads
                        ))[:100],
                    )

Note the "# randomize parents so that we eventually get past a too large block of shares" comment there. That looks to me like one heck of a hack. It seems that p2pool does not do anything intelligent to make sure that a bundle of shares does not exceed the limit, or to fail cleanly when it does. If I understand it correctly, this is resulting in repeated requests for too large a bundle of shares (which fail), followed eventually by a request for a large bundle that does not exceed the limit. This bundle then takes a while to process, causing the node to hang for a while and eventually lose connections to its peers. Maybe.
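A size-aware alternative to the randomization hack would be to cap each batch by cumulative serialized size before sending. A rough sketch (hypothetical names, not p2pool's actual code):

```python
# Illustrative size-aware batching: group serialized shares so that no
# batch exceeds the per-message limit. batch_shares is a hypothetical
# helper, not part of p2pool.
MAX_MESSAGE = 1000000

def batch_shares(serialized_shares, limit=MAX_MESSAGE):
    """Yield lists of serialized shares whose total size stays within limit."""
    batch, size = [], 0
    for share in serialized_shares:
        # Start a new batch rather than exceed the message limit.
        if size + len(share) > limit and batch:
            yield batch
            batch, size = [], 0
        batch.append(share)
        size += len(share)
    if batch:
        yield batch
```

With something like this, a request for 500 parents could never produce a single response over the limit, so the TooLong failure mode would not arise.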

Or maybe there's another reason why the shares are taking so long. I'm trying pypy right now to see if that reduces the share processing time. Doesn't seem to help enough. Next step is to run cProfile and see what's taking so long.

Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power.
http://Toom.im
jtoomim
Hero Member
*****
Offline Offline

Activity: 818
Merit: 1006


View Profile WWW
June 28, 2016, 11:49:42 PM
 #14628

Looks like most of the stalling was in p2pool/data.py:439(generate_transaction):

Code:
         1158218988 function calls (1142867929 primitive calls) in 1003.809 seconds

   Ordered by: internal time
   List reduced from 3782 to 100 due to restriction <100>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
4082057/4082056  679.103    0.000  679.103    0.000 {method 'poll' of 'select.epoll' objects}
     6046   23.784    0.004   53.010    0.009 p2pool/data.py:439(generate_transaction)
 15805300   22.434    0.000   27.909    0.000 p2pool/util/pack.py:221(write)
 13644452   21.269    0.000   21.269    0.000 {_hashlib.openssl_sha256}
 13644451   14.189    0.000   14.189    0.000 {method 'digest' of 'HASH' objects}
   844076   13.670    0.000   19.192    0.000 p2pool/util/math.py:64(add_dicts)
  9104192   12.518    0.000   15.279    0.000 p2pool/util/pack.py:215(read)
Of about 15 minutes total operation time, nearly 1 minute was inside generate_transaction or functions called inside generate_transaction. I believe that most of that time was in verifying one batch of 211 shares. This was running via pypy. When run via stock CPython, I think it takes much longer.
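The table above is standard cProfile/pstats output. A profile in the same format (sorted by internal time, restricted to the top entries) can be produced along these lines; the busy function here is just a stand-in workload:

```python
import cProfile
import io
import pstats

def busy():
    # Stand-in workload to give the profiler something to measure.
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('tottime').print_stats(5)  # top 5 by internal time
print(stream.getvalue())
```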

I also got this in my data/bitcoin/log file:

Code:
2016-06-28 10:25:46.131180 Processing 91 shares from 10.0.1.3:36464...
2016-06-28 10:29:38.855212 ... done processing 91 shares. New: 25 Have: 22532/~17280
2016-06-28 10:29:38.855714 Requesting parent share 01582c80 from 10.0.1.3:47276
2016-06-28 10:29:38.856757 > Watchdog timer went off at:

... boring stuff deleted ...

2016-06-28 10:29:38.858448 >   File "/home/p2pool/p2pool/p2pool/data.py", line 646, in check
2016-06-28 10:29:38.858476 >     share_info, gentx, other_tx_hashes2, get_share = self.generate_transaction(tracker, self.share_info['share_data'], self.header['bits'].target, self.share_info['timestamp'], self.share_info['bits'].target, self.contents['ref_merkle_link'], [(h, None) for h in other_tx_hashes], self.net, last_txout_nonce=self.contents['last_txout_nonce'])
2016-06-28 10:29:38.858513 >   File "/home/p2pool/p2pool/p2pool/data.py", line 491, in generate_transaction
2016-06-28 10:29:38.858541 >     65535*net.SPREAD*bitcoin_data.target_to_average_attempts(block_target),
2016-06-28 10:29:38.858568 >   File "/home/p2pool/p2pool/p2pool/util/memoize.py", line 28, in b
2016-06-28 10:29:38.858594 >     res = f(*args)
2016-06-28 10:29:38.858621 >   File "/home/p2pool/p2pool/p2pool/util/skiplist.py", line 44, in __call__
2016-06-28 10:29:38.858648 >     return self.finalize(sol_if, args)
2016-06-28 10:29:38.858674 >   File "/home/p2pool/p2pool/p2pool/data.py", line 739, in finalize
2016-06-28 10:29:38.858701 >     return math.add_dicts(*math.flatten_linked_list(weights_list)), total_weight, total_donation_weight
2016-06-28 10:29:38.858729 >   File "/home/p2pool/p2pool/p2pool/util/math.py", line 67, in add_dicts
2016-06-28 10:29:38.858760 >     for k, v in d.iteritems():
2016-06-28 10:29:38.858787 >   File "/home/p2pool/p2pool/p2pool/main.py", line 313, in <lambda>
2016-06-28 10:29:38.858814 >     sys.stderr.write, 'Watchdog timer went off at:\n' + ''.join(traceback.format_stack())
2016-06-28 10:29:38.883268 > ########################################
2016-06-28 10:29:38.883356 > >>> Warning: LOST CONTACT WITH BITCOIND for 3.9 minutes! Check that it isn't frozen or dead!
2016-06-28 10:29:38.883392 > ########################################
2016-06-28 10:29:38.883427 P2Pool: 17323 shares in chain (22532 verified/22532 total) Peers: 3 (2 incoming)
2016-06-28 10:29:38.883452  Local: 20604GH/s in last 10.0 minutes Local dead on arrival: ~1.0% (0-3%) Expected time to share: 35.6 minutes

That indicates that p2pool was working on subtasks inside generate_transaction at the moment that the watchdog timer went off. The watchdog is there to notice when something is taking too long and to spit out information on where it was stalled. Helpful in this case.
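A watchdog of that general shape can be sketched as follows. This mirrors the idea (a background monitor that dumps the main thread's stack when it stalls), not p2pool's actual implementation in main.py:

```python
import sys
import threading
import time
import traceback

def start_watchdog(timeout=5.0):
    """Return a check-in function; if it isn't called for `timeout`
    seconds, the main thread's current stack is dumped to stderr."""
    main_id = threading.main_thread().ident
    last = [time.monotonic()]

    def checkin():
        last[0] = time.monotonic()

    def monitor():
        while True:
            time.sleep(timeout / 2)
            if time.monotonic() - last[0] > timeout:
                frame = sys._current_frames().get(main_id)
                if frame is not None:
                    sys.stderr.write('Watchdog timer went off at:\n'
                                     + ''.join(traceback.format_stack(frame)))
                last[0] = time.monotonic()  # avoid repeated dumps

    threading.Thread(target=monitor, daemon=True).start()
    return checkin
```

The event loop would call checkin() each iteration; a long-running task like generate_transaction prevents that, and the monitor prints exactly where it is stuck.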

Specifically, it looks like the add_dicts function might be inefficient. The for k, v in d.iteritems() line sounds like it might be an O(n^2) issue. I'll take a look at the context and see what I can find.
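For reference, a single-pass key-wise sum looks like the sketch below. p2pool's actual add_dicts may already be written this way, in which case the quadratic cost would come from how often it is called over overlapping weight lists rather than from the function itself:

```python
# Linear key-wise sum: each key is touched once per occurrence across
# all inputs. Repeatedly merging pairs of growing dicts instead would
# re-touch all previously merged keys, which is where O(n^2) creeps in.
def add_dicts(*dicts):
    """Sum values key-wise across many dicts in a single pass."""
    result = {}
    for d in dicts:
        for k, v in d.items():
            result[k] = result.get(k, 0) + v
    return result
```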

In the meantime, if your nodes are stalling, try running p2pool using pypy. It seems to help nodes get caught up.

Instructions for setting up pypy to run p2pool can be found here. Note that pypy uses a lot more memory, and performance with pypy seems to degrade after a few days, so it's probably a good idea to only use pypy as a temporary measure.

Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power.
http://Toom.im
aib
Member
**
Offline Offline

Activity: 135
Merit: 11

Advance Integrated Blockchains (AIB)


View Profile WWW
June 29, 2016, 05:17:39 AM
 #14629

Looks like most of the stalling was in p2pool/data.py:439(generate_transaction): [...]


Do you think implementing P2Pool in Golang would improve its efficiency?


mememiner
Newbie
*
Offline Offline

Activity: 1
Merit: 0


View Profile
June 29, 2016, 08:01:20 AM
 #14630

Hello, just my BTC0.02.

Instead of diving straight into the code, I first leaned back and started thinking a bit.
Since forrest said this is a hard fork, then quite possibly the switchover mechanism is ... well ... failing.

Code:
2016-06-29 06:32:58.539260 Switchover imminent. Upgraded: 95.437% Threshold: 95.000%
2016-06-29 06:32:58.743562 Switchover imminent. Upgraded: 95.437% Threshold: 95.000%
2016-06-29 06:34:06.060484 Switchover imminent. Upgraded: 95.418% Threshold: 95.000%
2016-06-29 06:34:13.480511 Switchover imminent. Upgraded: 95.418% Threshold: 95.000%
2016-06-29 06:40:43.738976 Switchover imminent. Upgraded: 95.351% Threshold: 95.000%
2016-06-29 06:40:43.830259 Switchover imminent. Upgraded: 95.351% Threshold: 95.000%
2016-06-29 06:42:48.230374 Switchover imminent. Upgraded: 95.416% Threshold: 95.000%
2016-06-29 06:42:48.327881 Switchover imminent. Upgraded: 95.416% Threshold: 95.000%
2016-06-29 06:43:18.880773 Switchover imminent. Upgraded: 95.333% Threshold: 95.000%
2016-06-29 06:43:18.988181 Switchover imminent. Upgraded: 95.333% Threshold: 95.000%
(An upgrade that counts down?)

I could still see v15 clients connecting and sending me v15 shares; somehow these get dealt with in a most inefficient way.
What seemed strangest was that suddenly there were 22,000+ shares instead of the normal 17,300 or so.



So I went and edited ./p2pool/networks/bitcoin.py and changed 1500 to 1600 on the line where it says:
Code:
MINIMUM_PROTOCOL_VERSION = 1500
(Don't want no nasty v15 shares to deal with.)

After removing bitcoin.pyc and restarting, p2pool is chugging along nicely again.

(I also removed my local sharechain and let it rebuild, but I don't know if that was necessary.)

Good luck!
KorbinDallas
Newbie
*
Offline Offline

Activity: 55
Merit: 0


View Profile
June 29, 2016, 06:52:06 PM
 #14631

Quick question to my P2Pool Jedis:

Just upgraded to v16 and experienced many of the cutover problems discussed already....
But what is with the pool rate?  Check out the screenshot, only 42 TH/s?  I used the Windows version provided by forrestv.

Any ideas?

http://imgur.com/dPgieNm
squidicuz
Newbie
*
Offline Offline

Activity: 58
Merit: 0


View Profile
June 29, 2016, 10:30:08 PM
 #14632

I submitted a PR to fix the above issues: https://github.com/p2pool/p2pool/pull/313/files

The fix is working for me and my node is no longer experiencing connection issues with peers.


This happens every time we hardfork.  Perhaps we should implement a better method to do the upgrade?
KorbinDallas
Newbie
*
Offline Offline

Activity: 55
Merit: 0


View Profile
June 30, 2016, 12:53:58 AM
 #14633

As my profile tag indicates, I'm clearly a newbie. So I just wanted to start by saying thanks for the help; they weren't kidding when they said "the best part about Bitcoin is the community".

So, about the hardfork, here's the newbie question: how can I tell which fork is the right one?

A little explanation:
- P2Pool.in: Downloaded Windows binary version 16.0; installs and runs fine. P2Pool pool rate: approx. 140 TH/s
- Github master branch: Downloaded and installed the master branch code from the P2Pool Github. Includes the 16.0 update (I assume). P2Pool pool rate: approx. 900 TH/s.

- Why such a big variance between these two versions of P2Pool? Which one is best?
jtoomim
Hero Member
*****
Offline Offline

Activity: 818
Merit: 1006


View Profile WWW
June 30, 2016, 02:25:16 AM
 #14634

- P2Pool.in: Downloaded Windows binary version 16.0; installs and runs fine. P2Pool pool rate: approx. 140 TH/s
- Github master branch: Downloaded and installed the master branch code from the P2Pool Github. Includes the 16.0 update (I assume). P2Pool pool rate: approx. 900 TH/s.
I don't use Windows for my mining servers, so I have no familiarity with the p2pool.in binaries. The github master branch is the one true source for p2pool code, so I would recommend using that.

I don't see why the different software distributions would connect you to a different network with a different hashrate. It's possible that you just weren't able to acquire enough peers the first time you tried, and so you didn't get to see the whole network.

Hosting bitcoin miners for $65 to $80/kW/month on clean, cheap hydro power.
http://Toom.im
KyrosKrane
Sr. Member
****
Offline Offline

Activity: 295
Merit: 250


View Profile WWW
June 30, 2016, 05:18:49 AM
 #14635

Is it possible we have a bona fide fork in the p2pool share chain? That is, someone generated a share that was incompatible with v16, causing v16 nodes to reject it and fork away. That v15 chain continued building, and is the reason we're seeing massive influx of invalid shares on v16 nodes. New pools starting up now have a (small) chance of picking up the old v15 chain, which causes the v15 shares to get propagated to the network. As independent nodes connect to both v15 and v16 nodes, the poor performance we see starts to happen - massive lumps of shares appear, requiring huge CPU time to process, since they don't integrate neatly with the existing share chain. In the end, the incompatible shares are rejected, but they still take time to handle. (To be clear, this is just a guess at what's going on in terms of the big picture. Jtoomim seems to have a handle on the technical aspects of why the performance is poor.)

Tips and donations: 1KyrosREGDkNLp1rMd9wfVwfkXYHTd6j5U  |  BTC P2Pool node: p2pool.kyros.info:9332
squidicuz
Newbie
*
Offline Offline

Activity: 58
Merit: 0


View Profile
June 30, 2016, 05:49:06 AM
 #14636

Is it possible we have a bona fide fork in the p2pool share chain? That is, someone generated a share that was incompatible with v16, causing v16 nodes to reject it and fork away. That v15 chain continued building, and is the reason we're seeing massive influx of invalid shares on v16 nodes. New pools starting up now have a (small) chance of picking up the old v15 chain, which causes the v15 shares to get propagated to the network. As independent nodes connect to both v15 and v16 nodes, the poor performance we see starts to happen - massive lumps of shares appear, requiring huge CPU time to process, since they don't integrate neatly with the existing share chain. In the end, the incompatible shares are rejected, but they still take time to handle. (To be clear, this is just a guess at what's going on in terms of the big picture. Jtoomim seems to have a handle on the technical aspects of why the performance is poor.)

That is pretty much what is going on. The current upgrade process is that once the majority of the network has upgraded to the next version, a new version is released that bans the old peers and restores performance to the majority network. Currently the remaining old-version peers are spamming the majority network with invalid shares.

We need an automatic implementation of this function, as currently it seems to be manual. There should be a way to support both versions of peers until the majority has reached and maintained the consensus threshold, after which the older peers are banned from the network and the new version is fully enforced. That would prevent what happens, and seems to happen, every hardfork. :s
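The scheme described could look roughly like this hypothetical gate, where enforcement only begins after the upgraded fraction has held above the threshold for a sustained stretch (all names and parameters here are illustrative, not p2pool's):

```python
# Hypothetical upgrade gate: old-version peers stay accepted until the
# upgraded fraction holds above `threshold` for `sustain` consecutive
# observations; any dip below the threshold resets the countdown.
def upgrade_gate(threshold=0.95, sustain=10):
    streak = [0]

    def observe(upgraded_fraction):
        """Feed the latest upgraded fraction; True once enforcement starts."""
        if upgraded_fraction >= threshold:
            streak[0] += 1
        else:
            streak[0] = 0  # a dip resets the countdown
        return streak[0] >= sustain

    return observe
```

This would also explain the "upgrade that counts down" log lines: the measured fraction fluctuates around the threshold as the sampling window slides, so a one-shot trigger flaps where a sustained-streak rule would not.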


The fix has been merged into master.  Upgrade to the latest git version to restore performance to your node.
veqtrus
Member
**
Offline Offline

Activity: 107
Merit: 10


View Profile WWW
June 30, 2016, 10:54:58 PM
 #14637

We need an automatic implementation of this function, as currently it seems to be manual. There should be a way to support both versions of peers until the majority has reached and maintained the consensus threshold, after which the older peers are banned from the network and the new version is fully enforced. That would prevent what happens, and seems to happen, every hardfork. :s

https://github.com/p2pool/p2pool/pull/314

P2Pool donation button | Bitrated user: veqtrus.
KorbinDallas
Newbie
*
Offline Offline

Activity: 55
Merit: 0


View Profile
July 01, 2016, 01:19:19 AM
 #14638

Thanks @squidicuz  Wink Smiley
imap30
Newbie
*
Offline Offline

Activity: 17
Merit: 0


View Profile
July 01, 2016, 09:17:41 AM
 #14639

I am trying to get p2pool working on my computer. I am having the following problem:

When I load the user interface I get these errors:

https://s31.postimg.org/5gbhvmowr/p2pool_problem_1.png

My user interface also looks like this:

https://s31.postimg.org/mfpyr77wb/p2pool_problem_2.png

1) I have NO CLUE how to use python. The installation instructions are completely useless as far as I am concerned. The setup guide obviously assumes I know more about python/p2pool than I know.

2) the instructions need to list an exact file name so that when I go to download the files needed, I download the correct file

3) the install instructions are not clear enough for someone that is not a developer to get p2pool working

Getting p2pool working is nothing short of reinforcing my belief that linux users have an ego they need to stroke by making things hard to use.

You guys need to make it your number 1 priority to make the install of p2pool 1-click. Thanks for wasting my time...

in2tactics
Hero Member
*****
Offline Offline

Activity: 578
Merit: 501



View Profile
July 01, 2016, 10:52:17 AM
 #14640

I am trying to get p2pool working on my computer. I am having the following problem: [...]

You should be using git to clone both bitcoin and p2pool. You should not need to do anything special with python other than running the script. From start to finish, I used the following on a clean install of Debian Linux:

Code:
# as root, type:
apt-get install sudo
adduser YOURUSERNAME sudo

# log in as YOURUSERNAME
sudo apt-get install git
sudo apt-get install build-essential libtool autotools-dev automake pkg-config libssl-dev libevent-dev bsdmainutils
sudo apt-get install libboost-all-dev
sudo apt-get install libminiupnpc-dev

git clone https://github.com/bitcoin/bitcoin

cd bitcoin
./autogen.sh
./configure --disable-wallet --without-gui
make
sudo make install

bitcoind -daemon

sudo apt-get install python-twisted python-argparse
sudo apt-get install curl
sudo apt-get install ncurses-dev
sudo apt-get install libffi-dev

cd ~/

git clone https://github.com/forrestv/p2pool

cd p2pool
make

python ~/p2pool/run_p2pool.py --give-author 0 --fee 0 --address YOURBITCOINPOOLADDRESSHERE

You should probably also make changes to your bitcoin.conf.

e.g.

Code:
daemon=1
server=1
rpcuser=bitcoinrpc
rpcpassword=YOURRPCPASSWORD
rpcallowip=127.0.0.1
rpcport=8332
blockmaxsize=1000000
mintxfee=0.00001
minrelaytxfee=0.00001

Current HW: 2x Apollo
Retired HW: 3x 2PAC, 3x Moonlander 2, 2x AntMiner S7-LN, 5x AntMiner U1, 2x ASICMiner Block Erupter Cube, 4x AntMiner S3, 4x AntMiner S1, GAW Black Widow, and ZeusMiner Thunder X6