jedimstr
|
|
February 15, 2014, 06:46:13 AM |
|
Quote:
Hi. I'd like to ask about latency. Why would going below 0.2s hurt your income and everyone else's on P2Pool? What is the reason for that?

Quote from: gyverlb
It doesn't as such: lowering the getblocktemplate latency is a tradeoff. As explained in the guide, with the current bitcoind version, reducing this latency is done by reducing the number of transactions you include in the template. Doing so reduces the fees included in the blocks you find: you reduce your own income and everyone else's when this happens. Today this latency only has a negative effect when a new block is found by the whole network (every ~10 minutes), unless you have a very weak CPU (where CPU usage by bitcoind can slow down P2Pool). This is the same for P2Pool and every other pool: until a getblocktemplate call returns a result after a new block, the pool can only work on an empty template (no transactions). Going below 0.2s only tries to reduce this 0.2/600 = 0.03% negative impact on fee income (the block reward isn't impacted): including more transactions (which amount to ~1% of the average block value) simply brings more income to everyone, including yourself.

Continuing the discussion about the getBlockTemplate latency... I see quite a few public nodes with getBlockTemplate latencies approaching that magic 0.2 mark, but I've always struggled. My p2pool node used to be in the 1.5 to 2.0 range until about a month ago, when I tweaked it further per your guide and other suggestions on the net. That brought it down to the 0.5 to 0.7 range while trying to retain a high blockmaxsize for greater income. Lately (seemingly right when the whole Bitcoin DDoS started happening in the last week), my latencies have bumped back up to the 1.0 range with occasional spikes much higher. So let's discount that bump for now.

My real question is about the efficacy of the dbcache and datadir settings. I've upped my dbcache from the default 25 to 1000. I have quite a bit of free memory to use, but not quite enough to host the entire blockchain on a ramdrive, so datadir is probably out.

Will I see any gains in getBlockTemplate latency from upping dbcache further to, let's say, 2000, 3000, or even 5000 megabytes? The blockchain (as well as the OS and apps) is already on an SSD. I'm running OS X Mavericks and the Bitcoin-QT OMG10 0.8.5 client with the wallet disabled (waiting for the official 0.9 client to be out before I upgrade, since OMG10 already has most of what went into 0.8.6 and I can't find a binary download of the Mac 0.9 preview release). Connection is Verizon FiOS 75/35, and the Mac Pro is connected to my managed switch with dual gigabit ethernet using Link Aggregation.

Here's my bitcoin.conf (with certain parameters redacted, of course):

rpcuser=XXXXREDACTEDXXXX
rpcpassword=XXXXREDACTEDXXXX
server=1
rpcport=8332
disablewallet=1
dbcache=1000
maxconnections=26
blockmaxsize=1000000
mintxfee=0.00001
minrelaytxfee=0.00001
addnode=67.186.224.85
addnode=88.198.58.172
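For context, the 0.03% figure from the quoted explanation can be reproduced with a quick back-of-the-envelope calculation (all inputs are taken from the quote itself: ~600s average block interval and fees amounting to ~1% of block value):

```python
# Rough sketch of the latency-impact estimate from the quoted explanation.
# Assumptions (from the quote): blocks every ~600 s on average, and
# transaction fees amount to ~1% of the average block value.

BLOCK_INTERVAL_S = 600.0   # average time between blocks (~10 minutes)
LATENCY_S = 0.2            # getblocktemplate latency being discussed
FEE_SHARE = 0.01           # fees as a fraction of total block value

# Fraction of time spent mining an empty template after each new block:
empty_template_fraction = LATENCY_S / BLOCK_INTERVAL_S

# Impact on fee income only (the block subsidy is unaffected):
fee_income_impact = empty_template_fraction
# Impact on total income, since fees are only ~1% of the block value:
total_income_impact = empty_template_fraction * FEE_SHARE

print(f"{fee_income_impact:.4%} of fee income")
print(f"{total_income_impact:.6%} of total income")
```

This makes clear why the tradeoff leans toward including more transactions: the downside is ~0.03% of the fee portion only, while the upside applies to the whole ~1% fee share of every block found.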
|
|
|
|
smoothrunnings
|
|
February 15, 2014, 12:26:25 PM Last edit: February 15, 2014, 12:39:53 PM by smoothrunnings |
|
Quote from: jedimstr
[...] Connection is Verizon FiOS 75/35, and the Mac Pro is connected to my managed switch with dual gigabit ethernet using Link Aggregation. [...]
What do you expect to get out of a dual Gbit connection when your internet is only 75/35 Mbit/s? Your internet isn't anywhere near 1GbE, so there's no gain in what you have done. Download the source for Bitcoin-QT and compile it on your Mac.
|
|
|
|
jedimstr
|
|
February 15, 2014, 12:44:11 PM Last edit: February 15, 2014, 12:54:49 PM by jedimstr |
|
Quote from: smoothrunnings
What do you expect to get out of a dual Gbit connection when your internet is only 75/35 Mbit/s? Your internet isn't anywhere near 1GbE, so there's no gain in what you have done.

The dual link-aggregated connection has nothing to do with my WAN bandwidth. It was to solve the following performance issues I was having:

1. Connection starvation: My Mac Pro serves up multiple services on my LAN: p2pool, a Bitcoin node, a Namecoin node, a Stratum proxy for scrypt mining, individual miners connecting to it, an Apache reverse proxy and gateway for outside access to my network, media serving (large HD video and FLAC music), etc. Even maxing out custom TCP/IP settings for connections didn't fully solve the contention and queuing I was seeing. LAG solved all of it.

2. Round-robin / load balancing across ethernet connections: I'm now able to support the above service connections AND transcode video to my SAN box with little to no constraint. Previously, whenever I transcoded video it ground everything else to a halt, which put a damper on my p2pool mining node. And I do transcode video often. No more issues.

3. Internal LAN QoS is much easier with more connections available. I can manage my latency-defeating rules for certain traffic in a more flexible way now that I have more paths available to my main Mac Pro server. I can set priorities for various connections across my LAN and not have to worry about the traffic jam I used to get at my Mac's IP.

Link aggregation is almost never about pure bandwidth. It's about freeing up queues and load-balancing socket contention so you can support more parallel services on a single server, especially latency-averse ones.

I'd still like to see someone else answer my original questions above regarding the dbcache setting.
|
|
|
|
smoothrunnings
|
|
February 15, 2014, 02:04:11 PM |
|
Quote from: jedimstr
[...] Link aggregation is almost never about pure bandwidth. It's about freeing up queues and load-balancing socket contention so you can support more parallel services on a single server, especially latency-averse ones. [...]

Thanks for the clarification, and good luck with that, but teaming, as it's called, shouldn't be required for what you are doing if you have the right setup.
|
|
|
|
gyverlb (OP)
|
|
February 15, 2014, 05:59:46 PM |
|
Quote from: jedimstr
I'd still like to see someone else answer my original questions above regarding the dbcache setting.

I don't think the dbcache setting is interesting. Even when bitcoind/qt was purely BerkeleyDB, I didn't see any performance gains with large cache values. Now bitcoind/qt relies on LevelDB for a large part of its data, so if dbcache only affects the BerkeleyDB database it is probably even less effective.

0.2s for getblocktemplate isn't bad at all, especially with the changes of last summer which made p2pool even less dependent on getblocktemplate latency. If your efficiency is ~100%, I wouldn't change anything. If it's below that, I'd advise raising mintxfee and minrelaytxfee every 24h (maybe doubling them on each change) until you either reach the default value of 0.0001 or reach the 100% efficiency level.
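A sketch of what that schedule could look like in bitcoin.conf, starting from jedimstr's current 0.00001 (the day-by-day values are illustrative; the advice is simply to double both settings roughly every 24h and stop early once efficiency is back around 100%):

```ini
# Illustrative fee-raising schedule (doubling each day until the 0.0001
# default is reached or efficiency recovers). Always change both values
# together and restart bitcoind after each change.
#   day 1: 0.00002   day 2: 0.00004   day 3: 0.00008   day 4: 0.0001
mintxfee=0.00002
minrelaytxfee=0.00002
```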
|
|
|
|
jedimstr
|
|
February 16, 2014, 12:36:31 AM |
|
Quote from: gyverlb
[...] If your efficiency is ~100%, I wouldn't change anything. If it's below that, I'd advise raising mintxfee and minrelaytxfee every 24h (maybe doubling them on each change) until you either reach the default value of 0.0001 or reach the 100% efficiency level.

That's the thing. I'd love a getBlockTemplate latency of 0.2s. Unfortunately, my node has lately been above 1.0s. That's what I need help with (if tuning it helps at all).
|
|
|
|
jedimstr
|
|
February 16, 2014, 12:39:52 AM |
|
Quote from: smoothrunnings
Thanks for the clarification, and good luck with that, but teaming, as it's called, shouldn't be required for what you are doing if you have the right setup.

The "right setup" is exactly what this is, especially when dealing with multi-stream uncompressed video to and from my SAN for edits (#2 above). Never mind. Obviously you didn't read anything I wrote.
|
|
|
|
gyverlb (OP)
|
|
February 16, 2014, 02:26:17 AM |
|
Quote from: jedimstr
That's the thing. I'd love a getBlockTemplate latency of 0.2s. Unfortunately, my node has lately been above 1.0s.

If you don't have low efficiency, getBlockTemplate being above 1.0s isn't a problem in itself. The advice I gave above should lower your getBlockTemplate latency: the minimum fees are used to select which transactions are kept in your bitcoind's memory and which ones are included in the block template given to p2pool. If you raise these fees just a bit, your node won't have to do as much work and your latency will drop. If you raise them to 0.0001, you can expect <0.2s given the recent transaction volume.

Note that if you target a latency, you'll have to change these values whenever more or fewer transactions are waiting to be included in a block (which is not really doable, as these changes happen unpredictably). This is why I advise leaving them alone if the efficiency is good: the target should be good income and good behavior on the network, not an arbitrary latency figure.
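The filtering behavior gyverlb describes can be sketched roughly as follows. This is a simplified, hypothetical model (real bitcoind does this in C++ with priority and ancestor rules on top), but it shows why a higher fee floor means fewer transactions in the template and therefore less work per getblocktemplate call:

```python
# Minimal sketch: a minimum-fee floor shrinks the block template, so
# building (and serializing) it is cheaper. That is why raising mintxfee
# lowers getblocktemplate latency at the cost of some fee income.

def build_template(mempool, min_fee_btc, max_block_size):
    """Keep fee-paying txs above the floor until the size cap is hit."""
    selected, size = [], 0
    # Highest fee rate first (a real node also weighs priority, age, etc.)
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"], reverse=True):
        if tx["fee"] < min_fee_btc:
            continue  # below the floor: excluded from the template
        if size + tx["size"] > max_block_size:
            continue  # would overflow the block size limit
        selected.append(tx)
        size += tx["size"]
    return selected

# Toy mempool (fees in BTC, sizes in bytes):
mempool = [
    {"fee": 0.0001,  "size": 250},
    {"fee": 0.00002, "size": 300},
    {"fee": 0.00001, "size": 500},
]

# With a 0.00001 floor all three txs qualify; at the 0.0001 default only one.
low = build_template(mempool, 0.00001, 1_000_000)
high = build_template(mempool, 0.0001, 1_000_000)
```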
|
|
|
|
jedimstr
|
|
February 16, 2014, 03:23:31 AM |
|
Quote from: gyverlb
[...] This is why I advise leaving them alone if the efficiency is good: the target should be good income and good behavior on the network, not an arbitrary latency figure.

Thanks. Efficiency has been bouncing around 100% but is at 94% right now. I'll try mintxfee and minrelaytxfee at 0.00006 and tweak from there.
|
|
|
|
matthewh3
Legendary
Offline
Activity: 1372
Merit: 1003
|
|
February 22, 2014, 03:55:59 PM |
|
Would this server cope well as a merged-mining P2Pool node: http://www.supermicro.com/products/system/1U/5018/SYS-5018A-FTN4.cfm ? It's only Atom-based, but it has eight cores: a core for each of the five wallets, a core for P2Pool itself, a core for the OS, and one spare core left over.
|
|
|
|
bitpop
Legendary
Offline
Activity: 2912
Merit: 1060
|
|
February 22, 2014, 03:58:41 PM Last edit: February 23, 2014, 04:07:21 PM by bitpop |
|
That's a nice CPU, not the old Atom.
|
|
|
|
smoothrunnings
|
|
February 22, 2014, 06:38:09 PM |
|
Sorry, how do you plan to assign which wallet gets which core?
|
|
|
|
bitpop
Legendary
Offline
Activity: 2912
Merit: 1060
|
|
February 22, 2014, 06:54:26 PM |
|
Quote from: smoothrunnings
Sorry, how do you plan to assign which wallet gets which core?

I know in Windows you can, but he probably meant it hypothetically. Try it: CPU affinity.
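For reference, pinning a process to a core ("CPU affinity") is exposed on Linux via `sched_setaffinity` (or the `taskset` command); macOS has no equivalent public API, so this would not apply to the Mac Pro discussed earlier in the thread. A minimal Linux-only sketch, assuming you have the pid of the wallet process you want to pin:

```python
import os

# Linux-only sketch: pin a process to core 0 via its affinity mask.
# Pid 0 means "the current process"; a bitcoind/wallet pid works the same.
if hasattr(os, "sched_setaffinity"):
    original = os.sched_getaffinity(0)   # remember the original core set
    os.sched_setaffinity(0, {0})         # restrict to core 0 only
    assert os.sched_getaffinity(0) == {0}
    os.sched_setaffinity(0, original)    # restore the original mask
```

The shell equivalent would be `taskset -c 0 <command>` to launch a process pinned to core 0.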
|
|
|
|
smoothrunnings
|
|
February 22, 2014, 07:00:32 PM |
|
Quote from: bitpop
I know in Windows you can, but he probably meant it hypothetically. Try it: CPU affinity.

He'd probably be better off installing ESXi, like I have on my IBM System x3400 M3.
|
|
|
|
matthewh3
Legendary
Offline
Activity: 1372
Merit: 1003
|
|
February 22, 2014, 07:34:43 PM Last edit: February 22, 2014, 10:59:47 PM by matthewh3 |
|
|
|
|
|
CartmanSPC
Legendary
Offline
Activity: 1270
Merit: 1000
|
|
February 25, 2014, 08:10:18 AM |
|
Quote from: gyverlb's guide
If you don't have this problem, you should raise the blockmaxsize value instead to get more income:
blockmaxsize=1000000 #default is 500000
This is the maximum value allowed by the Bitcoin protocol. There are lots of unconfirmed transactions with low fees just waiting for P2Pool users to mine them and get the benefits. You should only use lower values if all other means of saving bandwidth don't work without lowering your efficiency (lowering the number of connections from both bitcoind and P2Pool as shown above). Lowering this setting not only lowers your income, it lowers every other P2Pool user's income too.

I was looking into using blockmaxsize=1000000 and started digging into the Bitcoin source code to confirm its benefits. I found that 0.9.0rc1 increases the default -blockmaxsize to 750K and that it is the MAXIMUM size for mined blocks:

/** Default for -blockmaxsize, maximum size for mined blocks **/
static const unsigned int DEFAULT_BLOCK_MAX_SIZE = 750000;

The old version was:

-/** The maximum size for mined blocks */
-static const unsigned int MAX_BLOCK_SIZE_GEN = MAX_BLOCK_SIZE/2;  // i.e. 500K

Unless I'm missing something, doesn't this show that setting blockmaxsize=1000000 has no effect on mined blocks?

Source: https://github.com/bitcoin/bitcoin/commit/ad898b40aaf06c1cc7ac12e953805720fc9217c0
|
|
|
|
gyverlb (OP)
|
|
February 25, 2014, 09:08:31 AM |
|
Quote from: CartmanSPC
I was looking into using blockmaxsize=1000000 and started digging into the Bitcoin source code to confirm its benefits. I found that 0.9.0rc1 increases the default -blockmaxsize to 750K and that it is the MAXIMUM size for mined blocks.

It's the *default* maximum. They can't change the protocol maximum without invalidating some previous blocks, which would break the current blockchain.
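The distinction can be made concrete: DEFAULT_BLOCK_MAX_SIZE is only the value used when -blockmaxsize isn't set, while MAX_BLOCK_SIZE is the consensus cap that any configured value is limited to. A sketch using the constants from the 0.9.0rc1 source quoted above (the helper function itself is hypothetical, not actual Bitcoin code):

```python
MAX_BLOCK_SIZE = 1_000_000          # consensus limit; cannot be raised
DEFAULT_BLOCK_MAX_SIZE = 750_000    # 0.9.0rc1 default for -blockmaxsize

def effective_blockmaxsize(configured=None):
    """Size cap actually used for mined blocks, given -blockmaxsize."""
    if configured is None:
        return DEFAULT_BLOCK_MAX_SIZE       # no option set: use the default
    return min(configured, MAX_BLOCK_SIZE)  # clamp to the protocol maximum
```

So setting blockmaxsize=1000000 still matters under 0.9.0rc1: it raises the mining cap from the 750K default to the full 1MB protocol limit.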
|
|
|
|
CartmanSPC
Legendary
Offline
Activity: 1270
Merit: 1000
|
|
February 25, 2014, 06:10:50 PM |
|
Quote from: gyverlb
It's the *default* maximum. They can't change the protocol maximum without invalidating some previous blocks, which would break the current blockchain.

Ok, thanks. It was late and I read the comment differently. It is slightly ambiguous, especially right before bed.
|
|
|
|
|
roy7
|
|
February 27, 2014, 02:14:00 PM |
|
My patch has no effect when running your own node. Its purpose is to let each miner on a public node get the share difficulty target they would get if they were running their own node. To control your miner's difficulty target, put /DIFF after the username you use to connect to your node. If you use a tiny value, it'll make sure you always find the smallest shares possible (whatever the pool's minimum share difficulty is at the time).
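As a concrete illustration of the /DIFF convention described above (the address and difficulty value are placeholders, and this assumes a public node running the patch):

```ini
# Hypothetical miner login on a patched public p2pool node:
username = 1YourBitcoinAddressHere/500   ; request a share difficulty of ~500
password = x                             ; any value
```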
|
|
|
|
|