Why do you think those peers are malicious? The blockchain is over 150 GB. Those peers are probably syncing the blockchain.
Bitcoin Core does not think those peers are malicious, otherwise it would have disconnected and banned them itself. It has its own ban criteria, which cover much more than bandwidth usage and strange versions (the version isn't even a criterion). These criteria include things like receiving invalid blocks and transactions, receiving a lot of malformed messages, etc. Downloading a lot of blocks is not evidence of malicious behavior.
Are you sure that the version is not a criterion used by bitcoind to automatically ban a peer? What do you think about a node which was banned by my node this morning, shown below? The list of banned peers on my node, with that particular node highlighted. The detail of that node on https://bitnodes.earn.com/, with the version highlighted.
|
|
|
Thanks a lot for your confirmation.
As support for NODE_GETUTXO looks to be one of the reasons why some people who disagreed created a fork, i.e. Bitcoin XT, it is not worth my effort to pursue enabling it.
|
|
|
Thanks a lot for your reply. In the link you provided, it is part of a "version" message, used to initiate a connection. It is enabled by default and means that you can provide some part of the UTXO set if someone (which would be an SPV node) requests it.
I was also assuming that it should be enabled by default, as I cannot find any parameter to set it in bitcoin.conf. However, when I executed "bitcoin-cli getnetworkinfo" I got the following:

{
  "version": 170100,
  "subversion": "/Satoshi:0.17.1/",
  "protocolversion": 70015,
  "localservices": "000000000000040d",
  "localrelay": true,
  "timeoffset": 0,
  "networkactive": true,
  "connections": 66,
  "networks": [
.
.
As we can see, "localservices" shows that the 2nd bit is not set (0x40d instead of 0x40f), which means that my full node does not advertise NODE_GETUTXO. It seems that Bitcoin Core, for whatever reason, does not support it.
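For anyone who wants to check their own node, the "localservices" value can be decoded with plain shell arithmetic. The bit positions below follow the P2P service flags (NODE_NETWORK is bit 0, NODE_GETUTXO bit 1, NODE_BLOOM bit 2, NODE_WITNESS bit 3):

```shell
# Decode the "localservices" bitfield reported by getnetworkinfo
SERVICES=0x040d

# bit N -> prints 1 if bit N of SERVICES is set, 0 otherwise
bit() { echo $(( (SERVICES >> $1) & 1 )); }

bit 0   # NODE_NETWORK  -> 1
bit 1   # NODE_GETUTXO  -> 0 (the 2nd bit is indeed not set)
bit 2   # NODE_BLOOM    -> 1
bit 3   # NODE_WITNESS  -> 1
```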
|
|
|
Perhaps curl is the wrong tool for my purpose, or maybe I just cannot figure out the right syntax for it. I have just found a solution for my purpose using the jq tool (https://github.com/stedolan/jq). I found it to be quite a powerful tool for manipulating JSON data from the command line. For those who are looking for a similar solution, here is mine:

root@deeppurple:~$ bitcoin-cli getpeerinfo | jq -r '.[] | [.addr, .services, (.version|tostring), .subver] | join(",")'
The output is like the following:

[2001:41d0:2:af72::1]:8333,000000000000040d,70015,/Satoshi:0.17.0/
[2a01:4f8:10a:37ee::2]:25424,0000000000000000,70015,/Satoshi:0.15.1/
88.99.167.186:14287,0000000000000000,70015,/Satoshi:0.15.1/
[2a01:4f8:141:4d7::2]:8333,000000000000040d,70015,/Satoshi:0.17.0/
162.209.88.174:8333,000000000000040d,70015,/Satoshi:0.16.3/
[2a03:b0c0:2:d0::4bc:2001]:38160,0000000000000000,70015,/bitnodes.earn.com:0.1/
188.166.69.73:45169,0000000000000000,70015,/bitnodes.earn.com:0.1/
.
.
|
|
|
I would like to get only some parts of the "getpeerinfo" output, particularly the "addr", "services", "version" and "subver" of the peers, and put them into a list in a text file. So far, I could not figure out the right JSON-RPC syntax to query that using curl. I can get the entire output of "getpeerinfo" using a curl command like below:

curl --user <rpcuser>:<rpcpassword> --data-binary '{"jsonrpc": "1.0", "id":"getpeerinfo", "method": "getpeerinfo", "params": []}' -H 'content-type: text/plain;' http://127.0.0.1:8332/
I assume that I have to put the right syntax in the "params" part, but I cannot find any information so far. I got errors when I tried, for instance, the following:

curl --user <rpcuser>:<rpcpassword> --data-binary '{"jsonrpc": "1.0", "id":"getpeerinfo", "method": "getpeerinfo", "params": [{"addr"}]}' -H 'content-type: text/plain;' http://127.0.0.1:8332/
{"result":null,"error":{"code":-32700,"message":"Parse error"},"id":null}
Could anyone give me hints please? Thanks a lot in advance for your help.
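For what it's worth, the -32700 error is bitcoind rejecting the request body as malformed JSON before it even looks at the method: `{"addr"}` is an object key with no value, so it is not valid JSON at all, and in any case getpeerinfo takes no parameters, so the field selection has to happen on the response side (e.g. with jq). A small illustration of the two payloads, using python3 only as a convenient JSON validator:

```shell
# {"addr"} has a key but no value -- not valid JSON, which is exactly
# what the -32700 "Parse error" is complaining about:
printf '%s' '{"params": [{"addr"}]}' | python3 -m json.tool >/dev/null 2>&1 \
    || echo "rejected: malformed JSON"

# A well-formed request leaves params as an empty array; filtering is then
# done on the response, not in the request:
printf '%s' '{"jsonrpc": "1.0", "id": "getpeerinfo", "method": "getpeerinfo", "params": []}' \
    | python3 -m json.tool >/dev/null && echo "accepted: valid JSON"
```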
|
|
|
You might want to consider using the `-maxuploadtarget` option. From the wiki:
Thanks for your suggestion, but maxuploadtarget only helps people with a limited traffic quota. It helps to keep the traffic below the quota set by the ISP. It will not help us prevent our network interface from being saturated on the uplink by peers with much higher downlink bandwidth capabilities, e.g. at the Gbps level.
How about the 80:20 rule (sometimes called the Pareto rule)? It's a good option for managing bandwidth.
Obviously 80% goes to your full node, and you need to measure your maximum/average bandwidth first.
I have already tried to implement that 80:20 rule before with a tc HTB qdisc: 80% bitcoin traffic and 20% other traffic. But since I have only 100 Mbps on the uplink, that leaves me with only 80 Mbps for bitcoin traffic. As soon as a peer with much higher downlink capability occupies the whole bandwidth, my full node cannot serve the requests from other peers. I could indeed add more classes to the HTB for each peer, but my traffic shaper would become more complex, as peers can come and go very dynamically. So I think I will stick with a simple hashlimit in my iptables to limit the uplink traffic to 2 Mbps per IP:port pair. From what I observed, a peer sometimes has multiple connections from the same IP address to my full node.

As of my writing, there is a peer which has been downloading blocks from my full node for more than 3 hours. Its status after it had been downloading blocks for about 30 minutes is shown below (that peer is the first one on the list). About 2.5 hours later, it was still downloading, up to about 2.8 GB, as shown below. As it seems to provide full service (services = 0x40d) and looks like a legitimate /Satoshi:0.17.1/, I let it keep downloading blocks; it is harmless to other peers anyway, as it can only download at below 2 Mbps (thanks to the iptables hashlimit), as shown in part of the iftop output below.
=> 216.21.162.208:35166                            1.98Mb  1.95Mb  1.85Mb
=> 150.109.74.119:39375                            20.5Kb  4.09Kb  2.59Kb
=> 84.26.108.54:8333                                   0b  3.71Kb  3.22Kb
=> 88.25.100.45:63935                              14.1Kb  3.17Kb  1.67Kb
=> 185.25.224.202:8333                             8.45Kb  3.16Kb  2.79Kb
=> 88.99.186.25:50854                              11.0Kb  2.62Kb  2.11Kb
=> [2a01:4f8:141:4d7::2]:8333                      2.16Kb  2.45Kb  1.96Kb
=> [2a03:b0c0:2:d0::4bc:2001]:38160                10.7Kb  2.34Kb  1.81Kb
=> [2a01:4f8:10a:37ee::2]:25424                    10.7Kb  2.30Kb  1.71Kb
=> 158.109.79.23:55301                             10.4Kb  2.23Kb  1.66Kb
=> 188.166.69.73:45169                             10.4Kb  2.23Kb  1.72Kb
=> 129.13.88.177:59646                             10.4Kb  2.23Kb  1.66Kb
=> [2a04:3544:1000:1510:b08f:6fff:fe1b:3007]:55612 10.7Kb  2.15Kb  1.76Kb
=> [2a00:1398:4:2a00::a5]:42533                    10.7Kb  2.15Kb  1.73Kb
=> [2a00:1398:4:2a01::78]:35724                    10.7Kb  2.15Kb  1.71Kb
=> [2a00:1398:4:2a01::77]:44310                    10.7Kb  2.15Kb  1.63Kb
=> 150.109.74.119:45717                            10.4Kb  2.09Kb  2.10Kb
=> 23.92.36.2:56913                                10.4Kb  2.09Kb  1.67Kb
=> 94.237.44.67:48927                              10.4Kb  2.09Kb  1.66Kb
=> 129.13.252.36:55358                             10.4Kb  2.09Kb  1.66Kb
=> 88.99.167.186:14287                             10.4Kb  2.09Kb  1.59Kb
=> 162.218.65.27:53265                             10.4Kb  2.09Kb  1.59Kb
=> 129.13.88.175:58702                             10.4Kb  2.09Kb  1.59Kb
=> 188.65.213.21:49201                             10.4Kb  2.09Kb  1.59Kb
=> 162.218.65.53:13354                             10.4Kb  2.09Kb  1.59Kb
=> 162.218.65.236:10183                            10.4Kb  2.09Kb  1.59Kb
=> 183.205.191.79:22241                            6.82Kb  1.84Kb  1.18Kb
=> [2001:41d0:2:af72::1]:8333                      1.50Kb  1.75Kb  1.21Kb
=> 185.53.156.255:50501                            6.16Kb  1.58Kb  1.66Kb
=> 13.209.125.83:8333                                208b  1.36Kb  1.39Kb
=> [2a02:8388:2282:f900:e29d:31ff:fe26:2628]:50256 2.59Kb  1.29Kb  2.01Kb
=> 162.209.88.174:8333                             1.82Kb  1.26Kb  1.63Kb
=> 47.6.34.160:8333                                1.13Kb  1.24Kb  1.33Kb
=> 94.237.44.67:35799                              3.49Kb  1.02Kb    524b
=> 158.109.79.23:61545                                 0b  0.98Kb    500b
=> 80.110.127.178:9282                             1.05Kb    943b    854b
=> 84.73.200.108:8333                                804b    526b  1.68Kb
=> 185.220.101.66:39843                            2.23Kb    520b  2.15Kb
=> 59.110.8.18:34358                                 864b    173b     86b
=> 59.110.8.18:33241                                   0b    173b     86b
=> 59.110.8.18:58310                                   0b      0b     86b
=> 129.13.88.178:43591                                 0b      0b      0b
=> 129.13.72.197:43591                                 0b      0b      0b
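For anyone wanting to reproduce this setup, the per-peer limit can be done with rules along these lines. This is a sketch of the kind of rule described above, not my exact one: the interface name is an assumption, it must run as root, and the burst behaviour will depend on your iptables version's hashlimit defaults.

```shell
# Accept outgoing P2P traffic up to ~2 Mbps (250 kB/s) per destination
# IP:port pair; anything above the rate falls through to the DROP rule.
iptables -A OUTPUT -o eth0 -p tcp --sport 8333 \
    -m hashlimit --hashlimit-name btc-uplink \
    --hashlimit-mode dstip,dstport \
    --hashlimit-upto 250kb/s -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 8333 -j DROP
```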
|
|
|
Why do you think those peers are malicious? The blockchain is over 150 GB. Those peers are probably syncing the blockchain.
Well... I am not sure myself; that is why I asked the question. Unfortunately this happens very rarely: not even once a month do I get a peer which continuously downloads blocks in the order of 10 GB within an hour. So I cannot draw any good conclusion. What I notice is that normal peers do not behave like that. And I don't usually manually ban peers with a version above 0.15.x and services = 0x40d.
|
|
|
Last week, a peer got banned by my script because it kept downloading blocks, up to 6 GB within an hour. The ban score of a peer increases by 1 each time it continuously downloads up to 1 GB within a 10-minute window. My script bans a peer whose ban score is more than 5, or which continuously downloads up to 6 GB within an hour.
In case you are wondering how this could happen: last week I set a more relaxed hashlimit rule in my iptables to observe the behaviour of my full node's peers, because I saw a lot more packets being dropped by my iptables than accepted. So I set the outgoing traffic per peer to 16 Mbps (--hashlimit-upto 2000kb/s). At the moment, I have hashlimit-upto set to 250kb/s (2 Mbps), so the maximum traffic for each peer during an hour of continuous block downloading will only be about 900 MB. I keep playing with the hashlimit rule, as I am not sure what the best setting is for both protecting my full node and satisfying legitimate peers. With the current setting, legitimate peers which have a 100 Mbps link (like my bitcoin-qt) will have to wait a lot longer to download the blocks they need.
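The 900 MB figure checks out: at 250 kB/s sustained, an hour of continuous downloading comes to

```shell
# 250 kB/s * 3600 s = 900,000 kB, i.e. about 900 MB per peer per hour
echo "$(( 250 * 3600 / 1000 )) MB"
```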
|
|
|
It happened once that a peer choked the eth0 interface of my full node, because I had forgotten to add the hashlimit rule to my iptables to limit the outgoing traffic per peer to 1 Mbps. That peer kept continuously downloading blocks from my full node, more than 10 GB within an hour. That made even an SSH session to my full node very slow. Everything went fine after I manually banned that peer.
Last week, a peer got banned by my script because it kept downloading blocks, up to 6 GB within an hour. The ban score of a peer increases by 1 each time it continuously downloads up to 1 GB within a 10-minute window. My script bans a peer whose ban score is more than 5, or which continuously downloads up to 6 GB within an hour.
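The core of that logic could be sketched roughly like this. This is a hypothetical reconstruction, not my actual script: it assumes bitcoin-cli and jq are available, and note that getpeerinfo's "bytessent" is cumulative since the connection opened, so a real script has to compare snapshots taken at the start and end of each window rather than the raw value.

```shell
#!/bin/sh
# Hourly ban threshold: 6 GB sent to a single peer within the window
THRESHOLD=$(( 6 * 1024 * 1024 * 1024 ))

# exceeds_threshold BYTES -> succeeds if BYTES is over the hourly limit
exceeds_threshold() {
    [ "$1" -gt "$THRESHOLD" ]
}

# The cron-driven part would look something like (names are assumptions):
#   bitcoin-cli getpeerinfo \
#     | jq -r '.[] | "\(.addr) \(.bytessent)"' \
#     | while read -r addr sent; do
#         if exceeds_threshold "$sent"; then
#           bitcoin-cli setban "${addr%:*}" add 86400   # ban for 24 hours
#         fi
#       done
```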
Perhaps I need to tighten the ban criteria, but I am afraid that I will ban legitimate peers. However, when I observe the behaviour of the bitcoin-qt on my PC, for which my full node is the preferred peer, it only downloads a few hundred MB within an hour, even after I have not launched it for a week. So it does not keep downloading all the blocks it needs from my full node the whole time, as it also downloads some blocks from its other outgoing peers.
Do any of you know why peers keep continuously downloading blocks like that? Are they legitimate peers?
What are the actual criteria for illegitimate peers applied in the bitcoin software, apart from obvious things like the strange versions they advertise?
Thanks a lot in advance.
|
|
|
Existing network nodes do not ask for old blocks, so your node will be fine. Transaction validation does not require data from old blocks either. So, if your UTXO database is fine, your node is also fine and robust. The only thing you cannot do is cloning, i.e. serving the full historical chain to a freshly syncing node.
Thanks for your confirmation. Perhaps I am just being too paranoid about maintaining the integrity of my full nodes. That is probably because I have recently kept updating my criteria for banning misbehaving peers, on top of the ones automatically banned by bitcoind. I think I will post another topic about this, as I am not sure about something.
|
|
|
It's my understanding that the blk*.dat files are append-only, so everything but the current file is 100% immutable. (Until I started pruning on my oldest Bitcoin node, my low-numbered blk*.dat files had a last-modified timestamp from 2011.) If this is correct, there's no need to repeatedly verify the entire blockchain with exhaustive checks, since the data in the large majority of the dat files will never be updated by the client. Something like this should suffice:
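A rough sketch of the idea, demonstrated here on throwaway stand-in files (in practice you would run it against the real blocks directory, e.g. ~/.bitcoin/blocks):

```shell
# Stand-ins for blk*.dat files, just for the demonstration
mkdir -p /tmp/blkdemo && cd /tmp/blkdemo
printf 'immutable data' > blk00000.dat
printf 'still growing'  > blk00001.dat   # the newest file is still appended to

# Checksum every blk*.dat except the highest-numbered one, keeping a baseline
ls blk*.dat | sort | head -n -1 | xargs md5sum > baseline.md5

# Later, verify the supposedly-immutable files against the baseline
md5sum -c baseline.md5   # prints "blk00000.dat: OK" when nothing has changed
```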
Indeed. It is only the latest blk*.dat file that is being updated. The rest of the blk*.dat files are left untouched; they are only read when peers request the blocks located in those older files.

My question was: what happens when a peer requests old blocks that are located in a corrupted blk*.dat file (e.g. due to bitcoind crashing or not being shut down properly), so that bitcoind sends corrupted blocks as well? From the Bitcoin network perspective, I think those peers will just reject the corrupted blocks (perhaps ban the peer which sends them) and request the same blocks from other peers. So there is no issue at the bitcoin network level. But the peer which has corrupted blk*.dat files will eventually be isolated, as a lot of peers ban it.

So I was wondering if there could be a mechanism to notify the peer which sends corrupted blocks, so that it can repair its blk*.dat files based on the data from its outbound peers. But the main problem is that nobody can know for sure whether the corrupted block is due to the peer having a corrupted blk*.dat file or the peer intentionally sending a garbage block. So I think it might be difficult to implement such a mechanism. But perhaps the top Bitcoin programmers have some ideas for dealing with this kind of issue.
|
|
|
"Next time I will ask you to write a small program which intercepts network traffic from your node to 8333 ports and puts some garbage in it."
Why do you think I would do that? Even if I were a top Bitcoin programmer, that would defy my own personal principles. I have already spent a lot of my own personal effort and (some) money to support the Bitcoin network since version 0.9.x, as I like the idea and the objective. I hate the change of its name to "Bitcoin Core" though, just because some stupid people tried to disrupt it.
"Take a binary file editor, open one of the oldest files (for example blk00005.dat) and change some data in it. I am quite sure that the bitcoind on your node will not discover any problems."
I believe this would only happen with crappy in-house developed software maintained by only 1 or 2 programmers. There are a lot of companies involved, as this has become an industry with a huge market valuation, supported by thousands of developers around the globe. So I really doubt that we (including me) are so stupid as to trust software that has no mechanism to maintain the integrity of its database. What I asked in this topic is how to do a manual check which is better and faster than using the "bitcoin-cli verifychain" command. Maybe there is no other way to do a manual check. But I believe there must be a mechanism within the bitcoind (and bitcoin-qt) software to make sure the integrity of the blockchain data is properly maintained. I asked about that mechanism in order to understand the impact on my full node. A few years back, when the blockchain data on my full node was corrupted, I had to reindex, which caused it to re-download all blocks from blk00000.dat onward. But you pointed out something that I have to double-check myself. I think I will intentionally corrupt the 2nd-to-last blk*.dat file and run my full node on the Bitcoin testnet to see what happens.
|
|
|
Out of curiosity, I just executed:

anto@deeppurple:~$ bitcoin-cli verifychain 1 10000 true
anto@deeppurple:~$
And the result:

2019-02-20T22:24:01Z Verifying last 10000 blocks at level 1
2019-02-20T22:24:01Z [0%]...[10%]...[20%]...[30%]...[40%]...[50%]...[60%]...[70%]...[80%]...[90%]...[DONE].
2019-02-20T22:32:38Z No coin database inconsistencies in last 10000 blocks (0 transactions)
It took 8 minutes and 37 seconds to verify just the validity of the last 10000 blocks, so it will take about 8 hours to verify all 565947 blocks. I think that is doable. So the next time I restart my bitcoind, I will use the following command to check the validity of all block files:

bitcoind -checkblocks=0 -checklevel=1
I think that would be enough, but it means I assume that everything must be valid as long as all block files are valid. I really don't like that kind of assumption though. I am wondering how many people (who run full nodes, obviously) check all block files (checkblocks=0) with checklevel=4. Is there anybody who does that regularly? There must be a better way than that, as otherwise it will take a lot longer next year, when we reach over 1 million blocks. For instance, some kind of process to notify a peer that a block it sent out is invalid, so that the peer can repair its block file based on the data from its outbound peers.
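For what it's worth, the roughly 8-hour estimate scales linearly from the measured run:

```shell
# 10,000 blocks took 8 min 37 s = 517 s at level 1; extrapolating linearly
# to the full chain height at the time (565,947 blocks):
echo "$(( 517 * 565947 / 10000 )) seconds"        # 29259 seconds
echo "$(( 517 * 565947 / 10000 / 3600 )) hours"   # about 8 hours
```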
|
|
|
Your peers will disconnect/ban your misbehaving node and will try to get valid data from other sources.
This is exactly what I want to avoid, hence my questions. If something went wrong outside my control, causing corruption of the blk*.dat and/or rev*.dat files, e.g. a power outage or glitch, my full node should not be categorised as a "misbehaving node". There should be a better mechanism to keep such full nodes from being banned. In my case, I want to be able to verify the integrity of the blockchain files on my full node.
|
|
|
a) Bitcoin Core does not write blocks into the blk*.dat files in height order. b) Bitcoin Core keeps orphan blocks in the blk*.dat files.
Thanks a lot. That explains why my pruning full node and my main full node have different md5sums from blk01513.dat and rev01513.dat onward, as starting from those files they are running independently.

How about my other question? I am sorry for a basic question, as I didn't realise this until now. What will happen if my main full node (which has all blocks from blk00000.dat) fails to send valid data to its peers due to corrupted (for whatever reason) blk*.dat and/or rev*.dat files? What kind of mechanism applies in bitcoind? I am running Bitcoin Core 0.17.1, by the way. And does it make sense to run, for instance, "bitcoin-cli verifychain 4 563927" once in a while to make sure of the integrity of my full node? On my VPS that might take about 7 hours, as it took about 4 seconds to verify 100 blocks.

Edited: Sorry... wrong calculation on "4 seconds to verify 100 blocks". It took about 1 minute 14 seconds to verify 100 blocks. So verifying 563927 blocks will take about 5 days on my VPS.
|
|
|
I apologise if this has been asked and answered before, but I failed to find it on this forum. Every time I find a new VPS provider offering cheaper and bigger resources, I move my full node to the new VPS. I make sure of the integrity of the blk*.dat and rev*.dat files by comparing their md5sums on the source and target VPS. I recently decided not to renew the contract of one of my VPSes. Instead of letting it sit idle until its contract expiry date, I configured another full node with pruning, as it has only about 100 GB of storage space. When I compared the pruning full node with the main one, the most recent blk*.dat and rev*.dat files had different md5sums, as below.

.
.
49d09f29e04dd8c91fea980a1978ee9c blk01510.dat   49d09f29e04dd8c91fea980a1978ee9c blk01510.dat
6bdeff13773a02b73b38a859b5f8a461 blk01511.dat   6bdeff13773a02b73b38a859b5f8a461 blk01511.dat
e419727a0b5a370256f623ccbac64e13 blk01512.dat   e419727a0b5a370256f623ccbac64e13 blk01512.dat
802fa5848a9cb9fa4adc0b5e2b584207 blk01513.dat | 3edb80b29dbb6b80c9fbe477d26b5edd blk01513.dat
bb8f563889804ebce134f371cdd61213 blk01514.dat | eb7642fcc1de5ff0825a23e23d991e71 blk01514.dat
eeedb8d91fe9fa7cc19021d142744b0f blk01515.dat | 08786174285354d992c730c14fb50edb blk01515.dat
.
.
2e3c8f5cf0bfa1e9147390d238d71001 blk01533.dat | 07e3d7f105cf504830afae31be5c9574 blk01533.dat
caf153aaf6433814b6579f0a531ae6b4 blk01534.dat | 5f3021e57cfd4a97204bf25f38ef56a8 blk01534.dat
9a7d8d931540f64a06c7fce73db791cf blk01535.dat | 49ae0a801b7fd36ff22f578e521141a8 blk01535.dat
.
.
3a70613f860c242201f4f84d27524992 rev01510.dat   3a70613f860c242201f4f84d27524992 rev01510.dat
7dc17fc34d5ffe5dca1f4fccf0591641 rev01511.dat   7dc17fc34d5ffe5dca1f4fccf0591641 rev01511.dat
ac77bbefd85305fce50dfcc63e41cd1d rev01512.dat   ac77bbefd85305fce50dfcc63e41cd1d rev01512.dat
95c0f5bc1b19ceee3ee7ba727e580fd3 rev01513.dat | 0c861743a64dfa3b50002aee40e391b9 rev01513.dat
9d6d81f0429fc33f36ed7fb49cdfe1bd rev01514.dat | 74caa8ce3429ccb55e4fb50de0a23630 rev01514.dat
375cf8c1ffef33d6899981d09087205b rev01515.dat | 43c2a326b5c5c66da8172f924ba4357d rev01515.dat
.
.
207ce7cc6b82f9dd0e514edfbfaa3332 rev01533.dat | dd0dc8175770ea616dce3d7c4fb292fc rev01533.dat
3a22716cecc8759d96545c8be7274c4a rev01534.dat | 167bce7ead5ee42fcdb7629fb86edb2e rev01534.dat
befed144db33cd208be59bdedb360098 rev01535.dat | 63fa05c946ad883797ed6053b3a87da5 rev01535.dat
That makes me wonder about the integrity of my blockchain files. As far as I know, the only command to verify the blockchain database is "bitcoin-cli verifychain", as below:

anto@deeppurple:~$ bitcoin-cli verifychain 4 100 true
anto@deeppurple:~$
And from debug.log:

.
.
2019-02-20T16:46:18Z Verifying last 100 blocks at level 4
2019-02-20T16:46:18Z [0%]...[10%]...[20%]...[30%]...[40%]...[50%]...[DONE].
2019-02-20T16:47:32Z No coin database inconsistencies in last 100 blocks (236944 transactions)
.
.
But I don't think that will tell me which blk*.dat and rev*.dat files are corrupted (if any). And it will take quite a long time to verify the whole set of blockchain files. Is there any better and faster way to manually verify those blk*.dat and rev*.dat files? What will happen if my full node sends corrupted data to peers that ask for very old blocks, due to corrupted blk*.dat and rev*.dat files? Will my full node get notified, so that it can automatically repair the relevant corrupted blk*.dat and rev*.dat files? Thanks a lot in advance for your help.
|
|
|
All of the released stuff (source tarballs, binaries, etc.) can be found here: https://bitcoincore.org/bin/ and https://bitcoin.org/bin/. You should use the source tarballs that are distributed from those places, as those are the ones that are deterministically built and PGP signed by the release maintainer.

Thanks a lot. I just realised that there are source tarballs on https://bitcoincore.org/bin/ and https://bitcoin.org/bin/. I took https://bitcoin.org/bin/bitcoin-core-0.15.1/bitcoin-0.15.1.tar.gz and have just re-compiled bitcoind and bitcoin-qt using that source tarball. However, only bitcoind compiled successfully; I have already installed it on my 2 full nodes. I got the following error when I compiled bitcoin-qt:

dh_install --list-missing
Failed to copy 'share/pixmaps/bitcoin16.png': No such file or directory at /usr/share/dh-exec/dh-exec-install-rename line 32, <> line 2.
dh_install: problem reading debian/bitcoin-qt.install:
debian/rules:27: recipe for target 'override_dh_install' failed
make[1]: *** [override_dh_install] Error 127

There is no share/pixmaps/bitcoin16.png in https://bitcoin.org/bin/bitcoin-core-0.15.1/bitcoin-0.15.1.tar.gz. When I diff-ed the two tarballs, there are actually a lot of differences: 51 files exist only in the bitcoin.org tarball, and 113 files exist only in the github tarball. So for the moment I will keep using the bitcoin-qt built from the github tarball, and I will closely watch the bitcoind running on my 2 full nodes. If it shows strange behaviour (I am not sure yet) compared to the build from the github tarball, I will switch it back.
|
|
|
The download link on GitHub is the uncompiled source code, and the downloads on the other sites are the compiled program (binary files). So you don't have to compile the program from source yourself.
Thanks a lot. However, I want to compile it myself for my PC and 2 nodes, which are running the Devuan distro. I was actually wondering myself, because there is no source tarball on https://bitcoin.org/en/download; there is a link there that brings me to the github repo. So I guessed the only source tarball is on github.
|
|
|
|