almightyruler
Legendary
Offline
Activity: 2268
Merit: 1092
June 25, 2015, 09:08:26 AM
Quote:
So even though you've been sent the new block 1679 times, it's loading the same information about the previous transaction from disk each time. I'm guessing that it's this previous transaction that is corrupted, so it doesn't matter how many times you are re-sent the new transaction: it's never going to be correct.

That's something I hadn't considered. So a block which is verified as valid gets corrupted as it's being committed to storage... and from that point, verification of the following block, which arrives in perfect condition from peers, will always fail. I no longer have the corrupt files, as I couldn't wait around indefinitely. It would be handy if there were an RPC command to forget every block past a specified height (or hardened checkpoint), or even just a way to delete one specific block.
almightyruler
Legendary
Offline
Activity: 2268
Merit: 1092
June 25, 2015, 02:19:41 PM
Quote:
So even though you've been sent the new block 1679 times, it's loading the same information about the previous transaction from disk each time. I'm guessing that it's this previous transaction that is corrupted, so it doesn't matter how many times you are re-sent the new transaction: it's never going to be correct.

That's something I hadn't considered. So a block which is verified as valid gets corrupted as it's being committed to storage... and from that point, verification of the following block, which arrives in perfect condition from peers, will always fail. One thing that just popped into my head: wouldn't a -rescan reject the previous corrupt block? That's the first thing I tried after a couple of restarts.
cloverme
Legendary
Offline
Activity: 1512
Merit: 1057
SpacePirate.io
June 25, 2015, 02:20:15 PM
Quote:
Clam friends! ShapeShift has re-added Clams to our instant altcoin exchange! Sorry for the downtime. Instantly exchange Clams with 35+ altcoins today with ShapeShift.io

Shows as "Unknown exchange pair" this morning... run out of clam?
chilly2k
Legendary
Offline
Activity: 1007
Merit: 1000
June 25, 2015, 05:48:30 PM
Quote:
So even though you've been sent the new block 1679 times, it's loading the same information about the previous transaction from disk each time. I'm guessing that it's this previous transaction that is corrupted, so it doesn't matter how many times you are re-sent the new transaction: it's never going to be correct. That's something I hadn't considered. So a block which is verified as valid is corrupted as it's being committed to storage... from that point, verification of the following block, which arrives in perfect condition from peers, will always fail. I no longer have the corrupt files as I couldn't wait around indefinitely. It would be handy if there was an RPC command to forget every block past a specified height (or hardened checkpoint). Or even just be able to delete one specific block.

Along these lines, I seem to be getting a lot of clients connecting to me that are not yet synced. They end up killing my bandwidth, and I have to stop/start the client. Last time I checked, 8 out of 40 connections were not even close to the current block height. I'm going to start writing down the connections and initial block heights before I recycle, and see if there is a pattern. Otherwise, is there a setting to limit the number of new connections that require syncing?
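The plan of writing down each connection's reported block height can be scripted against the client's getpeerinfo RPC. This is only a sketch: the field names (`addr`, `startingheight`) are assumed from the bitcoind lineage, and the sample JSON below stands in for live `clamd getpeerinfo` output.

```shell
# Sketch: extract each peer's address and reported start height so stuck
# syncers are easy to spot. The sample data stands in for `clamd getpeerinfo`.
peers='[{"addr":"203.0.113.5:31174","startingheight":4100},{"addr":"198.51.100.7:31174","startingheight":527100}]'
echo "$peers" | tr ',' '\n' | grep -oE '"addr":"[^"]*"|"startingheight":[0-9]+'
```

Appending the output to a dated file on each run would give the record needed to look for a pattern in which peers keep re-syncing.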
dooglus
Legendary
Offline
Activity: 2940
Merit: 1333
June 25, 2015, 06:42:40 PM (Last edit: June 25, 2015, 07:00:49 PM by dooglus)
Quote:
One thing that just popped into my head. Wouldn't a -rescan reject the previous corrupt block? That's the first thing I tried after a couple of restarts.
If the blk0001.dat file was OK but the txleveldb/ files contained some corruption, then -rescan wouldn't see the corruption. As I understand it, -rescan recreates the txleveldb/ files from the blk0001.dat file.

I just tried running the Windows binary of v1.4.12 but didn't have much luck. I put the bootstrap.dat file in place and it started syncing, but then this happens after it has loaded a few blocks:

Is that something that happens a lot on Windows still? It's part of why I stopped using Windows in the first place. I guess I'll try syncing from bootstrap.dat on Linux a few times and see how well that works.

Edit: I'm running 4 clamd processes, all syncing from bootstrap.dat into different datadirs. None of them have "encountered a problem" yet:

==> /home/clam/.clam.bs3/debug.log <==
SetBestChain: new best=9109e501431fb62e64dfce26112a22703e5feecb56501232bd4a5e8bf09c63a2 height=12200 trust=14866939091 blocktrust=1130602 date=05/28/14 02:23:26

==> /home/clam/.clam.bs1/debug.log <==
SetBestChain: new best=10a98d283d1952cf2ef5cf073faf1b9ec955f785511634fc202b5af0e9ed4bff height=19200 trust=22450387823 blocktrust=1048577 date=06/12/14 02:06:00

==> /home/clam/.clam.bs2/debug.log <==
SetBestChain: new best=2c47724a65e02a69a103f0bbf5f7e15b17e7915b92052e529b554226972edf4f height=14800 trust=17681301681 blocktrust=1157520 date=06/02/14 15:45:53

==> /home/clam/.clam.bs4/debug.log <==
SetBestChain: new best=28c04885cd701375d847853b814decb4bf1ef346f8b996703834f1cd0c6f080d height=10400 trust=12685082747 blocktrust=1231051 date=05/25/14 07:15:07

I'm surprised to see that the processes are CPU bound:

  PID USER  PR  NI   VIRT   RES   SHR S %CPU %MEM   TIME+ COMMAND
16265 clam  20   0 817072 29660 11732 S 95.0  0.4 7:41.56 clamd
16295 clam  20   0 823492 32428 11340 S 93.4  0.4 6:20.31 clamd
16187 clam  20   0 815872 34408 11760 S 88.4  0.4 9:49.87 clamd
16226 clam  20   0 837548 44116 26332 S 80.1  0.5 8:37.20 clamd

I would expect the 4 of them all reading from bootstrap.dat and writing to database files to hit an HDD bottleneck, but no.
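The four-instance setup described above can be scripted along these lines. This is a sketch with placeholder paths and dummy data; the actual clamd launch is left commented out since it needs a built binary.

```shell
# Sketch: prepare N independent datadirs that each import their own copy of
# bootstrap.dat. Paths are placeholders; the dummy file stands in for the
# real multi-GB bootstrap.
workdir=$(mktemp -d)
printf 'dummy-block-data' > "$workdir/bootstrap.dat"
for i in 1 2 3 4; do
  d="$workdir/.clam.bs$i"
  mkdir -p "$d"
  cp "$workdir/bootstrap.dat" "$d/"
  # clamd -datadir="$d" -daemon   # launch one syncing instance per datadir
done
ls "$workdir"/.clam.bs*/bootstrap.dat
```

Once the daemons are running, `tail -f ~/.clam.bs*/debug.log` produces the interleaved `==> ... <==` progress output shown in the post.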
Just-Dice | Play or Invest | 1% House Edge
dooglus
Legendary
Offline
Activity: 2940
Merit: 1333
June 25, 2015, 10:45:11 PM
Quote:
Clam friends! ShapeShift has re-added Clams to our instant altcoin exchange! Sorry for the downtime. Instantly exchange Clams with 35+ altcoins today with ShapeShift.io

Quote:
Shows as "Unknown exchange pair" this morning... run out of clam?

They were having problems recently but say they're fixed now: https://bitcointalk.org/index.php?topic=717973.msg11711979#msg11711979

Quote:
Along these lines, I seem to be getting a lot of clients connecting to me, that are not up to sync. They end up killing my bandwidth. I have to stop/start the client. Last time I checked it was 8 out of 40 connections were not even close to the current block height. I'm going to start writing down the connections and initial block height before I recycle, and see if there is a pattern. Otherwise is there a setting to limit the number of new connections that require syncing?

I don't think there's anything to limit that, other than limiting your total number of connections:

  -maxconnections=<n>   Maintain at most <n> connections to peers (default: 125)
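For reference, the cap can also live in the config file rather than on the command line; a minimal fragment, assuming CLAM follows the bitcoind-style option name:

```shell
# clam.conf fragment (option name assumed from the bitcoind lineage):
# maintain at most 40 peer connections instead of the default 125
maxconnections=40
```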
SuperClam (OP)
June 26, 2015, 04:26:10 AM
CLAM v1.4.13 Test Build Released!

Release Notes

Linux 32-bit:   43c0c0e47efb0c459a2b9c8fc28dc621e03be9be7ca0048f409bb1e55aa2fc9a
Linux 64-bit:   0ff672c900c2e337a24049f625a3ac1c3f8333bf840c05f3b8b1aff856da9731
Windows 32-bit: 7915eefb92508df6faae9fb257821103ff8df5ed3dc92a524a2ea89800a2f12a
Windows 64-bit: 55504d541ac6a5d46437810bde189ddfae7683741be16cd89f4021854e9f55f3
OSX:            f910c3014f8b44cd8219ee4aeff774f32f92e32bd7f0c59a40ef11fc8d1b712f
dooglus
Legendary
Offline
Activity: 2940
Merit: 1333
June 26, 2015, 05:52:42 AM
Quote:
I'm running 4 clamd processes, all syncing from bootstrap.dat into different datadirs. None of them have "encountered a problem" yet. [...] I would expect the 4 of them all reading from bootstrap.dat and writing to database files to hit an hdd bottleneck, but no.

11 hours later and they're almost synced:

==> /home/clam/.clam.bs3/debug.log <==
SetBestChain: new best=4c0d2aba8bee376948a003197aaeeb75f9d0ae2ec9a9470962ab09ffbc0c1ce9 height=518500 trust=46209241304519735506 blocktrust=214081308084412 date=06/19/15 16:16:16

==> /home/clam/.clam.bs4/debug.log <==
SetBestChain: new best=77ea9911d97d948ecb2db594cd4d25057fe1d172da0463f17e2dc6bbed33ba21 height=520900 trust=46720474082846055251 blocktrust=187154986340951 date=06/21/15 08:46:08

==> /home/clam/.clam.bs1/debug.log <==
SetBestChain: new best=0aa6d484cb63573b9201b62cb4a4f1aa1e0c9e51b39951c18a55f958f530caec height=520700 trust=46680370544722329270 blocktrust=191413849329254 date=06/21/15 05:30:08

==> /home/clam/.clam.bs2/debug.log <==
SetBestChain: new best=1a93ba543b1826bbfafb7f38937638ad14d1eb647d87630186e7d0c20f57c76b height=511700 trust=44734730882778317181 blocktrust=206964479678105 date=06/14/15 21:59:44

  PID USER  PR  NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
17289 clam  20   0 1201112 255176 17416 S 91.8  3.2 433:21.03 clamd
17266 clam  20   0 1202744 258492 14580 S 85.9  3.2 434:16.70 clamd
17272 clam  20   0 1200672 246056  6472 S 84.5  3.1 434:00.44 clamd
17269 clam  20   0 1197652 252428 16520 S 83.6  3.1 434:49.66 clamd
almightyruler
Legendary
Offline
Activity: 2268
Merit: 1092
June 26, 2015, 06:36:18 AM
Quote:
Along these lines, I seem to be getting a lot of clients connecting to me, that are not up to sync. They end up killing my bandwidth. I have to stop/start the client. Last time I checked it was 8 out of 40 connections were not even close to the current block height. I'm going to start writing down the connections and initial block height before I recycle, and see if there is a pattern. Otherwise is there a setting to limit the number of new connections that require syncing?

I'm having problems with large syncs hogging my outbound bandwidth too, and now I see that my IP is listed as an addnode in the OP of this thread. It should be possible to add code to drop the connection if the connecting peer reports a height well below yours, but that's a rather rude thing to do in a peer-to-peer network; I doubt the official client would ever support anything like it. Perhaps a reasonable compromise would be an option to send at most X (older only?) blocks to a new peer, or some sort of outbound bandwidth rate limiting.
almightyruler
Legendary
Offline
Activity: 2268
Merit: 1092
June 26, 2015, 06:43:19 AM
Quote:
One thing that just popped into my head. Wouldn't a -rescan reject the previous corrupt block? That's the first thing I tried after a couple of restarts.

Quote:
If the blk0001.dat file was OK, but the txleveldb/ files contained some corruption then -rescan wouldn't see the corruption. As I understand it, -rescan recreates the txleveldb/ files from the blk0001.dat file.

That doesn't make sense. If only txleveldb/* was corrupt, then rebuilding it would fix the corruption (or at least reject the corrupt block), surely? I restarted the client several times, including at least one -rescan and -reindex, and I think I may have also tried -salvagewallet. My command history has been rotated out, so I can't say for sure.

Quote:
I just tried running the Windows binary of v1.4.12 but didn't have much luck. I put the bootstrap.dat file in place and it started syncing, but then this happens after it has loaded a few blocks: Is that something that happens a lot on Windows still? It's part of why I stopped using Windows in the first place.

Did you check debug.log to see what happened immediately before the crash?

Quote:
I'm surprised to see that the processes are CPU bound:

I run several *coin daemons, and it's quite normal to see high CPU load when they're syncing a large number of blocks.
chilly2k
Legendary
Offline
Activity: 1007
Merit: 1000
June 26, 2015, 01:03:18 PM
Quote:
Along these lines, I seem to be getting a lot of clients connecting to me, that are not up to sync. They end up killing my bandwidth. I have to stop/start the client. Last time I checked it was 8 out of 40 connections were not even close to the current block height. I'm going to start writing down the connections and initial block height before I recycle, and see if there is a pattern. Otherwise is there a setting to limit the number of new connections that require syncing?

Quote:
I'm having problems with large syncs hogging my outbound bandwidth too, and now I see that my IP is now listed as an addnode in the OP of this thread. It should be possible to add code to drop the connection if the connecting peer reports a height well below yours, but that's a kind of rude thing to do in a peer to peer network. I doubt the official client would ever support anything like this. Perhaps a reasonable compromise would be an option to send at most X (older only?) blocks to a new peer, or perhaps some sort of outbound bandwidth rate limiting.

I have no problem with the new clients syncing; I'm often on the other side of the equation. It would be nice if the client kept track of nodes that were syncing and only allowed X number to be syncing at the same time; once a client gets up to speed, allow another to join. I make this problem even worse because I run a daemon on Linux for staking, and the full client on Windows to see what's going on, so I can end up with a lot of new connections. I was testing out maxconnections, and noticed I had 6 connections and 4 of those were still syncing. I think as the price of clams climbs, the number of new clients will grow too: people getting the wallet to see if they have unclaimed clams.
presstab
Legendary
Offline
Activity: 1330
Merit: 1000
Blockchain Developer
June 26, 2015, 02:54:22 PM
Quote:
Along these lines, I seem to be getting a lot of clients connecting to me, that are not up to sync. They end up killing my bandwidth. I have to stop/start the client. Last time I checked it was 8 out of 40 connections were not even close to the current block height. I'm going to start writing down the connections and initial block height before I recycle, and see if there is a pattern. Otherwise is there a setting to limit the number of new connections that require syncing?

Quote:
I'm having problems with large syncs hogging my outbound bandwidth too, and now I see that my IP is now listed as an addnode in the OP of this thread. It should be possible to add code to drop the connection if the connecting peer reports a height well below yours, but that's a kind of rude thing to do in a peer to peer network. I doubt the official client would ever support anything like this. Perhaps a reasonable compromise would be an option to send at most X (older only?) blocks to a new peer, or perhaps some sort of outbound bandwidth rate limiting.

Quote:
I have no problem with the new clients syncing, I'm often on the other side of the equation. It would be nice if the client kept track of nodes that were syncing and only allow X number to be syncing at the same time. Once a client gets up to speed allow another to join. I make this problem even worse because I run a daemon on linux for staking, and the full client on windows to see whats going on. So I can end up with a lot of new connections. I was testing out maxconnections, and noticed I had 6 connections and 4 of those were still syncing. I think as the price of clams climbs, the number of new clients will grow too. People getting the wallet to see if they have unclaimed clams.

One option is to start up your Windows wallet with 'connect=yourlinuxdaemonipaddress', which should make it so you are only connected to your daemon. If you don't mind your daemon allowing peers to sync, then this would at least solve part of the problem. I added a bit of code to HYP a while back that cut off nodes requesting the same thing over and over. That was happening due to a fork, with the other nodes on the wrong client, but I don't see why the code couldn't be modified to say something like "if this node sends getblocks 100 times, ignore it for the next 60 minutes". The danger of something like this is that new nodes would have a harder time syncing, which is why it would need to be an obscure RPC command to activate it.
dooglus
Legendary
Offline
Activity: 2940
Merit: 1333
June 26, 2015, 04:53:41 PM
Quote:
Did you check debug.log to see what happened immediately before the crash?

I didn't, but that's a good idea. I just ran the QT client 3 times, and here's the end of the debug.log after each of the 3 crashes:

ProcessBlock: ACCEPTED
Added time data, samples 3, offset +1 (+0 minutes)
SetBestChain: new best=00000d363a91ba2ef20a15fb80d0e49bdbb091a0897feed6bb4e4c23cb327052 height=3718 trust=5295569433 blocktrust=1048596 date=05/16/14 04:23:21
ProcessBlock: ACCEPTED
SetBestChain: new best=000001a56178f4f4402475b66974cd5fc8284277944f9719e15cccdab0b0d801 height=3719 trust=5296618010 blocktrust=1048577 date=05/16/14 04:23:27
ProcessBlock: ACCEPTED
receive version message: version 60014, blocks=527206, us=...:..., them=...:31174, peer=...:31174
socket recv error 10054

--

SetBestChain: new best=00000ea6c282e448eda289f52f0d7a2e513300df012ef0cd8b077253309672a5 height=4008 trust=5601150626 blocktrust=1048577 date=05/16/14 06:06:01
ProcessBlock: ACCEPTED
SetBestChain: new best=00000e30f6a0669b74c7dff2a137df42ca5adcab8def589be6d6c4094139067c height=4009 trust=5602205763 blocktrust=1055137 date=05/16/14 06:06:15
ProcessBlock: ACCEPTED
SetBestChain: new best=00000a46eaf6f75c9423f2865837a970f5ca4d9f44cc3183dd10ec648d21e1f2 height=4010 trust=5603254340 blocktrust=1048577 date=05/16/14 06:06:26
ProcessBlock: ACCEPTED

--

SetBestChain: new best=00000ae65f90e6d818c1957e7df489958f482a4b5707dfd3628e0ba58bf0f045 height=4741 trust=6374326490 blocktrust=1048577 date=05/16/14 08:22:23
ProcessBlock: ACCEPTED
SetBestChain: new best=0000080f2216d4c6dc3c7f5fbe89808d1cc108ec0c0d52eb8e4ac76591b49aa1 height=4742 trust=6375375067 blocktrust=1048577 date=05/16/14 08:22:25
ProcessBlock: ACCEPTED
SetBestChain: new best=00000578f2016b4f36486c6b157fc0c9d2849a34739b45bb96f7d31b502fe211 height=4743 trust=6376430204 blocktrust=1055137 date=05/16/14 08:22:39
ProcessBlock: ACCEPTED
dooglus
Legendary
Offline
Activity: 2940
Merit: 1333
June 26, 2015, 05:02:05 PM
Quote:
One thing that just popped into my head. Wouldn't a -rescan reject the previous corrupt block? That's the first thing I tried after a couple of restarts.

Quote:
If the blk0001.dat file was OK, but the txleveldb/ files contained some corruption then -rescan wouldn't see the corruption. As I understand it, -rescan recreates the txleveldb/ files from the blk0001.dat file.

Quote:
That doesn't make sense. If only txleveldb/* was corrupt, then rebuilding it would fix the corruption (or at least reject the corrupt block), surely? I restarted the client several times, including at least one -rescan and -reindex, and I think I may have also tried -salvagewallet. My command history has been rotated out, so I can't see for sure.

So I think I'm just confused now. You asked whether -rescan would reject the previous corrupt block; I said it would fix the corruption if the problem was only in the db and not in the blk file. You seem to agree that rebuilding will fix the corruption, but then say that you tried rebuilding and it didn't? We're probably just misunderstanding each other, and you don't have the corrupted blockchain any more anyway, so it doesn't really matter...
dooglus
Legendary
Offline
Activity: 2940
Merit: 1333
June 26, 2015, 05:17:21 PM (Last edit: June 26, 2015, 09:07:34 PM by dooglus)
I regularly update this old post with full dumps of the CLAM blockchain in bootstrap.dat format to make it easier for people to get their CLAM client synced up.

Last night in the Just-Dice chat, Thirdspace asked about the possibility of generating and syncing from "partial" bootstrap.dat files. Suppose your client went out of sync a few weeks ago and is 20,000 blocks behind. You don't want to download and import the whole bootstrap.dat file because you already have 95% of it. Is there some way of having someone make a bootstrap.dat file with just the blocks you need?

Well, the bootstrap file format is very simple: it's pretty much just the raw block data, and there's no reason it has to start at block 0. So I made a commit to the CLAM client repository allowing you to run 'dumpbootstrap' and tell it which block to start the dump from. And since I had a bunch of .clam folders ready to go, I tested it:

$ cc1 getblockcount
528105
$ cc2 getblockcount
527194
$ cc1 dumpbootstrap /tmp 527200 527190
$ cp /tmp/bootstrap.dat ~/.clam.bs2/
$ cc2 stop
Clam server stopping
$ cc2
Clam server starting
$ cc2 getblockcount
527200

i.e. I have two clamd processes running: one is up to date and one is behind (and not connected to any peers). I create an 11-block bootstrap from the up-to-date clamd, copy it into the out-of-date clamd's data folder, restart, and see that it has successfully imported just those blocks. The 11-block bootstrap file was just 10642 bytes long.

So I'll publish 10k-block bootstrap files along with the big one. If you're 30k blocks behind, you can simply download the last three, append them all together in order (the format is simple enough that you can stick them end to end without breaking anything) and restart the client. See the old bootstrap.dat post for details of how to find the 10k block bootstrap files.

Edit: presstab replied, then apparently deleted his reply. He made the point that there's no harm in using a full bootstrap.dat file, because it will quickly skip the blocks you already have, and the only advantage of partial bootstrap files is that they're smaller.
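Since the format is just raw block data laid end to end, "appending them all together in order" is literally a single cat in height order. A minimal simulation, with placeholder files and contents standing in for real dumpbootstrap output:

```shell
# Simulate appending partial bootstrap files in height order
# (file names and contents are placeholders for real dumpbootstrap output).
workdir=$(mktemp -d)
printf 'blocks-500000-509999,' > "$workdir/bootstrap.500000.dat"
printf 'blocks-510000-519999,' > "$workdir/bootstrap.510000.dat"
printf 'blocks-520000-529999,' > "$workdir/bootstrap.520000.dat"
# lexicographic glob order matches height order for these names
cat "$workdir"/bootstrap.5*.dat > "$workdir/bootstrap.dat"
cat "$workdir/bootstrap.dat"
```

With real files, the combined bootstrap.dat would then be dropped into the client's data folder before restarting, exactly as in the transcript above.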
I was curious to know how long it takes to skip through a bootstrap file you've already imported, so I tried it:

2015-06-26 20:29:38 ERROR: ProcessBlock() : already have block 0 00000c3ce6b3d823a35224a39798eca9ad889966aeb5a9da7b960ffb9869db35
2015-06-26 20:29:56 ERROR: ProcessBlock() : already have block 10000 00000de398b1ec72c393c5c54574a1e1784eb178d683e1ad0856c12fac34f603
2015-06-26 20:30:09 ERROR: ProcessBlock() : already have block 20000 e83f9c8d6f07222274e4a7105437ac2d297455f6b19f77766e8c528356283677
2015-06-26 20:30:22 ERROR: ProcessBlock() : already have block 30000 9c2c63f73c2134642bfe03dc4d6a53474f9f4d92395584d80b47dda52d1b37c0
2015-06-26 20:30:36 ERROR: ProcessBlock() : already have block 40000 7de664887740f7c3cb3ae194f911371a45e3ca2b022066a2da297d6c0365fa3b
2015-06-26 20:30:49 ERROR: ProcessBlock() : already have block 50000 5de01a261d21fbe84d75f9fbc62e6f09d2ec2d560a8a811e80c9bef2272c365b
2015-06-26 20:31:02 ERROR: ProcessBlock() : already have block 60000 d33bac1acd3cc8755011b982821e6c466cd6dbb7c3a0ad773e823d9fe32e12e2
2015-06-26 20:31:16 ERROR: ProcessBlock() : already have block 70000 4abb43132303df920a01fbea577ee36ba9f008133b59fb1ca98b20d7a8e7c49b
2015-06-26 20:31:29 ERROR: ProcessBlock() : already have block 80000 cd214e98efce27a727f1d916d6ef4219fc7c7c1044b4a244c7c556003171a783
2015-06-26 20:31:42 ERROR: ProcessBlock() : already have block 90000 ebb678b09629ae0657500521b73aafd4fa66f441458a14c70380c0ca05eccfbb
2015-06-26 20:31:56 ERROR: ProcessBlock() : already have block 100000 41148b9796e65ddbefea175f6372b2448fc2f6b22b66da64fc3a15d29c8ed843
2015-06-26 20:32:09 ERROR: ProcessBlock() : already have block 110000 26f975554ddc28ccac35b0dd831fa4e50178cfaca9c1426d262d1a2636510287
2015-06-26 20:32:23 ERROR: ProcessBlock() : already have block 120000 3d61e782261ea8e8636baa54f5436ac46780def46faf37a4a4f3d9f0459e1e56
2015-06-26 20:32:36 ERROR: ProcessBlock() : already have block 130000 d5dd5c72d35cb1d09a5551e8f3ef108bd33a798797100010886d63d3318f8608
2015-06-26 20:32:49 ERROR: ProcessBlock() : already have block 140000 f02386088eeeee7f88a9d5c1173550c24675936fdfc1354920e95022f4b3ec9e
2015-06-26 20:33:03 ERROR: ProcessBlock() : already have block 150000 d4ae18d81d6e4dc58aeab94c37cab4479db579ac7a735f6a214f982898dc1bbd
2015-06-26 20:33:16 ERROR: ProcessBlock() : already have block 160000 d8da7a1012e115a917baaaa55c8881cad1e38a40711b26b4db0430b08f60203d
2015-06-26 20:33:29 ERROR: ProcessBlock() : already have block 170000 28e44f6c55de5a7bb5ffb81b25038b97f9009a751802d34311198ed0b6ce17bb
2015-06-26 20:33:43 ERROR: ProcessBlock() : already have block 180000 c6f6d4c917a07fbdb778e752ad54d99a6b0f8f9e93bfc70a1ec4706ee1e4d21d
2015-06-26 20:33:56 ERROR: ProcessBlock() : already have block 190000 e9039de1956ab6c6c4556ba4b9687238c8c8bfcb5122aed7398e09e2dfca0783
2015-06-26 20:34:09 ERROR: ProcessBlock() : already have block 200000 5d1d35993e9867d1feb906030bb611de87f4d8ff48b9228044ef892cc87a7916
2015-06-26 20:34:15 ERROR: ProcessBlock() : already have block 210000 077e174e1452dcd93b39a318328bb001ef269d0789d22bfe9bfb260220f8f34f
2015-06-26 20:34:17 ERROR: ProcessBlock() : already have block 220000 5e5ceccff284fb41b4cb86b3e105c820517d320bc35f0d860fef55567cc89742
2015-06-26 20:34:18 ERROR: ProcessBlock() : already have block 230000 694757c71807298f685a14be01d787bde0f30b358a84cf34d10682368f411d7d
2015-06-26 20:34:19 ERROR: ProcessBlock() : already have block 240000 745ffb8c625ccde5b1255a4d17bd7ad44f2726eccfe022385c4e0960b6e2f38f
2015-06-26 20:34:22 ERROR: ProcessBlock() : already have block 250000 b560c121438f630401c102767587b70cb0cc7d1e0c09114dd0b91455262aa64c
2015-06-26 20:34:23 ERROR: ProcessBlock() : already have block 260000 b607a404deff7620e182a47e04ff0abd5ba3883c0b1a3b1bed011b6efdf3f854
2015-06-26 20:34:25 ERROR: ProcessBlock() : already have block 270000 a86fe2bafcb03de4b81ee8d0069f1c50cf88e7c29fe42499f35501baa840f485
2015-06-26 20:34:26 ERROR: ProcessBlock() : already have block 280000 446536d0327978d29ebecba383c6fdfc641db6f79241149d7c4c0cc2ffb37b03
2015-06-26 20:34:28 ERROR: ProcessBlock() : already have block 290000 4b57107940c668008dd36cc79cb97e4e1a3a3a6ae68b1c21885de3389838c0bf
2015-06-26 20:34:30 ERROR: ProcessBlock() : already have block 300000 144de2a2169e1a98e0b121bfdd7cdee6192dba71c10cde65e785e39f00f05c2b
2015-06-26 20:34:32 ERROR: ProcessBlock() : already have block 310000 78229e2d02dd8a62b3429ef4aa6db5d80870cc7b576528621d213b01aeb14625
2015-06-26 20:34:34 ERROR: ProcessBlock() : already have block 320000 71832dd669399489b46c174d8b9a30ab15ae9cbbb5bd9d42355ce6e91dd93058
2015-06-26 20:34:36 ERROR: ProcessBlock() : already have block 330000 65488e8a49540893fc47eb2a46ce81d1e67fa264a89a2efc5dc51b8d0cc8cb16
2015-06-26 20:34:37 ERROR: ProcessBlock() : already have block 340000 cb56b08807ec3d4e0615f0c622edbb803afddbf3895b21081a325910af0ebf07
2015-06-26 20:34:39 ERROR: ProcessBlock() : already have block 350000 55cc97c7ea1159273b4c600d358a782afd2134960a0ff6163685a4254ef2b4ed
2015-06-26 20:34:42 ERROR: ProcessBlock() : already have block 360000 8d7b4acad066bde560de850c4f26153be9b20f705b59105337de32b85b0f1a87
2015-06-26 20:34:44 ERROR: ProcessBlock() : already have block 370000 f1c5288956ab526197a1b59f03e6678fe6575911c3f0199aaf678206bebc80cf
2015-06-26 20:34:46 ERROR: ProcessBlock() : already have block 380000 6504aeac0d9c232dc7617d7d87cb6e66f94c15264feffff37a28a186cf2c500f
2015-06-26 20:34:48 ERROR: ProcessBlock() : already have block 390000 7ef1f3be4f9aac3351e72df28851c1bd3d3d84b9f81c6ef896a1b46e42ef2d67
2015-06-26 20:34:51 ERROR: ProcessBlock() : already have block 400000 6ec2869889333270e1eb549bfe5d19b6423ad8b36a05807a71d2301accfadf0b
2015-06-26 20:34:53 ERROR: ProcessBlock() : already have block 410000 0d2ab2ac4e5de4736d0936099782003eff643a245481be430bd1068536220459
2015-06-26 20:34:54 ERROR: ProcessBlock() : already have block 420000 9b1f78bce2c9231e81dcd1fabfa5170f7e62de9d7b328e1e93bbb1c4891852f9
2015-06-26 20:34:57 ERROR: ProcessBlock() : already have block 430000 2c50e406397898689e2d22d75a2a9fea7b3d01439529296904cd81c6f14d3674
2015-06-26 20:34:59 ERROR: ProcessBlock() : already have block 440000 249a244d1fbecd83ac132b78c525b68df34747cefddfc9923f9e79eebe09ade9
2015-06-26 20:35:01 ERROR: ProcessBlock() : already have block 450000 63e383c62d7a2c2dc14b1d83f824dadad28f3eda91e45e6aa8ba57b8af96e07e
2015-06-26 20:35:03 ERROR: ProcessBlock() : already have block 460000 57981293949874d9d059c9dfa3cd34c08edd3a628f3e2250cea6a5d87bddaeb5
2015-06-26 20:35:05 ERROR: ProcessBlock() : already have block 470000 e1a62fa91ab92d771874530388c3ada97461796d7947e253519c8cf3323ea80b
2015-06-26 20:35:08 ERROR: ProcessBlock() : already have block 480000 39f9b9e801e37502fc512eb6dfbbc11a76d0c74c5e9ff6eff0c2d2826baff199
2015-06-26 20:35:10 ERROR: ProcessBlock() : already have block 490000 f95c855c2595540424e812897b7338e2e201ddedaee48c955326bce2effffcdd
2015-06-26 20:35:12 ERROR: ProcessBlock() : already have block 500000 af388da4175404ebac7be210e1ed092e4e283d167505db617f009d9bc56f42fc
2015-06-26 20:35:14 ERROR: ProcessBlock() : already have block 510000 2d02bfe474e62c90f5ef54a9014d00102397fa9afe6f7f8cd0794dc415ba147e
2015-06-26 20:35:15 ERROR: ProcessBlock() : already have block 520000 3488d2266f497a49b3ac09f98b36fff94b732fde21e597b0ca5b6ea8afaf66d3

It was taking 13 seconds per 10000 blocks up until around block 200k, and then it got much much faster. Why would that be? What happened around there?
I repeated the test to see if it would happen the same way, and it did:

2015-06-26 20:36:06 ERROR: ProcessBlock() : already have block 0 00000c3ce6b3d823a35224a39798eca9ad889966aeb5a9da7b960ffb9869db35
2015-06-26 20:36:23 ERROR: ProcessBlock() : already have block 10000 00000de398b1ec72c393c5c54574a1e1784eb178d683e1ad0856c12fac34f603
2015-06-26 20:36:37 ERROR: ProcessBlock() : already have block 20000 e83f9c8d6f07222274e4a7105437ac2d297455f6b19f77766e8c528356283677
2015-06-26 20:36:51 ERROR: ProcessBlock() : already have block 30000 9c2c63f73c2134642bfe03dc4d6a53474f9f4d92395584d80b47dda52d1b37c0
2015-06-26 20:37:04 ERROR: ProcessBlock() : already have block 40000 7de664887740f7c3cb3ae194f911371a45e3ca2b022066a2da297d6c0365fa3b
2015-06-26 20:37:17 ERROR: ProcessBlock() : already have block 50000 5de01a261d21fbe84d75f9fbc62e6f09d2ec2d560a8a811e80c9bef2272c365b
2015-06-26 20:37:31 ERROR: ProcessBlock() : already have block 60000 d33bac1acd3cc8755011b982821e6c466cd6dbb7c3a0ad773e823d9fe32e12e2
2015-06-26 20:37:45 ERROR: ProcessBlock() : already have block 70000 4abb43132303df920a01fbea577ee36ba9f008133b59fb1ca98b20d7a8e7c49b
2015-06-26 20:37:58 ERROR: ProcessBlock() : already have block 80000 cd214e98efce27a727f1d916d6ef4219fc7c7c1044b4a244c7c556003171a783
2015-06-26 20:38:12 ERROR: ProcessBlock() : already have block 90000 ebb678b09629ae0657500521b73aafd4fa66f441458a14c70380c0ca05eccfbb
2015-06-26 20:38:26 ERROR: ProcessBlock() : already have block 100000 41148b9796e65ddbefea175f6372b2448fc2f6b22b66da64fc3a15d29c8ed843
2015-06-26 20:38:39 ERROR: ProcessBlock() : already have block 110000 26f975554ddc28ccac35b0dd831fa4e50178cfaca9c1426d262d1a2636510287
2015-06-26 20:38:52 ERROR: ProcessBlock() : already have block 120000 3d61e782261ea8e8636baa54f5436ac46780def46faf37a4a4f3d9f0459e1e56
2015-06-26 20:39:06 ERROR: ProcessBlock() : already have block 130000 d5dd5c72d35cb1d09a5551e8f3ef108bd33a798797100010886d63d3318f8608
2015-06-26 20:39:19 ERROR: ProcessBlock() : already have block 140000 f02386088eeeee7f88a9d5c1173550c24675936fdfc1354920e95022f4b3ec9e
2015-06-26 20:39:33 ERROR: ProcessBlock() : already have block 150000 d4ae18d81d6e4dc58aeab94c37cab4479db579ac7a735f6a214f982898dc1bbd
2015-06-26 20:39:47 ERROR: ProcessBlock() : already have block 160000 d8da7a1012e115a917baaaa55c8881cad1e38a40711b26b4db0430b08f60203d
2015-06-26 20:40:00 ERROR: ProcessBlock() : already have block 170000 28e44f6c55de5a7bb5ffb81b25038b97f9009a751802d34311198ed0b6ce17bb
2015-06-26 20:40:14 ERROR: ProcessBlock() : already have block 180000 c6f6d4c917a07fbdb778e752ad54d99a6b0f8f9e93bfc70a1ec4706ee1e4d21d
2015-06-26 20:40:27 ERROR: ProcessBlock() : already have block 190000 e9039de1956ab6c6c4556ba4b9687238c8c8bfcb5122aed7398e09e2dfca0783
2015-06-26 20:40:41 ERROR: ProcessBlock() : already have block 200000 5d1d35993e9867d1feb906030bb611de87f4d8ff48b9228044ef892cc87a7916
2015-06-26 20:40:46 ERROR: ProcessBlock() : already have block 210000 077e174e1452dcd93b39a318328bb001ef269d0789d22bfe9bfb260220f8f34f
2015-06-26 20:40:48 ERROR: ProcessBlock() : already have block 220000 5e5ceccff284fb41b4cb86b3e105c820517d320bc35f0d860fef55567cc89742
2015-06-26 20:40:49 ERROR: ProcessBlock() : already have block 230000 694757c71807298f685a14be01d787bde0f30b358a84cf34d10682368f411d7d
2015-06-26 20:40:51 ERROR: ProcessBlock() : already have block 240000 745ffb8c625ccde5b1255a4d17bd7ad44f2726eccfe022385c4e0960b6e2f38f
2015-06-26 20:40:53 ERROR: ProcessBlock() : already have block 250000 b560c121438f630401c102767587b70cb0cc7d1e0c09114dd0b91455262aa64c
2015-06-26 20:40:55 ERROR: ProcessBlock() : already have block 260000 b607a404deff7620e182a47e04ff0abd5ba3883c0b1a3b1bed011b6efdf3f854
2015-06-26 20:40:56 ERROR: ProcessBlock() : already have block 270000 a86fe2bafcb03de4b81ee8d0069f1c50cf88e7c29fe42499f35501baa840f485
2015-06-26 20:40:58 ERROR: ProcessBlock() : already have block 280000 446536d0327978d29ebecba383c6fdfc641db6f79241149d7c4c0cc2ffb37b03
2015-06-26 20:40:59 ERROR: ProcessBlock() : already have block 290000 4b57107940c668008dd36cc79cb97e4e1a3a3a6ae68b1c21885de3389838c0bf
2015-06-26 20:41:01 ERROR: ProcessBlock() : already have block 300000 144de2a2169e1a98e0b121bfdd7cdee6192dba71c10cde65e785e39f00f05c2b
2015-06-26 20:41:03 ERROR: ProcessBlock() : already have block 310000 78229e2d02dd8a62b3429ef4aa6db5d80870cc7b576528621d213b01aeb14625
2015-06-26 20:41:04 ERROR: ProcessBlock() : already have block 320000 71832dd669399489b46c174d8b9a30ab15ae9cbbb5bd9d42355ce6e91dd93058
2015-06-26 20:41:06 ERROR: ProcessBlock() : already have block 330000 65488e8a49540893fc47eb2a46ce81d1e67fa264a89a2efc5dc51b8d0cc8cb16
2015-06-26 20:41:08 ERROR: ProcessBlock() : already have block 340000 cb56b08807ec3d4e0615f0c622edbb803afddbf3895b21081a325910af0ebf07
2015-06-26 20:41:10 ERROR: ProcessBlock() : already have block 350000 55cc97c7ea1159273b4c600d358a782afd2134960a0ff6163685a4254ef2b4ed
2015-06-26 20:41:12 ERROR: ProcessBlock() : already have block 360000 8d7b4acad066bde560de850c4f26153be9b20f705b59105337de32b85b0f1a87
2015-06-26 20:41:14 ERROR: ProcessBlock() : already have block 370000 f1c5288956ab526197a1b59f03e6678fe6575911c3f0199aaf678206bebc80cf
2015-06-26 20:41:16 ERROR: ProcessBlock() : already have block 380000 6504aeac0d9c232dc7617d7d87cb6e66f94c15264feffff37a28a186cf2c500f
2015-06-26 20:41:18 ERROR: ProcessBlock() : already have block 390000 7ef1f3be4f9aac3351e72df28851c1bd3d3d84b9f81c6ef896a1b46e42ef2d67
2015-06-26 20:41:20 ERROR: ProcessBlock() : already have block 400000 6ec2869889333270e1eb549bfe5d19b6423ad8b36a05807a71d2301accfadf0b
2015-06-26 20:41:22 ERROR: ProcessBlock() : already have block 410000 0d2ab2ac4e5de4736d0936099782003eff643a245481be430bd1068536220459
2015-06-26 20:41:24 ERROR: ProcessBlock() : already have block 420000 9b1f78bce2c9231e81dcd1fabfa5170f7e62de9d7b328e1e93bbb1c4891852f9
2015-06-26 20:41:27 ERROR: ProcessBlock() : already have block 430000 2c50e406397898689e2d22d75a2a9fea7b3d01439529296904cd81c6f14d3674
2015-06-26 20:41:29 ERROR: ProcessBlock() : already have block 440000 249a244d1fbecd83ac132b78c525b68df34747cefddfc9923f9e79eebe09ade9
2015-06-26 20:41:31 ERROR: ProcessBlock() : already have block 450000 63e383c62d7a2c2dc14b1d83f824dadad28f3eda91e45e6aa8ba57b8af96e07e
2015-06-26 20:41:33 ERROR: ProcessBlock() : already have block 460000 57981293949874d9d059c9dfa3cd34c08edd3a628f3e2250cea6a5d87bddaeb5
2015-06-26 20:41:35 ERROR: ProcessBlock() : already have block 470000 e1a62fa91ab92d771874530388c3ada97461796d7947e253519c8cf3323ea80b
2015-06-26 20:41:37 ERROR: ProcessBlock() : already have block 480000 39f9b9e801e37502fc512eb6dfbbc11a76d0c74c5e9ff6eff0c2d2826baff199
2015-06-26 20:41:39 ERROR: ProcessBlock() : already have block 490000 f95c855c2595540424e812897b7338e2e201ddedaee48c955326bce2effffcdd
2015-06-26 20:41:41 ERROR: ProcessBlock() : already have block 500000 af388da4175404ebac7be210e1ed092e4e283d167505db617f009d9bc56f42fc
2015-06-26 20:41:42 ERROR: ProcessBlock() : already have block 510000 2d02bfe474e62c90f5ef54a9014d00102397fa9afe6f7f8cd0794dc415ba147e
2015-06-26 20:41:44 ERROR: ProcessBlock() : already have block 520000 3488d2266f497a49b3ac09f98b36fff94b732fde21e597b0ca5b6ea8afaf66d3

tldr: it takes about 6 minutes, but skips the 2nd half much faster than the 1st half.

Edit: it's not because the blocks are smaller that they are skipped more quickly.
If anything they're bigger:

113973730 bootstrap-000.dat
4773168 bootstrap-001.dat
4493088 bootstrap-002.dat
5541516 bootstrap-003.dat
4872122 bootstrap-004.dat
4408922 bootstrap-005.dat
7284141 bootstrap-006.dat
4637522 bootstrap-007.dat
4392465 bootstrap-008.dat
4438838 bootstrap-009.dat
4258087 bootstrap-010.dat
4304484 bootstrap-011.dat
4272753 bootstrap-012.dat
4283469 bootstrap-013.dat
4584873 bootstrap-014.dat
4310057 bootstrap-015.dat
4652678 bootstrap-016.dat
4416905 bootstrap-017.dat
4238363 bootstrap-018.dat
4236413 bootstrap-019.dat
4355254 bootstrap-020.dat
5830952 bootstrap-021.dat
4930082 bootstrap-022.dat
4854884 bootstrap-023.dat
17798965 bootstrap-024.dat
7117526 bootstrap-025.dat
6036061 bootstrap-026.dat
7176316 bootstrap-027.dat
11303056 bootstrap-028.dat
9329719 bootstrap-029.dat
7952169 bootstrap-030.dat
7027990 bootstrap-031.dat
8260798 bootstrap-032.dat
7720261 bootstrap-033.dat
9812284 bootstrap-034.dat
17565304 bootstrap-035.dat
12039460 bootstrap-036.dat
12940822 bootstrap-037.dat
14238319 bootstrap-038.dat
14080726 bootstrap-039.dat
13556671 bootstrap-040.dat
11211604 bootstrap-041.dat
23640962 bootstrap-042.dat
10525737 bootstrap-043.dat
10998702 bootstrap-044.dat
10741589 bootstrap-045.dat
15336451 bootstrap-046.dat
15650616 bootstrap-047.dat
10236644 bootstrap-048.dat
10454751 bootstrap-049.dat
8864684 bootstrap-050.dat
8427256 bootstrap-051.dat

Maybe this is a clue?

inline bool IsProtocolV2(int nHeight) { return nHeight > 203500; }

Edit: ok, case solved:

uint256 GetHash() const
{
    if (nVersion > 6)
        return Hash(BEGIN(nVersion), END(nNonce));
    else
        return GetPoWHash();
}

uint256 GetPoWHash() const
{
    return scrypt_blockhash(CVOIDBEGIN(nVersion));
}

To tell whether we already have a block we find in a bootstrap file, we hash it and see if we have a block with that hash. Before block 203501, blocks were hashed using scrypt, which is designed to be slow. After block 203500 we switched to sha256, which is fast. Notice how blocks 0 and 10000 have hashes that start with a lot of zeroes - that's because they were proof-of-work blocks.

$ for i in bootstrap-0[12]?.dat; do echo $i $(od -An -t x1 $i | head -1 | awk '{print $9}'); done
bootstrap-010.dat 06
bootstrap-011.dat 06
bootstrap-012.dat 06
bootstrap-013.dat 06
bootstrap-014.dat 06
bootstrap-015.dat 06
bootstrap-016.dat 06
bootstrap-017.dat 06
bootstrap-018.dat 06
bootstrap-019.dat 06
bootstrap-020.dat 06
bootstrap-021.dat 07
bootstrap-022.dat 07
bootstrap-023.dat 07
bootstrap-024.dat 07
bootstrap-025.dat 07
bootstrap-026.dat 07
bootstrap-027.dat 07
bootstrap-028.dat 07
bootstrap-029.dat 07
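For a feel of the size of that speed gap, here is a small standalone benchmark. This is not CLAM code: the scrypt parameters (n=1024, r=1, p=1) are the common "scrypt coin" settings and are an assumption on my part, as is the random stand-in header.

```python
import hashlib
import os
import time

header = os.urandom(80)  # stand-in for an 80-byte serialized block header

# Double-sha256, as used for block hashes after the protocol switch.
t0 = time.perf_counter()
for _ in range(1000):
    hashlib.sha256(hashlib.sha256(header).digest()).digest()
sha_time = time.perf_counter() - t0

# scrypt, as used for hashing the early proof-of-work blocks.
# n=1024, r=1, p=1 are the usual "scrypt coin" parameters; CLAM's
# exact parameters may differ.
t0 = time.perf_counter()
for _ in range(1000):
    hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)
scrypt_time = time.perf_counter() - t0

print(f"sha256d: {sha_time:.3f}s  scrypt: {scrypt_time:.3f}s")
```

Needs Python 3.6+ built against OpenSSL with scrypt support. On these assumed parameters, scrypt comes out orders of magnitude slower per hash than double-sha256, which matches the timings in the log above.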
|
Just-Dice | ██ ██████████ ██████████████████ ██████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████ ██████████████ ██████ | Play or Invest | ██ ██████████ ██████████████████ ██████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████████████ ██████████████████████ ██████████████ ██████ | 1% House Edge |
|
|
|
SuperClam (OP)
|
|
June 27, 2015, 09:00:28 AM |
|
... To tell whether we already have a block we find in a bootstrap file, we hash it and see if we have a block with that hash. Before block 203501 blocks were hashed using scrypt, which is designed to be slow. After block 203500 we switched to sha256 which is fast. Notice how blocks 0 and 10000 have hashes that start with a lot of zeroes - that's because they were proof-of-work blocks. ...
Very interesting to see the efficiency gain of switching over to sha256 in practice. The change made sense on its face, but seeing it in action adds a certain pleasure to it.
|
|
|
|
Trent Russell
Full Member
Offline
Activity: 132
Merit: 100
willmathforcrypto.com
|
|
June 27, 2015, 08:45:08 PM |
|
I wrote a short paper on an idea about how multisig can be used to trade risk. I used clams (in particular, clamSpeech was important) to do the examples. Here's the pdf: http://willmathforcrypto.com/multisigderivs.pdfHere's a post about it: https://bitcointalk.org/index.php?topic=1102062As a side note, the first example was boring because the price early last week was too stable. Later it got more fun. I think Alice really regrets shorting clams on Wednesday. (That's only a joke about the price spike. I was both Alice and Bob in the examples, so there were no real losses.)
|
|
|
|
Bitcoiner2015
|
|
June 29, 2015, 03:54:32 PM |
|
I have a few noob questions. I have my clams in my wallet and they are staking, but even though I have quite a few, it is only expected to stake every 49710 days! How can that be? I moved them to my wallet on Saturday, but the age hasn't increased. They are still staking at 1 per CLAM. Does the age only increase when I have the client open? Can they only stake when it is open?
Does it matter that I use 1.4.3.0? The newer versions want to re-index, so I am not in a hurry to update!
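A possible explanation for that oddly specific number (my guess; I haven't confirmed it against the CLAM source): 49710 days is almost exactly 2^32 seconds, so the "expected time to stake" estimate looks like it has saturated at a 32-bit cap, which would mean the client currently sees your effective stake weight as close to zero.

```python
# 49710 days is (to within a fraction of a day) 2^32 seconds,
# which smells like a saturated 32-bit "seconds until stake" value.
seconds = 2 ** 32
days = seconds / 86400
print(days)
```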
|
|
|
|
tspacepilot
Legendary
Offline
Activity: 1456
Merit: 1081
I may write code in exchange for bitcoins.
|
|
June 29, 2015, 04:40:40 PM |
|
I have a few noob questions. I have my clams in my wallet, they are staking but even though I have quite a few, it is only expected to stake every 49710 days! how can that be? I moved them to my wallet on Saturday, but the age hasn't increased. They are still staking at 1 per CLAM, does the age only increase when I have the client open? Can they only stake when it is open?
Does it matter that I use 1.4.3.0? The new ones want to re-index, so I am not in a hurry to update!
I'm pretty sure that in all POS coins you can only stake while your wallet is open. If you're not connected, you're not in the race. I'm not an expert (and many others in this thread actually are), but if I understand correctly, POS is a lot like POW in that you are racing against others to find a hash that fulfills certain criteria.
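To make that concrete, here is a heavily simplified sketch of the kind of check a POS wallet runs while it is open. The function names, hashing scheme, and fields below are illustrative only, not CLAM's actual kernel code:

```python
import hashlib
import struct

def stake_hash(stake_modifier: bytes, utxo_id: bytes, timestamp: int) -> int:
    """Hash a staking output together with a modifier and the current time."""
    data = stake_modifier + utxo_id + struct.pack("<I", timestamp)
    return int.from_bytes(hashlib.sha256(data).digest(), "little")

def can_stake(stake_modifier: bytes, utxo_id: bytes, timestamp: int,
              target: int, coin_weight: int) -> bool:
    # Larger stakes get a proportionally easier target, so expected
    # time-to-stake shrinks as your balance (weight) grows.
    return stake_hash(stake_modifier, utxo_id, timestamp) < target * coin_weight

# While the wallet is open it gets one attempt per second per staking
# output; a closed wallet never runs this loop, so it can never stake.
```

The "race" is this per-second comparison: as in POW you are looking for a hash under a target, but your tries are rationed by time and weighted by coins rather than by hash power.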
|
|
|
|
|