Bitcoin Forum
June 17, 2021, 08:57:38 AM *
News: Latest Bitcoin Core release: 0.21.1 [Torrent]
  Show Posts
1  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 31, 2021, 05:41:24 PM
The racing problems: I'm catching them by building with go build -race and then reorging blocks and loading wallets at the same time.

There is also a mistake at line 231 in merkle.go: a commits_mutex.RUnlock() is needed there.

That's all. I've also tested the nested comb trades. They work properly.

I'm OK with the final version; it can be released (if there is nothing else) -_-
2  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 22, 2021, 08:57:05 AM
Here are changes for fixing the racing problems.

1. segmentmerkle.go: remove all uses of segments_merkle_mutex RLock() and
RUnlock(); top-level callers will have to take that lock themselves

code omitted

2. anonminize.go: take the read locks (RLock/RUnlock) of both mutexes around segments_coinbase_backgraph:
for combbase := range bases {

segments_coinbase_backgraph(backgraph, make(map[[32]byte]struct{}), target, combbase)


3. loopdetect.go: take the read locks of both mutexes in the loopdetect() function:
func loopdetect(norecursion, loopkiller map[[32]byte]struct{}, to [32]byte) (b bool) {

var type3 = segments_stack_type(to)
b = segments_stack_loopdetect(norecursion, loopkiller, to)
var type2 = segments_merkle_type(to)
b = segments_merkle_loopdetect(norecursion, loopkiller, to)
var type1 = segments_transaction_type(to)
b = segments_transaction_loopdetect(norecursion, loopkiller, to)
} else if type1 == SEGMENT_ANY_UNTRICKLED {
} else if type1 == SEGMENT_UNKNOWN {
return b

4. merkle.go: the commits mutex MUST be held when calling merkle_scan_one_leg_activity() here:
var allright1 = merkle_scan_one_leg_activity(q1)
var allright2 = merkle_scan_one_leg_activity(q2)

if allright1 && allright2 {
reactivate_txid(false, true, tx)

return true, e[0]
and take the read locks of both mutexes here:

if newactivity {
if old, ok1 := e0_to_e1[e[0]]; ok1 && old != e[1] {

fmt.Println("Panic: e0 to e1 already have live path")

e0_to_e1[e[0]] = e[1]


var maybecoinbase = commit(e[0][0:])
if _, ok1 := combbases[maybecoinbase]; ok1 {
segments_coinbase_trickle_auto(maybecoinbase, e[0])

segments_merkle_trickle(make(map[[32]byte]struct{}), e[0])


5. mine.go add write locking:

        if *tx == (*txidto)[0] {
                segments_transaction_next[actuallyfrom] = *txidto
                return false

change 2: add top-level locking:


var maybecoinbase = commit(actuallyfrom[0:])
if _, ok1 := combbases[maybecoinbase]; ok1 {
segments_coinbase_trickle_auto(maybecoinbase, actuallyfrom)

segments_transaction_trickle(make(map[[32]byte]struct{}), actuallyfrom)


change 3:


var val = segments_transaction_data[*tx][i]


change 4:

if oldactivity == 2097151 {
var actuallyfrom = segments_transaction_data[*tx][21]

segments_transaction_untrickle(nil, actuallyfrom, 0xffffffffffffffff)

delete(segments_transaction_next, actuallyfrom)


6. stack.go surround stack trickle with mutexes:


segments_stack_trickle(make(map[[32]byte]struct{}), hash)


7. txrecv.go tx_receive_transaction_internal(), take both mutexes:



3  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 19, 2021, 09:03:30 PM
Ah, the thread thing.

Sorry for not having an exact solution you can apply right now.

It's caused by the segments_transaction_mutex and
segments_merkle_mutex mutexes. They mainly protect the maps that tell you
where money should go from a transaction address: for a merkle transaction (comb trade)
the map is named e0_to_e1, and for a haircomb transaction the map is named
segments_transaction_next. The key in both maps is some kind of used (spent) address,
and the value is the next address where all that money should move.
Actually segments_transaction_next contains the txid too.

Because the logic is the same, the locking should be the same, but it isn't; that's the bug.
Look at segmenttx.go and segmentmerkle.go - that is the money-trickling code.

There are two avenues to fix this. One would be to simply surround each read from the map(s)
with RLock/RUnlock.
That option sounds right for consensus-non-critical paths.
This is easy.

The other is to NOT surround each read from the map with RLock/RUnlock, but instead
guard the whole invocation of the trickling at the highest level.
That sounds right for consensus-critical paths like balance calculation: it prevents
somebody from adding new transactions while the money is still being propagated along the graph.
Once the dust settles, the queued new-transaction adding gets the green light.

Example from txrecv.go; there are other similar top-level places that call into the trickling code.
if newactivity == 2097151 {


segments_transaction_next[actuallyfrom] = txidandto



var maybecoinbase = commit(actuallyfrom[0:])
if _, ok1 := combbases[maybecoinbase]; ok1 {
// ...invoke coinbase trickling in case the haircomb was a coinbase
segments_coinbase_trickle_auto(maybecoinbase, actuallyfrom)

//..invoke cash trickling here:
segments_transaction_trickle(make(map[[32]byte]struct{}), actuallyfrom)


The reason why both read mutexes are taken is that a transaction can pay to a merkle
transaction, which could turn transaction money trickling into merkle tx money trickling.

4  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 15, 2021, 12:54:28 PM
I've been testing busily and found numerous problems. Let's start with the simpler ones.

used_key.go - used_key_add_new_minimal_commit_height - when deleting from the slices, l is always 31

instead of:

l := len(v) - 1

there needs to be the length of the actual slice:

l := len(used_height_commits[min_height]) - 1


l := len(used_commit_keys[min_commit]) - 1

Explanation: taking the length of v, which is a hash (always of size 32), is not the intention.

newminer.go - handle_reorg_direct - an off-by-one error in direct mode causes "holes" in the database

the fix is rather simple:

iter := commitsdb.NewIterator(&util.Range{Start: new_height_tag(uint64(target_height)+1), Limit: new_height_tag(uint64(height+1))}, nil)

newminer.go - miner - when reorging, we need to provide the height of the previous block (not of the reorged one) to become the topmost block

       // Flush
       var commit_fingerprint hash.Hash
       if dir == -1 {
               commit_fingerprint = miner_mine_commit_pulled("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
                       new_flush_utxotag(uint64(parsed_block.height)-1), 0)
       } else if dir == 1 {
               commit_fingerprint = miner_mine_commit_pulled("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
                       new_flush_utxotag(uint64(parsed_block.height)), 0)

Explanation: the reorged block's previous height should become the topmost via posttag() inside miner_mine_commit_internal(). If we don't do this,
the check "error: mined first commitment must be on greater height" will wrongly prevent the first block after a reorg from being mined in miner mode.

There are threading problems as well, which I will explain in a later post.

5  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 09, 2021, 11:48:00 AM
well I see; in my opinion repairing 1 block doesn't make sense, purely because:
a) a BTC node isn't guaranteed to be present
b) even if a BTC node is present, how do you know it won't send the wrong information again
c) even if it sends the right information, you will need to download all the blocks after that block
anyway; for example, if a previously unseen commitment abc...123 was fixed by being "removed" from block 500000 (because it isn't there), all
the 500000+ blocks need to be inspected to confirm that abc...123 doesn't appear there again, to "add it" (make it previously unseen in a later block).
d) so you can pretty much repair only errors that are fixed by "adding" commitments, and you will still need to fix up later blocks to remove it from them
e) all of this makes it pretty narrow in scope, as opposed to the node operator simply copying the "right" database over to the node from a known good backup.

Here is my final fix for the high severity problem. The only difference between the preliminary fix and this one is the removal of the commits_mutex locks+unlocks in the two cases (merkle_mine + merkle_unmine) where the mutex is already held and taking it again would just cause a deadlock.

6  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 08, 2021, 09:09:34 PM
I'm investigating a max severity crasher issue in the comb trades facility (merkle.go).

In the meantime, all users must cease using comb trades for any sort of commerce.

I have a preliminary fix, here:


It's a rather large overhaul of the facility.
7  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 08, 2021, 11:39:39 AM
Yeah, in case the local node is a full node, the sync would take nearly the same time.
In case the local node is a pruned node, some blocks would have to be pulled over the network; this sync would be slower, capped by the local internet speed.


☑ Successful / unsuccessful claiming works.
☑ Transaction works.
☑ Naive double spend is caught correctly.
☑ Liquidity stack loops working.
☑ Coin loop detection worked ok so far.

2 minor severity issues - both existing in the Natasha version too

Issue 1 - Brain wallet keys not added to the used key feature.

Problem: when creating a brain wallet using the HTTP call, the keys that get
generated aren't added to the used key feature. This means that once the keys become
used, the node is restarted, and the brain wallet is re-generated, it will not be
visible that the used key is already spent.

In function: wallet_generate_brain

Solution: copy the "if enable_used_key_feature" lines from the normal key generator
(key_load_data_internal) into wallet_generate_brain too.

Issue 2 - Used keys balance should be forced to be 0 COMB on the user interface on detected outgoing key spend

Problem: when a key is spent, but the transaction wasn't added to the node via the link, the node will keep
displaying the old balance - the old (nonzero) key balance will remain visible even after 1+ confirmations.

Discussion: this is not a consensus issue; if the user attempts a double spend using the nonzero balance,
the double spend will get caught and dropped normally.

In function: wallet_view

Solution: in the wallet printing loop, move the used_key_feature block above the printing of the wallet key+balance row. Inside the used_key_feature block,
set the balance (bal) to zero if an outgoing spend was recognized. Finally, print the outgoing spend row below the wallet key+balance row from temp variables.

While we're fixing this, we may also hide the pay button in case of a reorg, to save the user from accidentally double-spending.
Reference wallet.go
8  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 05, 2021, 05:32:10 PM
the change 2 minutes ago looks good

☑ default credentials are blank
☑ user warned to configure both programs with the same credentials when comms fails
☑ we'll find any bugs during beta testing

feel free to tag the beta!

9  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 05, 2021, 03:20:17 PM
I will ask you differently: do you want to take responsibility for angry users
who had their BTC wallets wiped clean by malware running on other computers on their network, because they
typed "user" & "pass" into their bitcoin.conf, just like your software recommended?

I also considered the solution in which we generate a random long user + pass on startup,
but it's worth shit because it changes on every startup and nobody will be arsed to change
it every time (in bitcoin.conf).

What we want is zero configuration. That will be possible with a middle man software, as follows:

1. user downloads Bitcoin and starts it
2. user downloads a middle man software that doesn't exist today but will later (once we build it)
3. user downloads haircomb core
4. user starts all 3 programs above
5. haircomb core starts syncing, no config needed.

this will be possible because:
1. haircomb core will request blocks on port 8332 (the RPC port)
2. the middle man SW that listens on port 8332 will redirect the requests (in the correct format) to port 8333, where Bitcoin
is already listening on the peer-to-peer network. This is true by default
on Bitcoin Core, without any configuration.
3. Bitcoin normally transmits the blocks to the middleman (in a special format, NOT in JSON)
4. the middleman transmits the required blocks to the comb core (in JSON)

What are the possible obstacles to this zero configuration?

1. Bitcoin already running on 8332. The middle man will then fail to run. The middle man program
will recommend deleting both the BTC and COMB config files.

2. The old version of comb (the one we are developing right now) will fail to run without a config file!! (this problem needs to be solved RN)

3. The old version of comb recommends the wrong thing when the config file has blank credentials or is not found. It must not fail with an error, and it must keep connecting to the middle man with blank credentials despite the credentials being blank.

4. Middleman software not being available (this will be solved soon)

5. Blank credentials not being testable - sure, not on BTC, but the fake block simulator will serve you something even with blank credentials.

10  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 05, 2021, 10:23:31 AM
blank credentials by default are needed:

 - when running offline / with no reason or intention to sync
 - when syncing using a future tool (to be developed) that would run on :8332
   and serve blocks from the Bitcoin P2P network. That won't require the BTC RPC/server
   option, and by extension our config file won't be needed.

in any case new_miner_start() must go, even with blank credentials

the blank credentials cannot be entered into bitcoin's config file. that's exactly
what we need! the user who is capable of editing config files won't be able to do the
wrong thing, but will instead be forced to set identical strong credentials in both files.

can't comment on merged main page non synced messages, I don't see the code.
11  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 04, 2021, 09:20:25 PM
last things

1. make the log file optional and specify the log file name in config.txt; if none, or by default, don't log.
2. the config file should also be optional; we will change the default port to 2121 again, users are used to it.
3. the default rpc password/username should be the empty string. It can be used no problem with BTC if you set
that in the BTC config (if someone is in a rush and doesn't care about security/syncing, they won't be making a config file).
4. up the version counter for the beta in deployment.go (it needs to be increased every time something is released,
because otherwise there would be versions in the wild that we won't be able to identify later; it's ok to
skip versions)

minor things

the message about using 546 sats to claim: change it to 330 sats (it's cheaper).

the message on the main page "COMB height is greater than BTC" is by itself ok but should be reworded,
probably to something like "COMB height is different than BTC".

check_btc_is_caught_up() had no boolean result; check it before release, go build should pass.

remove fmt.Println(hash) and fmt.Println(segments_stack) from stack.go; it just
spams every time I sweep coins. It's not needed.

in general, turn the debug statements that we added so far into log printing or something.

When releasing the beta, the less printing on the console the better, except when the coin
crashes. When a new dev joins, he/she can put in their own debug prints and see them,
not search for their own prints in a pile of spam.

Testnet tools:

To decode bc1 address into witness program:

To encode the witness program back, but into a testnet tb1 address (can be used to claim testnet comb).

Bitcoin command for testnet (the prune needs to be high, because with a low prune the BTC
will run away from us and Comb will fall behind, unable to sync the blocks - "wrong height"):

./bitcoin-qt -server -prune=55000 -rpcuser= -rpcpassword= -rpcport=8332 -testnet

Can generate a brainwallet for testnet using: <number of keys>/<brainpass>

We've already synced with testnet. Let's hit faucet, get some testnet bitcoins, and claim some testnet comb.
12  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 03, 2021, 07:55:37 PM
the new parse_wait_for_block looks good!

I went ahead and implemented the proposed format into the leveldb, as well as the use of leveldb.Batch. The leveldb options
are also in use, namely to disable compression and to enable Sync (this is needed for the db to survive power loss - not actually
tested by cutting the computer's power, but it should work).





Overall I've fixed a few bugs:

1. In miner(), when reorging with direction -1 the second topmost block must be compared with previous hash, not the topmost, and the termination condition should be one_or_no_blocks()
2. miner() again, it should Put the block only with direction +1
3. a rare race condition in mine_blocks. A local copy of the next_download variable is needed to terminate the for loop, because if we use the global one, the goroutines that we spawn may increase that same global next_download, which could then lead two goroutines to download a certain height and mine the same block twice.

What remains: use btc_is_caught_up_mutex on the main page, remove the mining subrouter s3 from main.go, and clear the stale code from commitfile.go.
13  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 02, 2021, 02:46:37 PM
I would've thought that the keys would have been ordered in numerical order, not "alphabetical" (so to speak lol), but that makes sense.

huh, I think it just orders by raw bytes; it would be a shame to have blocks wrong-ordered IF leveldb sorted by UTF-8.

The problem with this is that BTC likes to stall on RPC responses if it's busy validating blocks. If all we're doing is alerting the user to the BTC connection status, then I can just modify the current loop to only run if the timestamp has been longer than X seconds so that the other calls can also trigger it.

Yeah, those stalls in the GUI are really annoying. I think I fixed it somehow - you know that outer loop in new_miner_start (newminer.go).

This is a standalone test of BTC behavior I needed to check. It can be integrated into comb.

Basically I sleep 1/10 of a second, then call the waitfornewblock RPC. That RPC is almost exactly what we need:

1. it waits until a block is added to the chain, with a configurable timeout
2. it returns height+hash in one call! awesome.

One last annoyance is the db getting completely erased when BTC is syncing + at a lower height + on the same
chain branch that comb is.

That can be fixed by not reorging in case 1 when btc_is_caught_up=false. Basically we would wait for BTC to reach its headers height and THEN reorg if needed.

Tell me what you think about that, and whether there is a worry that the longest valid (heaviest) blockchain could actually be a few blocks shorter than the one comb is on.
14  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: May 01, 2021, 05:27:25 PM
1. the metadata and the complexity of the initial commits db load are caused by not using
the proper keys for the information.

- the block key in the database should be just the height as a uint64 in bytes, not prefixed by 99999.
- the commitment key should be the 128-bit utxo tag in bytes (it too starts with the 64-bit height).

Then you can load everything using one NewIterator loop over the whole db, because
blocks and their commitments are sorted by height and alternate. You don't need any maps/lists or lookups whatsoever.

So when you see a block (a 64-bit key), you flush the previous block (if any) and open a new Sha256 writer for the checksum.
When you see a commitment (a 128-bit key), you keep mining it as well as hashing it into the checksum writer.
When you see a 1+ higher block, you verify that the previous block's checksum matches what you squeeze out of the Sha256 writer.
Eventually the whole db will be loaded, or you will experience some kind of an error.
Make sure to check the checksum and flush the last block when the complete database is loaded (if the db contained at least 1 block, of course).

In case of an error:

- A new function needs to be written that will clear commit_cache, commit_tag_cache
while commit_cache_mutex is taken.

- delete all commits and the block at the errored height, using a new iterator with just that height as the prefix.

- then you should flip to deletion mode, that is, continue the original iterator loop,
just keep deleting everything that has a 64-bit/128-bit key higher than the errored height.

2. the whole ping_btc_loop+miner_command_channel+miner_output_channel thing should be deleted.
It just makes our normal goroutine get stuck communicating a pointless thing inside set_connected(false) when I restart BTC.
btc_is_connected should be a mutexed timestamp integer, and set_connected(true) should set it to the current time each time BTC responds.
Then, if btc_is_connected is a recent timestamp (not older than 2 seconds), we are CONNECTED.

3. slow CommitLvlDbUnWriteVal is still in place

4. get_block_info_for_height was not fixed. It can't contain a loop, and it must get the hash from an internal source when dir = -1, NOT from the BTC.

5. what about "case 1" (ourhash_at_btc_height == hash)? Use switch u_config.reorg_type to use handle_reorg_direct in that scenario too.

6. when parsing a block, we need to copy the PBH (string `json:"previousblockhash"`) into the parsed block.
The function miner (func miner) needs to be modified to return an error. It will return an error when
the PBH is not equal to the topmost block hash, in case we are mining with direction 1 and there is at least one (previous=topmost) block already.
The above mentioned previous-block error needs to be handled in all three places: there are two places in downloader(), handle them using
stop_run();return. The remaining place is in mine_blocks(); there, just break the loop in case of this error.

7. I don't know whether direct or miner reorg_type should be the default. That depends on you and your confidence about which one is more
production ready.

15  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: April 30, 2021, 06:55:45 AM
With interest I looked at the code.

do we intend to support both the bidirectional mining and the new, much more
performant handle_reorg function? I suppose the answer is yes; I mean, both
codes do pretty much the same thing, and it makes sense to let the user choose.

And supporting both codes via a config option will mean that in case we fuck up,
we can just tell users to reconfigure their clients instead of upgrading.

Now here is an example of the kind of problem I have in mind (when using the old bidirectional unmining):

1. get_block_info_for_height() is wrong. That code absolutely must request
the block that needs to be reorged by its block height and the hash read from the db/ram,
because the BTC node might have switched to a different (better) blockchain branch.
At that point, a call to getblockhash(500000) will return block 500000 from the better
chain that BTC is on, not from the worse chain that we are on (assuming the reorg is
deeper than 500000).

2. the infinite loop inside get_block_info_for_height() should be removed;
we should just return an error on any failure. That will terminate the 6 goroutines, and
the downloader will reconsider what to do next after 1 second in the main loop.

But handle_reorg() being a new function, I notice an imperfection in it, namely
that it corrupts the aes checksum, because it erases commitments in random order.
I think the simplest fix would be to sort each block's commitments by their
utxo tag in decreasing order. So, once temp_commits_map gets populated:

for height := range temp_commits_map {
sort.Slice(temp_commits_map[height], func(i, j int) (less bool) {
return utag_cmp(&temp_commits_map[height][i].tag,
&temp_commits_map[height][j].tag) > 0
})
}

3. I also think I know why the bidirectional unmining is now slow: there is the
function CommitLvlDbUnWriteVal, which loops over the entire db. I think it can be
eliminated; instead:

CommitLvlDbUnWrite(key, serializeutxotag(ctag))

4. we need to set direction_mine_unmine to UNMINE when reorging (this was previously
done by adding 5000000 to the height, but that's ugly), in miner_mine_commit_internal:
var direction_mine_unmine = utag_mining_sign(tag)
if dir == -1 && direction_mine_unmine == UTAG_MINE {
direction_mine_unmine = UTAG_UNMINE
}

5. at this point, when you set your fake block simulator to quickly reorg blocks,
it will eventually return to the initial situation with a checksum of zero.

Also, the database at that point should be cleared. But I still see the block
hashes starting with 9999 inside the db. These need clearing too.

16  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: April 28, 2021, 04:53:17 AM
okay, if you can't batch-unwrite, we will need to clean up at startup.

take a look at this newminer.go

and here is a test RPC server that serves test blocks, the initial height is 500000
17  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: April 26, 2021, 04:41:28 AM

what needs to happen when reorg back to a specific height (target height):

1. lock the commit_cache_mutex and commits_mutex,
2. make the utxo tag correspond to the target height; set its txnum=0, outnum=0, direction=false.
3. Run a for loop from the max height down towards target height:
4. - loop over the commits map. For each commit (key) whose height is equal to the currently unrolled height:
5. - - delete it from the combbases map; if it was there, also call segments_coinbase_unmine() at that commit and unrolled height.
6. - - delete it from the commits map.
7. - - if used keys feature is enabled call used_key_commit_reorg(key, currently_unrolled_height)
8. - - call merkle_unmine(key)
9. - - also call the block of code in mine.go starting with txleg_mutex.RLock() and ending with txleg_mutex.RUnlock(), probably refactored into a separate function.
10. - - set unwritten = true if at least 1 commit was unwritten
11. - don't do the thing below for every commit (key) anymore, but just for every height:
12. - if unwritten && enable_used_key_feature {used_key_height_reorg(reorg_height);}
13. don't do the thing below for every height anymore, but just once for the entire long range reorg:
14. commit_rollback = nil // to be sure
15. commit_rollback_tags = nil // to be sure
16. lazyopen = false // to be sure
17. resetgraph() // to reflow the entire balance graph
18. Truncate from LEVELDB everything above target height using a batch.
19. adios (unlock the 2 mutexes from step 1)

the nice thing is that you will be able to reorg back to any height by calling this new function once, not just 1 block back.

18  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: April 24, 2021, 03:21:39 PM
I have here a skeleton for the sleep-less, channel-less downloading of blocks; could it be integrated? I don't think it's a big change.

Your explanation about sequential reorg is fine then, let's keep linear.

To finalize the format, I think we also need to pull the block headers since genesis using getblockheader.

Thinking about it, perhaps to simplify things, it would be ok to have a mutex-protected global in-memory map, keyed by blockhash, containing the block height.

You know, what you previously did in the leveldb, but just in memory. In the leveldb, on disk, there would be the other direction (from height to full header).

Then on startup, the leveldb can be looped over and the values cached to memory.

Advice about the migration to the new utxo tag:

1. put the on-disk utxo tag struct into the code
2. write a function to convert the on-disk utxo tag to the old utxo tag (the height is just cast to uint32; based on the height being before/after the fork, either fill the txnum and outnum, or split the output order num uint32 into two uint16s by dividing/remaindering by 10000, and return them in the old struct)
3. when the block is downloaded, generate the on-disk utxo tag and store it next to the commitment.
4. when going to mine the block, don't pass the utxo tag as a string but simply as the on-disk utxo tag struct
5. the strictly_monotonic_vouts_bugfix_fork_height if inside miner_mine_commit_internal can then be removed.
6. all the remaining functions just look at the height
7. finally, put the old version of the utxo tag into the main map (commits map[[32]byte]utxotag) where the old code does
8. but serialize the new on-disk version of the utxo tag to the level db.
19  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: April 23, 2021, 12:01:24 PM
yes, I am right about the run boolean; there really was a race condition bug. I found it using:

 go build -race

The reader of that boolean must take the mutex too, not just the writer.

Sorry, I was wrong: you aren't actually looping over the map[string]interface{}, just looking up one value. Thus it's safe.

Yes, we need the block height ON DISK to be uint64. This is because there will be fast altcombs (with a block time of 1 second or faster). If we don't do this, we are just repeating the Nakamoto 32-bit timestamp bug (the Bitcoin Timestamp Year 2106 Overflow).

The utxo tag IN MEMORY can stay in its current format for now ([2]uint32). It can be bumped to match the on-disk format later.

I also think that the LEVELDB integers (inside keys) should be actual binary (big endian = starting with zeroes from the left), not binary-coded decimals.
This will ensure that the transaction counter and output counter (both uint16) are capped at 65535, not 9999.

Inside the leveldb blocks (uint64 keys), you can store the new 32-byte block checksum and the block header (80 bytes) concatenated.

The new 32-byte block checksum will be the SHA256 of Key 1 CAT Commitment 1 CAT Key 2 CAT Commitment 2 CAT etc. (all the keys and previously unseen commitments inside the block)

He=BTC Full node

1. read his best hash. if it's the same as our best hash, end; otherwise start the loop:
2. read his height
(optional) if his height is 0, feed him all our block headers (by submitheader) starting from genesis+1 and goto 2 (continue the loop).
3. read his best hash; if different from the one read previously, goto 2 (continue the loop), else goto 4 (break the loop).
4. read our block header at his height + do the header's hash.
5. if our hash at his height == his best hash from steps 1 & 3, reorg to his height, end.
6. if his height was less than or equal to ours, launch the seek-reorg height-by-height backwards procedure, then reorg to its result, then fast forward, end.
7. read his hash at our height; if equal to our top hash, fast forward, end.
8. launch the seek-reorg height-by-height backwards procedure, then reorg to its result, then fast forward, end.

seek reorg height-by-height backwards:
1. keep checking his hash at decreasing height until it matches our hash at that height.
2. return the height of highest match.
note: this can be done using bisection - just keep trying the central height between "highest match"+1 and "lowest non-match"-1.
If the central height is a match, raise "highest match" to it; if it is not a match, lower "lowest non-match" to it. In both cases pick a new central height. Terminate when "highest match"+1 == "lowest non-match". This search is efficient even without goroutines.
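The bisection described above, as a minimal sketch; `seekReorgHeight` is an assumed name, and the two chains are passed in as plain slices instead of being fetched over RPC. It assumes height 0 (genesis) always matches:

```go
package main

import "fmt"

// seekReorgHeight returns the highest height where ours[h] == theirs[h],
// bisecting between "highest match" and "lowest non-match".
func seekReorgHeight(ours, theirs []string) uint64 {
	hiMatch := uint64(0)                // genesis is assumed to match
	loMismatch := uint64(len(ours))     // one past the last comparable height
	if m := uint64(len(theirs)); m < loMismatch {
		loMismatch = m
	}
	for hiMatch+1 < loMismatch {
		// central height between hiMatch+1 and loMismatch-1
		mid := (hiMatch + loMismatch) / 2
		if ours[mid] == theirs[mid] {
			hiMatch = mid // raise "highest match"
		} else {
			loMismatch = mid // lower "lowest non-match"
		}
	}
	return hiMatch
}

func main() {
	// chains diverge after height 1
	fmt.Println(seekReorgHeight(
		[]string{"g", "a", "b", "c"},
		[]string{"g", "a", "x", "y"})) // prints 1
}
```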

fast forward:
1. check his hash at our height +1, +2, +3, ....
2. download those blocks.
3. apply them in order from lowest.
4. if applying any block fails, terminate all goroutines and end.
note: this can be done by multiple goroutines. If a goroutine's downloaded block is
at current height +1 she will try applying it; otherwise she puts it into a shared map keyed by height and tries
applying the current height+1 block from the map (if it's there).
Once the goroutine successfully applies her lowest block, she inspects the map to see if there's another lowest block.
If yes, she applies that block too; if no, she goes to check another hash and download another block.
note 2: no need to have unlimited goroutines, just have like 6 of them and wait for them to apply everything using a wait group.
note 3: works without the use of channels, just needs a mutex protected shared map and two shared integer counters (all protected by the same mutex).
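The worker-pool scheme from the notes above could be sketched like this: a fixed number of goroutines, one mutex-protected shared map, two shared counters, and a wait group, with no channels. The `fastForward` name and the `download`/`apply` callbacks are assumptions for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// fastForward downloads blocks at heights start+1..top with nworkers
// goroutines and applies them strictly in order. All shared state (the
// pending map and the two counters) is protected by one mutex.
func fastForward(start, top uint64, nworkers int,
	download func(h uint64) string, apply func(h uint64, block string)) {

	var mu sync.Mutex
	pending := make(map[uint64]string) // downloaded but not yet applied
	nextDownload := start + 1          // next height to hand to a worker
	nextApply := start + 1             // next height to apply

	var wg sync.WaitGroup
	for i := 0; i < nworkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				mu.Lock()
				if nextDownload > top {
					mu.Unlock()
					return // nothing left to fetch
				}
				h := nextDownload
				nextDownload++
				mu.Unlock()

				b := download(h) // slow RPC call, done outside the lock

				mu.Lock()
				pending[h] = b
				// apply the lowest pending block(s), in order
				for blk, ok := pending[nextApply]; ok; blk, ok = pending[nextApply] {
					delete(pending, nextApply)
					apply(nextApply, blk)
					nextApply++
				}
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
}

func main() {
	var got []uint64
	fastForward(0, 5, 3,
		func(h uint64) string { return fmt.Sprint(h) },
		func(h uint64, b string) { got = append(got, h) }) // called under mu
	fmt.Println(got) // prints [1 2 3 4 5]
}
```

Error handling on a failed apply (terminating the other workers) is omitted here, but would fit the same generation-counter pattern used for shutdown.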

20  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN][COMB] Haircomb - Quantum proof, anonymity and more on: April 21, 2021, 08:22:09 AM

the config file didn't work out of the box on Linux; a one-line fix was needed to add a break in the parsing.

Now, the concept of pulling blocks over RPC is solid, I didn't know it was practical or doable. Great job there.

That said, the implementation has its flaws; I fixed two of them.

First of all, the shutting down of goroutines using the run boolean had a race, so I've added a RWMutex to guard it.
I've changed the boolean to an integer so that later, when we want to start it again, we just keep increasing it on every startup or shutdown.
The goroutine will just compare its own copy of the integer to know if it should run or not.
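The generation-counter pattern described above could look like this; `runGuard` and its method names are assumptions, not the names in the actual fix:

```go
package main

import (
	"fmt"
	"sync"
)

// runGuard replaces the racy `run bool`: a generation counter guarded by a
// RWMutex. Each goroutine remembers the generation it was started with and
// stops as soon as the global generation moves on.
type runGuard struct {
	mu  sync.RWMutex
	gen int
}

// start bumps the generation (on every startup or shutdown) and returns
// the new value for the freshly started goroutines to keep as their copy.
func (r *runGuard) start() int {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.gen++
	return r.gen
}

// running reports whether a goroutine holding myGen should keep going.
func (r *runGuard) running(myGen int) bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.gen == myGen
}

func main() {
	g := &runGuard{}
	mine := g.start()
	fmt.Println(g.running(mine)) // prints true
	g.start()                    // shutdown/restart bumps the generation
	fmt.Println(g.running(mine)) // prints false: the old goroutine stops
}
```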

Secondly, the parsing of the bitcoin block had a serious problem. The block/transaction was getting parsed into a map[string]interface{}.
The problem is that maps don't guarantee ordering by key when iterated.
This could've caused reordering of commitments in some conditions. Whoops.

So I've put there a Block struct that contains a slice of transactions, and each transaction contains a slice of outputs. Problem solved, as slices are ordered.

You should recreate your databases after this fix just to be sure.

Another problem was that you weren't using pointer receivers when programming in an object-oriented way. To highlight the problem:
func (a A) X() {}
fixed as:
func (a *A) X() {}
The lack of pointer receivers makes a copy of the receiver each time, which makes the original not writable and could lead to other subtle glitches.
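A small self-contained demonstration of the difference (the `counter` type is made up for illustration):

```go
package main

import "fmt"

type counter struct{ n int }

// value receiver: operates on a copy, the original is never updated
func (c counter) incBroken() { c.n++ }

// pointer receiver: operates on the original
func (c *counter) inc() { c.n++ }

func main() {
	var c counter
	c.incBroken()
	fmt.Println(c.n) // prints 0: the increment hit a copy
	c.inc()
	fmt.Println(c.n) // prints 1
}
```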

Further potential improvements

* Remove the mining/miner API. Only keep the height view set to 99999999 to make sure Modified BTC does not troll us when we run at port :2121 set in the config.
* Write the complete block in a batch instead of writing every commitment separately. Basically, you can open a new leveldb.Batch object, then write all the commitments to it. Then, even if the computer hard-shutdowns due to power loss, leveldb should guarantee that the final block either did or didn't get written completely.
* Change the utxo tag to 128 bits (commit position is basically the position of the current commitment in the block, counting sequentially and taking both seen and unseen commitments into account):
type UtxoTag struct {
    Height uint64
    CommitPositionInBlock uint32
    TransactionNum uint16
    OutputNum uint16
}
* Change the on-disk block key to Height uint64 as opposed to the block hash. Then you can distinguish block keys and commitment keys by their length.
* Implement the block-specific checksum (probably a SHA256 of all the commits in the block concatenated in their order). The block-specific checksum can be prepended to the block value whose key is the uint64 height on-disk.
* Implement storage of block headers. The BTC header, being 80 bytes, should be recreated from the information provided via the RPC api and put into the database. This makes it feasible to resume the download using a specialized tool that I have in development (the tool will also require access to an API endpoint that returns ALL or the HIGHEST block header).
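The batched-write idea from the list above, shown with an in-memory stand-in so the sketch stays self-contained; real code would use `*leveldb.Batch` and `db.Write(batch, nil)` from goleveldb, and the `store`/`writeBatch` types here are assumptions for illustration:

```go
package main

import "fmt"

// store is a stand-in for the leveldb handle.
type store struct{ data map[string][]byte }

// writeBatch mimics leveldb.Batch: it only stages puts until committed.
type writeBatch struct{ kv [][2][]byte }

func (b *writeBatch) Put(k, v []byte) {
	b.kv = append(b.kv, [2][]byte{k, v})
}

// writeBlock stages every commitment of one block in a single batch and
// commits it at one point, so after a power loss the block's commitments
// are either all present or all absent (leveldb guarantees this per batch).
func (s *store) writeBlock(commitments map[string][]byte) {
	b := &writeBatch{}
	for k, v := range commitments {
		b.Put([]byte(k), v)
	}
	// atomic commit point (db.Write(batch, nil) in real code)
	for _, kv := range b.kv {
		s.data[string(kv[0])] = kv[1]
	}
}

func main() {
	s := &store{data: map[string][]byte{}}
	s.writeBlock(map[string][]byte{"commit1": {0xaa}})
	fmt.Println(len(s.data)) // prints 1
}
```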
