nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
May 06, 2021, 01:49:42 PM Last edit: May 06, 2021, 02:14:25 PM by nixed |
|
Haircomb Core v0.3.4-beta.1 is up here: https://github.com/nixed18/combfullui/releases/tag/0.3.4-beta.1

Build (Optional)

Do the same steps as for the normal combfullui (https://bitcointalk.org/index.php?topic=5195815.msg54605575#msg54605575), but with two differences:

1. Rather than cloning a github repo, click on "Code" and download the zip file located here: https://github.com/nixed18/combfullui/tree/338d2775d1a5d894259ed1c6b728e251ef432b5a. Move that folder into the location where you want to build COMB, extract it, and continue with the build instructions.
2. Type in "go get github.com/syndtr/goleveldb/leveldb" to install LevelDB.

Set up bitcoin.conf

1. Navigate to the directory where you have stored your BTC blockchain. By default it is C:\Users\YourUserName\Appdata\Roaming\Bitcoin on Windows. You'll know you're there when you see the blocks and chainstate folders, along with some .dat files for stuff like the mempool, peers, etc.
2. Look for a file named "bitcoin.conf". If one doesn't exist, make one by right-clicking the whitespace and going New > Text File, then rename this file to "bitcoin.conf".
3. Open "bitcoin.conf" in Notepad and add the following two entries, replacing XXXXX with whatever you want your login info to be. This is only used for your BTC node's RPC access.

rpcuser=XXXXX
rpcpassword=XXXXX

4. Save and exit.

Set up config.txt

1. Navigate to the directory where you installed the Haircomb beta.
2. Create a text file called "config.txt".
3. Open "config.txt" in Notepad, and add the following lines, replacing the XXXXX with the same values that you used in your "bitcoin.conf".

btcuser=XXXXX
btcpass=XXXXX

4. Save and exit. If Haircomb was open during this edit, you must restart the program for your changes to take effect.

Run

Assuming you're using an unmodded BTC, you can run either BTC or Haircomb first; it doesn't matter. While Haircomb is running, it'll keep checking whether there's a BTC server running on your machine and, if so, will attempt to connect to it. When you run BTC, either run bitcoind.exe OR run bitcoin-qt.exe with "-server" as an argument.

It is also compatible with Natasha's modded BTC, but remember to launch Haircomb BEFORE the modded BTC.

Watashi has provided instructions and resources for running the beta using the BTC testnet; those can be found here: https://bitcointalk.org/index.php?topic=5195815.msg56935798#msg56935798

The current version's default port is 2121; this can be changed in the config.txt file with the entry "port=XXXX", replacing XXXX with a valid port number. Selecting the direct reorg handling to test can be done by inserting "reorg_type=miner" into your config.txt (a full example config follows below).
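For reference, a complete config.txt combining the required credentials with the optional entries above might look like this (XXXXX placeholders as before; the last two lines are optional - port overrides the default 2121, and reorg_type=miner selects the direct reorg handling for testing):

btcuser=XXXXX
btcpass=XXXXX
port=2121
reorg_type=miner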
|
|
|
|
nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
May 06, 2021, 01:53:37 PM |
|
Watashi, the program you're working on - am I correct in assuming it's similar to the other program you published a while ago, in that it would remove the need for a running full node to sync from, and would instead request blocks from peers? If so, am I also correct in assuming that it will likely take longer to do a full commits build?
|
|
|
|
watashi-kokoto
|
|
May 08, 2021, 11:39:39 AM |
|
Yeah. In case the local node is a full node, the sync would take nearly the same time. In case the local node is a pruned node, some blocks would have to be pulled over the network; that sync would be slower and capped by the local internet speed.

TESTING

☑ Successful / unsuccessful claiming works.
☑ Transactions work.
☑ Naive double spend is caught correctly.
☑ Liquidity stack loops working.
☑ Coin loop detection has worked ok so far.

2 minor severity issues - both existing in the Natasha version too

Issue 1 - Brain wallet keys not added to the used key feature

Problem: when creating a brain wallet using the HTTP call, the keys that get generated aren't added to the used key feature. This means that once the keys become used, the node is restarted, and the brain wallet is re-generated, it will not be visible that the used key is already spent.

In function: wallet_generate_brain

Solution: copy the "if enable_used_key_feature" lines from the normal key generator (key_load_data_internal) into wallet_generate_brain too.

Issue 2 - Used key balance should be forced to 0 COMB on the user interface on a detected outgoing key spend

Problem: when a key is spent, but the transaction is not added to the node using the link, the node will keep displaying the old balance - the old key balance (nonzero) will be visible even in the event of 1+ confirmations.

Discussion: this is not a consensus issue; if the user attempts double spending using the nonzero balance, the double spend will get caught and dropped normally.

In function: wallet_view

Solution: in the wallet printing loop, move the used_key_feature block above the printing of the wallet key+balance row. Inside the used_key_feature block, set the balance (bal) to zero if an outgoing spend is recognized. Finally, print the outgoing spend row below the wallet key+balance row from temp variables (a rough sketch follows below). While we're fixing this, we may also hide the pay button in case of a reorg, to save the user from accidentally double-spending.

Reference wallet.go: https://pastebin.com/raw/cLdG5pB3
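For illustration, here's a rough sketch of that reordering with toy types standing in for the real wallet structures (all names below are illustrative, not the actual wallet_view code):

package main

import "fmt"

// walletKey is a toy stand-in for one wallet entry.
type walletKey struct {
	key           string
	bal           uint64
	outgoingSpend bool // set when an outgoing key spend was detected
}

// printWallet runs the used-key check BEFORE printing the key+balance
// row, zeroing the displayed balance on a detected outgoing spend, and
// prints the spend row below it, as described in the solution above.
func printWallet(keys []walletKey, usedKeyFeature bool) {
	for _, k := range keys {
		bal := k.bal
		var spendRow string
		if usedKeyFeature && k.outgoingSpend {
			bal = 0 // force the displayed balance to 0 COMB
			spendRow = k.key + "  outgoing spend detected"
		}
		fmt.Printf("%s  %d COMB\n", k.key, bal) // wallet key+balance row
		if spendRow != "" {
			fmt.Println(spendRow) // outgoing spend row below it
		}
	}
}

func main() {
	printWallet([]walletKey{{key: "abc...123", bal: 100, outgoingSpend: true}}, true)
}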
|
|
|
|
|
nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
May 09, 2021, 12:46:50 AM |
|
I haven't done any real diving into the tx portion of COMB yet, so I'm afraid I won't be much help here.
I ask about the block pulling because I'm curious whether it makes sense to aim for segmented orphan repair in the future. Right now, if a block is corrupted, the entire chain after that block is discarded; using the current block-by-block fingerprinting, plus including the hash of the previous block's metadata, it seems viable to let Haircomb discard only the corrupted blocks and redownload them (rough sketch below).
I'm not sure if I'm being paranoid or making up corruption scenarios that don't exist though, so I dunno if it's actually worth doing or not right now.
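To illustrate the idea (purely a hypothetical sketch, not combfullui code): if each stored block record carried a fingerprint chained to the previous record's fingerprint, a scan could pinpoint exactly which heights need re-downloading:

package main

import (
	"crypto/sha256"
	"fmt"
)

// blockRecord is a hypothetical stand-in for one block's stored commit
// data; the real commits database is structured differently.
type blockRecord struct {
	height      uint64
	commits     []byte   // serialized commitments for this block
	prevSum     [32]byte // fingerprint of the previous record
	fingerprint [32]byte // sha256 over commits + prevSum
}

func fingerprintOf(r *blockRecord) [32]byte {
	buf := make([]byte, 0, len(r.commits)+32)
	buf = append(buf, r.commits...)
	buf = append(buf, r.prevSum[:]...)
	return sha256.Sum256(buf)
}

// corruptedHeights reports only the heights whose contents or chain
// link no longer match, so just those blocks could be re-downloaded
// instead of discarding the entire chain after the first bad one.
func corruptedHeights(chain []blockRecord) (bad []uint64) {
	for i := range chain {
		r := &chain[i]
		if fingerprintOf(r) != r.fingerprint ||
			(i > 0 && r.prevSum != chain[i-1].fingerprint) {
			bad = append(bad, r.height)
		}
	}
	return
}

func main() {
	fmt.Println(corruptedHeights(nil)) // no records, no corruption
}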
|
|
|
|
watashi-kokoto
|
|
May 09, 2021, 11:48:00 AM |
|
Well, I see. In my opinion repairing 1 block doesn't make sense, purely because:

a) a BTC node isn't guaranteed to be present
b) even if a BTC node is present, how do you know it won't send the wrong information again
c) even if it sends the right information, you will need to download all the blocks after that block anyway. For example, if a previously unseen commitment abc...123 was fixed by being "removed" from block 500000 (because it isn't there), all the 500000+ blocks need to be inspected to confirm that abc...123 doesn't appear there again, to "add it" (make it previously unseen in a later block).
d) so you can pretty much repair only errors that are fixed by "adding" commitments, and you will still need to fix up later blocks to remove it from them
e) all of this makes it pretty narrow in scope, as opposed to the node operator simply copying the "right" database over to the node from a known good backup.

Here is my final fix for the high severity problem. The only difference between the preliminary fix and this one is the removal of the commits_mutex locks+unlocks in the two cases (merkle_mine + merkle_unmine) where the mutex is already held and the extra locking would just cause a deadlock.

merkle.go https://pastebin.com/raw/SR2Y83Qt
txlegs.go https://pastebin.com/raw/vgfRjNYF
|
|
|
|
nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
May 09, 2021, 02:42:33 PM Last edit: May 09, 2021, 03:32:53 PM by nixed |
|
Quote from: watashi-kokoto on May 09, 2021, 11:48:00 AM

Well, I see. In my opinion repairing 1 block doesn't make sense, purely because:

a) a BTC node isn't guaranteed to be present
b) even if a BTC node is present, how do you know it won't send the wrong information again
c) even if it sends the right information, you will need to download all the blocks after that block anyway. For example, if a previously unseen commitment abc...123 was fixed by being "removed" from block 500000 (because it isn't there), all the 500000+ blocks need to be inspected to confirm that abc...123 doesn't appear there again, to "add it" (make it previously unseen in a later block).
d) so you can pretty much repair only errors that are fixed by "adding" commitments, and you will still need to fix up later blocks to remove it from them
e) all of this makes it pretty narrow in scope, as opposed to the node operator simply copying the "right" database over to the node from a known good backup.

Here is my final fix for the high severity problem. The only difference between the preliminary fix and this one is the removal of the commits_mutex locks+unlocks in the two cases (merkle_mine + merkle_unmine) where the mutex is already held and the extra locking would just cause a deadlock.

merkle.go https://pastebin.com/raw/SR2Y83Qt
txlegs.go https://pastebin.com/raw/vgfRjNYF

I was thinking more along the lines of some unseen problem causing corruption in the commits db, not wrong input from BTC. So the scenario would assume that, at one point, Haircomb did have a full, correct commits db file, and then something happened that corrupted one or more blocks. But you're right, it makes a lot more sense just to restore from a backup in this case.

EDIT: Updated the github and made a new release for the patches; I haven't had a chance to properly test it yet though.
|
|
|
|
watashi-kokoto
|
|
May 15, 2021, 12:54:28 PM |
|
I've been testing busily and found numerous problems. Let's start with the simpler ones.

used_key.go - used_key_add_new_minimal_commit_height - when deleting from the slices, l is always 31. Instead, it needs to be the length of the actual slice:

l := len(used_height_commits[min_height]) - 1

and

l := len(used_commit_keys[min_commit]) - 1

Explanation: taking the length of v, which is a hash (always of size 32), is not the intention. (See the short demonstration after this post.)

newminer.go - handle_reorg_direct - an off-by-one error in direct mode causes "holes" in the database. The fix is rather simple:

iter := commitsdb.NewIterator(&util.Range{Start: new_height_tag(uint64(target_height)+1), Limit: new_height_tag(uint64(height+1))}, nil)

newminer.go - miner - when reorging, we need to provide the height of the previous block (not of the reorged one) to become the topmost block:

// Flush
var commit_fingerprint hash.Hash
if dir == -1 {
	commit_fingerprint = miner_mine_commit_pulled("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF", new_flush_utxotag(uint64(parsed_block.height)-1), 0)
} else if dir == 1 {
	commit_fingerprint = miner_mine_commit_pulled("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF", new_flush_utxotag(uint64(parsed_block.height)), 0)
}

Explanation: the currently reorged block's previous block height should become the topmost via posttag() inside miner_mine_commit_internal(). If we don't do this, the check "error: mined first commitment must be on greater height" will wrongly prevent the next block after a reorg from being mined in miner mode.

There are threading problems as well, which I will explain in a later post.
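A tiny self-contained demonstration of the first bug (simplified types; the real maps live in used_key.go):

package main

import "fmt"

func main() {
	used_height_commits := map[uint64][][32]byte{}
	min_height := uint64(500000)
	used_height_commits[min_height] = make([][32]byte, 5)

	// Wrong: for a value v of type [32]byte, len(v)-1 is always 31,
	// no matter how many commits the slice actually holds.
	var v [32]byte
	fmt.Println(len(v) - 1) // prints 31

	// Right: take the length of the slice being deleted from.
	l := len(used_height_commits[min_height]) - 1
	fmt.Println(l) // prints 4
}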
|
|
|
|
nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
May 19, 2021, 02:59:07 PM |
|
I've committed the changes; I'll wait until you go over the threading problems before building a new release. I'm waist-deep in some work stuff, but I'll do my best to keep up lol.
|
|
|
|
watashi-kokoto
|
|
May 19, 2021, 09:03:30 PM |
|
Ah, the thread thing. Sorry for not having an exact solution you can apply right now.

It's caused by the segments_transaction_mutex and segments_merkle_mutex mutexes. They mainly protect the maps that tell you where money should go from a transaction address: for a merkle transaction (comb trade) the map is named e0_to_e1, and for a haircomb transaction the map is named segments_transaction_next. The key in both maps is some kind of used (spent) address, and the value is the next address where all that money should move. (segments_transaction_next actually contains the txid too.) Because the logic is the same, the locking should be the same - but it isn't, and that's the bug. The money trickling code lives in segmenttx.go and segmentmerkle.go.

There are two avenues to fix this. One would be to simply surround each read from the map(s) with RLock/RUnlock. This option sounds right for consensus-non-critical paths like those inside:

segments_transaction_loopdetect()
segments_transaction_backgraph()
segments_merkle_loopdetect()
segments_merkle_backgraph()

This is easy.

The other is to NOT surround each individual map read with RLock/RUnlock, but instead guard the whole invocation of the trickling at the highest level. This sounds right for consensus-critical work like balance calculation. It prevents somebody from adding new transactions while the money is still being propagated along the graph; once the dust settles, the queued transaction adding gets the green light.

Example from txrecv.go (there are other similar top-level places that call into the trickling code):

if newactivity == 2097151 {
	segments_transaction_mutex.Lock()
	segments_transaction_next[actuallyfrom] = txidandto
	segments_transaction_mutex.Unlock()

	segments_transaction_mutex.RLock()
	segments_merkle_mutex.RLock()
	var maybecoinbase = commit(actuallyfrom[0:])
	if _, ok1 := combbases[maybecoinbase]; ok1 {
		// ...invoke coinbase trickling in case the haircomb was a coinbase
		segments_coinbase_trickle_auto(maybecoinbase, actuallyfrom)
	}
	// ...invoke cash trickling here:
	segments_transaction_trickle(make(map[[32]byte]struct{}), actuallyfrom)
	segments_merkle_mutex.RUnlock()
	segments_transaction_mutex.RUnlock()
}

The reason why both read mutexes are taken is that a transaction can pay to a merkle transaction, which could turn transaction money trickling into merkle tx money trickling.
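To make the two avenues concrete, here's a toy sketch (hypothetical names; the real maps and trickle code are more involved, and loop detection is omitted):

package main

import "sync"

var (
	mu   sync.RWMutex
	next = map[string]string{} // toy stand-in for segments_transaction_next
)

// Avenue 1 (consensus-non-critical paths): guard each individual read.
func lookupNext(from string) (string, bool) {
	mu.RLock()
	defer mu.RUnlock()
	to, ok := next[from]
	return to, ok
}

// Avenue 2 (consensus-critical paths): the caller holds the read lock
// across the whole trickle walk, so nobody can add a transaction while
// money is still propagating; queued writers proceed afterwards.
func trickle(start string) {
	mu.RLock()
	defer mu.RUnlock()
	for from, ok := start, true; ok; {
		from, ok = next[from] // plain read: the walk-level lock protects it
	}
}

func main() {
	next["spent-address"] = "next-address"
	trickle("spent-address") // walks: spent-address -> next-address -> stop
	if _, ok := lookupNext("spent-address"); ok {
		// the money's next destination is known
	}
}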
|
|
|
|
watashi-kokoto
|
|
May 22, 2021, 08:57:05 AM |
|
Here are the changes for fixing the racing problems.

1. segmentmerkle.go - remove all uses of segments_merkle_mutex RLock() and RUnlock(); top-level callers will have to lock it instead. (code omitted)

2. anonminize.go - add the two read locks around segments_coinbase_backgraph:

for combbase := range bases {
	segments_transaction_mutex.RLock()
	segments_merkle_mutex.RLock()
	segments_coinbase_backgraph(backgraph, make(map[[32]byte]struct{}), target, combbase)
	segments_merkle_mutex.RUnlock()
	segments_transaction_mutex.RUnlock()
}

3. loopdetect.go - add the two read locks to the loopdetect() function:

func loopdetect(norecursion, loopkiller map[[32]byte]struct{}, to [32]byte) (b bool) {
	segments_transaction_mutex.RLock()
	segments_merkle_mutex.RLock()
	var type3 = segments_stack_type(to)
	if type3 == SEGMENT_STACK_TRICKLED {
		b = segments_stack_loopdetect(norecursion, loopkiller, to)
	}
	var type2 = segments_merkle_type(to)
	if type2 == SEGMENT_MERKLE_TRICKLED {
		b = segments_merkle_loopdetect(norecursion, loopkiller, to)
	}
	var type1 = segments_transaction_type(to)
	if type1 == SEGMENT_TX_TRICKLED {
		b = segments_transaction_loopdetect(norecursion, loopkiller, to)
	} else if type1 == SEGMENT_ANY_UNTRICKLED {
	} else if type1 == SEGMENT_UNKNOWN {
	}
	segments_merkle_mutex.RUnlock()
	segments_transaction_mutex.RUnlock()
	return b
}

4. merkle.go - commits_mutex MUST be held when calling merkle_scan_one_leg_activity() here:

commits_mutex.RLock()
var allright1 = merkle_scan_one_leg_activity(q1)
var allright2 = merkle_scan_one_leg_activity(q2)
if allright1 && allright2 {
	reactivate_txid(false, true, tx)
}
commits_mutex.RUnlock()
return true, e[0]

and add the write locks and read locks:

if newactivity {
	segments_transaction_mutex.Lock()
	segments_merkle_mutex.Lock()
	if old, ok1 := e0_to_e1[e[0]]; ok1 && old != e[1] {
		fmt.Println("Panic: e0 to e1 already have live path")
		panic("")
	}
	e0_to_e1[e[0]] = e[1]
	segments_merkle_mutex.Unlock()
	segments_transaction_mutex.Unlock()

	segments_transaction_mutex.RLock()
	segments_merkle_mutex.RLock()
	var maybecoinbase = commit(e[0][0:])
	if _, ok1 := combbases[maybecoinbase]; ok1 {
		segments_coinbase_trickle_auto(maybecoinbase, e[0])
	}
	segments_merkle_trickle(make(map[[32]byte]struct{}), e[0])
	segments_merkle_mutex.RUnlock()
	segments_transaction_mutex.RUnlock()
}

5. mine.go - add write locking:

if *tx == (*txidto)[0] {
	segments_transaction_mutex.Lock()
	segments_transaction_next[actuallyfrom] = *txidto
	segments_transaction_mutex.Unlock()
	return false
}

change 2, add top-level locking:

segments_transaction_mutex.RLock()
segments_merkle_mutex.RLock()
var maybecoinbase = commit(actuallyfrom[0:])
if _, ok1 := combbases[maybecoinbase]; ok1 {
	segments_coinbase_trickle_auto(maybecoinbase, actuallyfrom)
}
segments_transaction_trickle(make(map[[32]byte]struct{}), actuallyfrom)
segments_merkle_mutex.RUnlock()
segments_transaction_mutex.RUnlock()

change 3:

segments_transaction_mutex.RLock()
var val = segments_transaction_data[*tx][i]
segments_transaction_mutex.RUnlock()

change 4:

if oldactivity == 2097151 {
	segments_transaction_mutex.Lock()
	var actuallyfrom = segments_transaction_data[*tx][21]
	segments_transaction_untrickle(nil, actuallyfrom, 0xffffffffffffffff)
	delete(segments_transaction_next, actuallyfrom)
	segments_transaction_mutex.Unlock()
}

6. stack.go - surround the stack trickle with the read locks:

segments_transaction_mutex.RLock()
segments_merkle_mutex.RLock()
segments_stack_trickle(make(map[[32]byte]struct{}), hash)
segments_merkle_mutex.RUnlock()
segments_transaction_mutex.RUnlock()

7. txrecv.go - in tx_receive_transaction_internal(), take both mutexes:

segments_transaction_mutex.Lock()
segments_merkle_mutex.Lock()

and

segments_merkle_mutex.Unlock()
segments_transaction_mutex.Unlock()
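One property worth noting in all of the snippets above: segments_transaction_mutex is always taken before segments_merkle_mutex and released in the reverse order. A toy illustration of why that consistent ordering matters (stand-in names, not project code):

package main

import "sync"

var (
	txMu     sync.RWMutex // stand-in for segments_transaction_mutex
	merkleMu sync.RWMutex // stand-in for segments_merkle_mutex
)

// withBothReadLocks takes the pair in the canonical order. If one
// goroutine locked transaction-then-merkle while another locked
// merkle-then-transaction, each could hold one lock while waiting
// forever for the other - a classic lock-ordering deadlock.
func withBothReadLocks(fn func()) {
	txMu.RLock()
	merkleMu.RLock()
	defer txMu.RUnlock()     // deferred calls run last-in-first-out,
	defer merkleMu.RUnlock() // so merkle releases first, then transaction
	fn()
}

func main() {
	withBothReadLocks(func() {
		// walk the trickle graph here with both read locks held
	})
}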
|
|
|
|
nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
May 29, 2021, 02:44:08 PM Last edit: May 29, 2021, 03:44:57 PM by nixed |
|
Sorry I wasn't very helpful on this end, I'll commit the changes and build a new release later today.
EDIT: Jesus christ, I spent so long just looking at the mining code that I forgot how complex the guts of this thing actually are. I gotta get more familiar with it.
I've added the changes and committed them to github; if I didn't mess anything up, I'll build a new release. How did you figure out that this was an issue? Was it just code scanning, or did you generate a crash during testing?
|
|
|
|
watashi-kokoto
|
|
May 31, 2021, 05:41:24 PM |
|
The racing problems - I'm catching them by building with go build -race and then reorging blocks and loading wallets at the same time.
Now there is a mistake at line 231 in merkle.go - there needs to be commits_mutex.RUnlock()
That's all. I've also tested the nested comb trades. They work properly.
I'm ok with the final version - can be released (if there is nothing else) -_-
|
|
|
|
nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
May 31, 2021, 09:49:12 PM |
|
Quote from: watashi-kokoto on May 31, 2021, 05:41:24 PM
The racing problems - I'm catching them by building with go build -race and then reorging blocks and loading wallets at the same time.
Now there is a mistake at line 231 in merkle.go - there needs to be commits_mutex.RUnlock()
That's all. I've also tested the nested comb trades. They work properly.
I'm ok with the final version - can be released (if there is nothing else) -_-
Fixed. I've committed and uploaded a build: https://github.com/nixed18/combfullui/releases/tag/v0.3.4

Now to update the documentation, then on to the light client server stuff.
|
|
|
|
watashi-kokoto
|
|
June 27, 2021, 06:59:20 AM Last edit: June 29, 2021, 03:41:33 AM by watashi-kokoto |
|
Hello, the comb downloader started to sync correctly, so I thought I would share it: https://bitbucket.org/watashi564/combdownloader/src/master/

If possible, it would be great to have it moved over to github and to make a release!

The usage is pretty simple: the user just runs combdownloader.exe, the community series combfullui.exe, and any recent version of bitcoin core. It will make 8 connections to the bitcoin core (this could perhaps be decreased), then it will start pulling blocks into the comb wallet.

There is a distinct possibility of using this in a "lite" mode, that is, without having bitcoin core installed. Although the user would then still need to download 100+ GB of data, of which only 100+ MB would be retained, so I don't know what the benefit would be. To use lite mode, perhaps the ip address in main.go could be made configurable. That way you could connect to any online bitcoin core in the bitcoin swarm.

EDIT: Important! Please do not use this comb downloader just yet. There is an off-by-one error in the code: it syncs block 481824 into the slot for height 481825, etc. Each block gets synced at the wrong height (1 block higher). The wallet appears to be working fine despite this error, and you will not lose any funds. The problem will manifest if the user switches to a correct syncing method, which will cause the loss of one block chain block. If you've used this comb downloader, please delete your commits folder to recover from the problem.
|
|
|
|
nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
July 02, 2021, 02:11:32 PM |
|
Quote from: watashi-kokoto on June 27, 2021, 06:59:20 AM

Hello, the comb downloader started to sync correctly, so I thought I would share it: https://bitbucket.org/watashi564/combdownloader/src/master/

If possible, it would be great to have it moved over to github and to make a release!

The usage is pretty simple: the user just runs combdownloader.exe, the community series combfullui.exe, and any recent version of bitcoin core. It will make 8 connections to the bitcoin core (this could perhaps be decreased), then it will start pulling blocks into the comb wallet.

There is a distinct possibility of using this in a "lite" mode, that is, without having bitcoin core installed. Although the user would then still need to download 100+ GB of data, of which only 100+ MB would be retained, so I don't know what the benefit would be. To use lite mode, perhaps the ip address in main.go could be made configurable. That way you could connect to any online bitcoin core in the bitcoin swarm.

EDIT: Important! Please do not use this comb downloader just yet. There is an off-by-one error in the code: it syncs block 481824 into the slot for height 481825, etc. Each block gets synced at the wrong height (1 block higher). The wallet appears to be working fine despite this error, and you will not lose any funds. The problem will manifest if the user switches to a correct syncing method, which will cause the loss of one block chain block. If you've used this comb downloader, please delete your commits folder to recover from the problem.

Cool! How does it handle the comb client trying to pull commits over the RPC? Also, just confirming: it can maintain an up-to-date commits file without hosting a local node, as long as you plug in the IP of a trusted BTC node? I know some people on the telegram were saying it's a bit of an inconvenience to run a BTC full node just to use Haircomb, even if it's just because of the HD space it takes up. Setting it up so they don't need to, even if it DOES still take bandwidth to download the commits, would probably be pretty attractive to them.
|
|
|
|
watashi-kokoto
|
|
July 08, 2021, 12:07:07 PM |
|
I understand that the remote btc node can be pretty attractive, precisely because of haircomb's small disk consumption. We should just be careful about the security model, because there are real security disadvantages.

Now, the details: it's a two-ended shop. On the back end it maintains 8 connections to the bitcoin wire network. The first connection pulls and difficulty-validates all the btc headers (this is what the system then trusts as the longest chain). There's also a built-in blockhash checkpoint (which can be made configurable). All the connections are capable of pulling in complete raw btc blocks. They're pre-segwit blocks, so they're capped at 1MB (the segwit signatures are missing). Validation is SPV at best - unable to catch blocks with double spends/theft as invalid, but still powerful enough to recognize bcash blocks as invalid. The tx merkle tree is also checked.

Pulling commits over the RPC is done by imitating the actual bitcoin rpc api. Only the minimum information needed is served - basically just the relevant comb outputs, plus some info from the block header - so it's kind of small.

To connect experimentally to a remote node, change the ip in main.go; a domain name is fine too. Just decrease the connections from 8 to something like 2 - real bitcoin nodes don't like it when you connect to them too many times. Ipv6 can also be used: [ipv6]:8333. The first connection, j == 0, will be the header puller.
Possible improvements:
1. make the addnode ip configurable
2. make the checkpoint configurable
3. make the connection count configurable
4. cruiser connection - this will be the initial connection that won't be used to pull anything, just to discover the IPs of nodes; once enough IPs are known, N real connections are made to random nodes, and to different ones on disconnect
5. currently reorgs deeper than 2 blocks are impossible... fix this?
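For anyone wondering what the merkle tree check involves: it boils down to recomputing the block's transaction merkle root from the txids and comparing it against the block header. A generic sketch of the standard bitcoin construction (not the combdownloader code; txid byte-order details omitted):

package main

import (
	"crypto/sha256"
	"fmt"
)

// doubleSHA256 is bitcoin's standard double SHA-256.
func doubleSHA256(b []byte) [32]byte {
	first := sha256.Sum256(b)
	return sha256.Sum256(first[:])
}

// merkleRoot pairs txids level by level (duplicating the last one on
// odd-length levels) until a single root remains.
func merkleRoot(txids [][32]byte) [32]byte {
	if len(txids) == 0 {
		return [32]byte{}
	}
	layer := append([][32]byte(nil), txids...) // copy, don't mutate input
	for len(layer) > 1 {
		if len(layer)%2 == 1 {
			layer = append(layer, layer[len(layer)-1])
		}
		next := make([][32]byte, 0, len(layer)/2)
		for i := 0; i < len(layer); i += 2 {
			pair := append(layer[i][:], layer[i+1][:]...)
			next = append(next, doubleSHA256(pair))
		}
		layer = next
	}
	return layer[0]
}

func main() {
	fmt.Printf("%x\n", merkleRoot([][32]byte{{1}, {2}, {3}}))
}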
|
|
|
|
nixed
Jr. Member
Offline
Activity: 76
Merit: 8
|
|
July 08, 2021, 04:18:40 PM |
|
What's currently preventing deep reorgs?
Looking at the previous direct RPC block pulling method, the limiting factor was how fast the BTC node could spit out block data. Theoretically, if you had access to multiple BTC nodes that you trusted, you could get a MUCH faster full build of your commits file, right? That's pretty sick if it's true, though again the question is how to enable that while maintaining as much security as possible.
Another question that pops up is port management: the community release connects to port 8332, and from the reading I've done the combdownloader hosts on 8332, so that makes sense. But what happens when BTC tries to host on 8332, like it normally would? Do you have to launch the downloader before BTC, or modify BTC's default RPC port to something other than 8332? Or have I missed something about how the downloader operates?
|
|
|
|
watashi-kokoto
|
|
July 09, 2021, 07:52:48 AM |
|
Deep reorgs are mostly prevented by a lack of testing and code; it really begs to be tested on testnet/regtest - see what happens on a reorg and just fix things.
Of course, when testing on testnet/regtest you need to temporarily disable the difficulty checking and set the correct ports.
Performance-wise, it's currently capped by fiber speed and the remote node's disk speed. Haircomb core pulls 5 blocks at a time, which causes the comb downloader to also pull them concurrently over arbitrary connections.
If we create connections to enough distinct nodes, their total disk speed will almost surely dominate the fiber/dsl speed of the internet connection. This will cause the internet connection to be maxed out, and that's insane, yeah.
Security-wise it'll be alright. The headers are checked for hashing difficulty when pulled. Blocks are checked for merkle tree transaction presence when pulled. Obviously it's some kind of SPV model. There is no guarantee the longest chain is actually a valid bitcoin chain, because an adversary can mine an invalid block with a double spend and serve it to us if we trust it. Mining an invalid block does have a cost associated with it, though.
For casual use, when the user just wants to try claiming haircomb, it'll be fine. The security can be improved by syncing a local pruned node and trusting its headers, while pulling blocks remotely.
I think only the first app that starts using port 8332 can do so. So for the comb downloader to work, you have to start bitcoin not in rpc mode.
|
|
|
|
Yliz
Newbie
Offline
Activity: 1
Merit: 0
|
|
August 07, 2021, 11:47:08 PM |
|
Forgive me if some of this stuff has been explained or if I'm wrong, but I feel like there's not enough simplified information about what Haircomb does or why it'd be valuable, at least for people who aren't very experienced with these things.

If I understand claiming, you send a specific amount to your own generated BTC address, but the transaction also needs to be at the front of a BTC block in order to successfully claim. So that'd mean right now we can claim with the highest transaction fee, although some day miners could just put their own transaction at the front of a block to claim it themselves, since they organize the blocks. Is that correct? That seems like it could help a ton with mining incentive, but what about the use of COMB itself?

It's private, but at what cost? COMB holders basically each have their own piece of the ledger and each need to have a synced Bitcoin Core running in order to use COMB. That doesn't seem ideal when the core is ~400GB. Even at a smaller size, it's not as easy to use as other cryptocurrencies. Is it possible for that to change at some point, or is that just how it is, since we'd each own our transaction data? Is there a way for it to be easier to use in the future?

Then there's liquidity stacks. They seem really interesting - a transaction with infinite outputs could help a ton with scaling in certain scenarios, but starting that first liquidity stack still requires BTC transactions and can't scale beyond BTC's capabilities. Is there any way the initial input could scale better?

There's also deciders/the decider's purse that I see; I'm not sure how that correlates with any of this.

Do I understand COMB for the most part? I'd like to know where I'm wrong and what could fix the problems I think I see. This token is very interesting, but it's hard to understand.
|
|
|
|
|