Hi, I'm ETFbitcoin and I would like to apply to be a merit source, mainly because:
1. There isn't enough appreciation for newcomers who bring rough but interesting/great ideas for Bitcoin development
2. I'm out of sMerit, even though I only give 1-2 merit per post/thread
Based on the tools made by Piggy at
https://bitcointalk.org/index.php?topic=4393868.0 and
https://albertoit.github.io/Merit-Explorer-SQL/, I'm a top 5 merit sender on the "Development & Technical Discussion" board, top 10 on "Bitcoin Technical Support" and top 50 overall, based on transaction count.
I'm from Indonesia, so I can participate in the Bahasa Indonesia (Indonesian) boards and try to encourage more technical discussion in my local board.
I'm not sure if I can be categorized as a somewhat established member, but I:
1. Sometimes help others with basic/common technical problems
2. Actively report spam on a few boards, with stats of 989 posts reported at 98% accuracy (936 good, 27 bad, 26 unhandled)
3. Am starting to try to make people aware of unpopular/upcoming Bitcoin technology such as Dandelion
Here are 10 posts/threads which I believe haven't received enough merit and/or could be incorporated into Bitcoin development with further work. P.S. I'm fully aware some of the ideas are controversial.
#1 Proof of Collaborative Work
A proposal for eliminating the necessity of pool mining in bitcoin and other PoW blockchains
Motivation
For bitcoin and the altcoins based on common PoW principles, centralization of mining through pools is both inevitable and unfortunate, and it puts all the reasoning that supports the security of PoW in a paradoxical, fragile situation.
The same problem exists in PoS networks. Things can get even worse there because most PoS based systems enforce long-run deposit strategies for miners, which strongly discourages migrating from one pool to another because of the costs involved.
Satoshi Nakamoto's implementation of PoW, the core of the bitcoin client software, is based on a winner-takes-all strategy, which is the fundamental factor behind two critical flaws: mining variance and proximity premium, the most important forces behind pooling pressure.
Until now, both mining variance and proximity premium have been considered unavoidable, and hence pooling pressure has been treated as an inherent flaw of bitcoin and other PoW based currencies.
In this proposal, we suggest an alternative variant of PoW in which the traditional winner-takes-all is replaced with a collaborator-takes-share strategy.
The problem of solo mining becoming too risky and impractical for small mining facilities appeared in 2010, less than 2 years after bitcoin had been launched. It was the worst timing ever: although Satoshi Nakamoto commented on bitcointalk about the first pool proposals, it was among the last posts he made, and he disappeared from this forum a few days later, forever, without making a serious contribution to the subject.
Thus the confused community came up with an unbelievable solution to such a critical problem: a second-layer centralized protocol named pooling, boosted by greed and ignorance, supported by junior hackers who, as usual, missed the forest for the trees.
Bitcoin was just 2 years old when the pooling age began and eventually came to dominate almost all the hashpower of the network.
A quick review of the Slush thread in which Satoshi made the above-referenced reply reveals how immature and naive this solution was, how it was discussed, and how it was adopted: in a rush, with obvious greed.
Nobody ever mentioned the possibility of an algorithm tweak to keep PoW decentralized. Instead everybody was talking about how practical such a centralized service was, while the answer was more than obvious: yes! You can always do everything with a centralized service, don't bother investigating.
Anyway, in that thread one couldn't find any argument about the centralization consequences or the possibility of alternative approaches, including improvements to the core algorithm.
I think this is not fair. PoW is great and can easily be improved to eliminate such a paradoxically centralized second-layer solution. This proposal, Proof of Collaborative Work (PoCW), is an example of the inherent possibilities and capacities of PoW. I didn't find any similar proposal and it looks to be original, but if there is prior work, I'll be glad to be informed about it.
The idea is accepting and propagating works with hundreds of thousands of times lower difficulty and accumulating them as proof of work for a given transaction set, letting miners with a very low share of hashpower (say on the order of 10^-6) participate directly in the network and yet monitor their performance on an hourly basis.
Imo, now, after almost a decade, Moore's law has done enough to make utilizing more bandwidth and storage resources feasible, and it seems kind of hypocritical to make arguments about 'poor miners', pretend to be concerned about centralization threats, and make excuses for rejecting this very specific proposal which, although it increases the demand for such resources, can radically disrupt the current situation with pools and centralized mining.
This proposal is mainly designed for bitcoin. For the sake of convenience, and to let readers have a more concrete perception of the idea, I have deliberately used constants instead of adjustable parameters.
Outlines
- An immediate but not practically feasible approach would be reducing the block time (along with a proportional reduction in block reward). Although this approach cannot be applied fully because of the network propagation problems involved, an excellent consequence would be its immediate impact on the scalability problem, so we will use it partially (reducing block time to 1 minute from the current 10 minutes).
- As mentioned earlier (and with all due respect to the Core team), I don't take objections about the storage and network requirements of reducing block time as serious criticism. We should not leave mining in the hands of 5 mining pools to support a hypothetical poor miner/full node owner who cannot afford to install a 1 terabyte HD in the next 2 years!
- Also note, block time reduction is not a necessary part of PoCW, the proposed algorithm; I'm just including it as one of my old ideas (adopted from another forum member who suggested it as an alternative in the infamous block size debate, and later developed a bit further by me) which I think deserves more investigation and discussion.
- PoCW uses a series of mining-relevant data structures to be preserved on the blockchain or transmitted as network messages:
- Net Merkle Tree: an ordinary Merkle hash tree of transactions, with the exception that its coinbase transaction shows no block reward (newly issued coins); instead the miner charges all transaction fees to his account (supports SegWit)
- Collaboration Share: a completely new data structure composed of the following fields:
- 1- The root of a Net Merkle Tree
- 2- The collaborating miner's wallet address
- 3- A nonce
- 4- A difficulty score, calculated using the previous block hash padded with all previous fields; it is always required to be at least as hard as 0.0001 of the current block difficulty
- Coinbase Share: it is new too, and is composed of:
- 1- A collaborating miner's wallet address
- 2- A nonce
- 3- A difficulty score, computed using the hash of the previous block's hash, padded with the current block's merkle root, padded with the collaborating miner's address, padded with the nonce field
- 4- A reward amount field
- Shared Coinbase Transaction: a list of Coinbase Shares
- The first share's difficulty score field is fixed at 2%
- Each share's difficulty score is at least 0.0001
- The sum of the reward amount fields equals the block reward, and each share's reward is proportional to its difficulty score
- Prepared Block: It is an ordinary bitcoin block with some exceptions
- 1- Its merkle root points to a Net Merkle Tree
- 2- It is fixed to yield a hash that is as difficult as target difficulty * 0.05
- Finalization Block: It is an ordinary bitcoin block with some exceptions
- 1- Its merkle root points to a Net Merkle Tree
- 2- It is fixed to yield a hash that is as difficult as target difficulty * 0.02
- 3- It has a new field which is a pointer to (the hash of) a non empty Shared Coinbase Transaction
- 4- The Shared CoinBase Transaction's sum of difficulty scores is greater than or equal to 0.95
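The Coinbase Share and Shared Coinbase Transaction rules above can be sketched as data structures. This is a hypothetical Python model of my own; field names and the validation helper are illustrative assumptions, not part of the proposal's code:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CoinbaseShare:
    miner_address: str       # collaborating miner's wallet address
    nonce: int
    difficulty_score: float  # fraction of current block difficulty, >= 0.0001
    reward: int              # satoshis, proportional to difficulty_score

@dataclass
class SharedCoinbaseTx:
    shares: List[CoinbaseShare]

    def is_valid(self, block_reward: int) -> bool:
        """Check the rules listed above: first share fixed at 2%,
        every score at least 0.0001, rewards summing to the block reward."""
        if not self.shares:
            return False
        if abs(self.shares[0].difficulty_score - 0.02) > 1e-12:
            return False
        if any(s.difficulty_score < 0.0001 for s in self.shares):
            return False
        return sum(s.reward for s in self.shares) == block_reward
```

For example, a transaction with a 2% first share and rewards summing to exactly the block reward passes the check, and anything else fails.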
- Mining process goes through 3 phases for each block:
- Preparation Phase: it typically takes just a few seconds for the miners to produce one or (rarely) 2 or 3 Prepared Blocks. Note that the transaction fees are already transferred to the miner's wallet through the coinbase transaction committed to the Net Merkle Tree's root for each block.
- Contribution Phase: miners start picking one valid Prepared Block's Merkle root, according to their speculation (which becomes more accurate as new shares are submitted to the network) about whether it will eventually get enough shares, and producing/relaying valid Contribution Shares for it. As the sum of the difficulty scores for a given Prepared Block's Merkle root grows, we expect an exponential convergence rate for the most popular Merkle root to be included in Contribution Shares.
- Finalization Phase: after the total score approaches the 0.93 limit, rational miners begin to produce a Finalization Block
- Verification process involves:
- Checking both the hash of the finalized block and all of its Shared Coinbase Transaction items to satisfy network difficulty target cumulatively
- Checking reward distribution in the shared coinbase transaction
- Checking Merkle tree to be Net
- UTXO calculation is extended to include Shared Coinbase Transactions committed to finalized blocks on the blockchain as well
- Attacks/forks brief analysis:
- Short range attacks/unintentional forks that try to change the Merkle root are as hard as they are in traditional PoW networks
- Short range attacks/unintentional forks that preserve the Merkle root but try to change the Shared CoinBase Transaction have zero side effects on users (as opposed to miners), and as for redistributing the shares in favor of the forking miner, they are poorly incentivized, as gains never go further than a 2%-10% redistribution.
- Long Range attacks with a total rewrite agenda will fail just like Traditional PoW
- Long Range attacks with partial coinbase rewrite are again poorly incentivized and the costs won't be justified
Implementation
This is a radical improvement to classical PoW, I admit, but the costs involved are fair for the huge impacts and benefits. I have reviewed Bitcoin Core's code and found it totally feasible and practical from the sole programming perspective. Wallets could easily be upgraded to support the new algorithm as well, but a series of more complicated issues, mostly political, are extremely discouraging. Still, it is just too soon to give up and go for a fresh start with a new coin, or settle for an immature fork with little support, imo.
Before any further decisions, it would be of high value to have enough feedback from the community. Meanwhile I'll be busy coding the canonical parts as a BIP for the bitcoin blockchain; I think it will take 2-3 weeks or even a bit more, because I'm not part of the team and have to absorb a lot before producing anything useful. Plus, I'm not full time, yet.
I have examined the proposed algorithm's feasibility as much as I could, yet I can imagine there might be some flaws overlooked, and readers are welcome to improve it. Philosophical comments questioning the whole idea of eliminating pools don't look constructive though. Thank you.
Major Edits and Protocol Improvements:
- June 10, 2018 09:30 pm, inspired by a discussion with @ir.hn
- A Prepared Block should be saved by full nodes for a period of time long enough to mitigate any cheating attempt that skips the Preparation Phase by using non-prepared, trivially generated Net Merkle Roots.
- Full nodes MAY respond to a query by peers asking for a block's respected Prepared Block if they have decided to save the required data long enough
- For the latest 1000 blocks preserving such a data is mandatory.
- For blocks with an accumulated difficulty harder than or equal to the respected network difficulty, it would be unnecessary to fulfil the above requirement.*
- Prepared Block and Preparation phase terms replaced the original Initiation Block and Initiation Phase terms respectively to avoid ambiguity
Notes:
* This is added to let miners with large enough hash powers choose not to participate in collaborative work.
- reserved for future upgrades
- July 3, 2018 02:20 pm, inspired by discussions with @anunimint
- A special transaction was added to Shared CoinBase Transaction to guarantee/adjust proper reward for the finder of Prepared Block and some enhancements were made to include both block reward and transaction fees (partially) in the final calculations.
Note:
This change is made to both guarantee a minimum reward for miners of Prepared Blocks and incentivize them for including more transactions with higher fees
- reserved for future upgrades
Archive link : https://archive.fo/29FhF
Short comment/info : While there are many obstacles, removing pools from Bitcoin mining could prevent potential centralization/censorship by multiple powerful pools working together.
#2 The Bitcoin wiki has a pretty good step-by-step explanation of how to go from a public key to a base58-encoded address, containing the values for each step of the way[1]. But unfortunately I could not find anything similar for Bech32 encoding. Additionally, I found the reference implementations a bit confusing[2]! The information is out there[3], but I feel like having it step-by-step like [1] can make it a lot easier, especially for developers. For example, during unit testing I was getting a different address (bc1qp63uahgrxged4z5jswyt5dn5v3lzsem6c0qqhg8) for the public key below and I wasn't sure where the bug was coming from; this visualization[4] helped me realize I was appending the version byte before converting the bits instead of after. So hopefully these steps can help someone like me looking for them.
How to create a Bech32 address from a public key:
1. Having a compressed[5] public key (0x02 or 0x03 followed by 32 byte X coordinate):
0279be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798
2. Perform SHA-256 hashing on the public key:
0f715baf5d4c2ed329785cef29e562f73488c8a2bb9dbc5700b361d54b9b0554
3. Perform RIPEMD-160 hashing on the result of SHA-256:
751e76e8199196d454941c45d1b3a323f1433bd6
4. The result of step 3 is an array of 8-bit unsigned integers (base 2^8=256) and Bech32 encoding converts this to an array of 5-bit unsigned integers (base 2^5=32) so we "squash" the bytes to get:
in hex:
0e140f070d1a001912060b0d081504140311021d030c1d03040f1814060e1e16
in numbers:
14 20 15 07 13 26 00 25 18 06 11 13 08 21 04 20 03 17 02 29 03 12 29 03 04 15 24 20 06 14 30 22
5 bits binary:
01110 10100 01111 00111 01101 11010 00000 11001 10010 00110 01011 01101 01000 10101 00100 10100 00011 10001 00010 11101 00011 01100 11101 00011 00100 01111 11000 10100 00110 01110 11110 10110
5. Add the witness version byte in front of the step 4 result (current version is 0):
000e140f070d1a001912060b0d081504140311021d030c1d03040f1814060e1e16
6. Compute the checksum using the data from step 5 and the HRP (bc for MainNet, tb for TestNet)
7. Append the checksum to result of step 5 (we now have an array of 5-bit integers):
000e140f070d1a001912060b0d081504140311021d030c1d03040f1814060e1e160c0709110b15
8. Map each value to its corresponding character in the Bech32 charset (qpzry9x8gf2tvdw0s3jn54khce6mua7l): 00 -> q, 0e -> w, ...
qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4
9. A Bech32_encoded address consists of 3 parts: HRP + Separator + Data:
bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4
The final result from step 9 is the same as the example in BIP173[6]
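The steps above can be checked with a short Python sketch. The checksum and bit-conversion routines below follow the BIP173 reference implementation; to keep it self-contained, it starts from the hash160 given in step 3 rather than re-hashing the public key:

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def bech32_polymod(values):
    # BCH checksum over GF(32), per BIP173
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        b = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((b >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def bech32_create_checksum(hrp, data):
    # step 6: checksum over the expanded HRP plus the data
    values = bech32_hrp_expand(hrp) + data
    polymod = bech32_polymod(values + [0] * 6) ^ 1
    return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]

def convertbits(data, frombits, tobits):
    # step 4: "squash" 8-bit bytes into 5-bit groups
    acc, bits, ret = 0, 0, []
    maxv = (1 << tobits) - 1
    for value in data:
        acc = (acc << frombits) | value
        bits += frombits
        while bits >= tobits:
            bits -= tobits
            ret.append((acc >> bits) & maxv)
    if bits:
        ret.append((acc << (tobits - bits)) & maxv)
    return ret

# step 3 result (hash160 of the public key)
h160 = bytes.fromhex("751e76e8199196d454941c45d1b3a323f1433bd6")
hrp = "bc"
data = [0] + convertbits(h160, 8, 5)           # step 5: prepend witness version 0
checksum = bech32_create_checksum(hrp, data)   # step 6
address = hrp + "1" + "".join(CHARSET[d] for d in data + checksum)  # steps 7-9
print(address)  # bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4
```

Note the order: the witness version is prepended after `convertbits`, which is exactly the bug mentioned above.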
References:
[1] https://en.bitcoin.it/wiki/Technical_background_of_version_1_Bitcoin_addresses
[2] https://github.com/sipa/bech32
[3] https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki
[4] https://en.bitcoin.it/w/images/en/4/48/Address_map.jpg
[5] Only compressed public keys are allowed: https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki#restrictions-on-public-key-type
[6] https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki#examples
Archive link : https://archive.fo/h1S2D
Short comment/info : While I haven't tried making a script based on this guide to test it, a developer guide/wiki should be able to use this guide.
#3 Superspace: Scaling Bitcoin Beyond SegWit
Steve Kallisteiros
August 25, 2018
Abstract. SegWit, or Segregated Witness, a soft fork successfully activated mid-2017 on the Bitcoin blockchain, provided among other benefits a backward-compatible increase to the block size limit, all while not introducing consensus-breaking changes to the Bitcoin protocol and not resulting in a hard-fork network partitioning. The potential space increase is estimated at close to 2 MB total for practical purposes. In this paper, the author argues that it is possible, using the same mechanism, to increase the block size further up to any agreed-upon limit, while still providing the same security guarantees SegWit does.
Complete paper: https://files.zazzylabs.com/LNx2ud/superspace.pdf
===
Keep in mind, it's only the first draft. I would really appreciate your feedback.
Archive link : https://archive.fo/yZqOZ
Short comment/info : While it's over-complicated and only increases the maximum number of transactions in a block, it could be used as a last resort if a hard fork isn't an option. Addresses would change prefix from "bc1q" to "bc1p" (using the next witness version).
#4 If you have ...
- The seed (set of words)
- The master private key (a long string starting with "xprv")
Then you can get all the addresses that can ever exist with any path that you like as long as it comes after this extended key.
If you have ...
- Master public key (a long string starting with "xpub")
Then you can only get addresses that are not hardened
If you have ...
- Individual private key(s)
- Individual public key(s)
- Individual address(es)
Then you can't associate them with each other or find their master key.
If you have ...
- A master public key + a derived non-hardened private key
Then an attacker can combine them to recover the master private key.
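The last point follows from simple modular arithmetic: in non-hardened BIP32 derivation, k_child = (k_parent + IL) mod n, and IL is computable by anyone holding the xpub (it is the HMAC-SHA512 of the parent public key, chain code and index). A toy illustration of the algebra; the key and IL values here are made up for demonstration, not real derivation output:

```python
# secp256k1 group order
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# made-up parent private key and IL value (purely illustrative)
k_parent = 0x1E99423A4ED27608A15A2616A2B0E9E52CED330AC530EDCC32C8FFC6A526AEDD
IL = 0x0E3A1B5C7D9F246800112233445566778899AABBCCDDEEFF0011223344556677

# non-hardened child derivation as the wallet performs it
k_child = (k_parent + IL) % n

# an attacker holding the xpub can compute the same IL,
# so one leaked non-hardened child private key reveals the parent
recovered = (k_child - IL) % n
print(recovered == k_parent)  # True
```

Hardened derivation avoids this because IL is then computed from the parent *private* key, which the xpub holder doesn't have.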
Archive link : https://archive.fo/vfZEc
Short comment/info : A neat list of which correlated addresses/keys you can find, with or without private keys, from whichever piece of information you have.
#5 How far can the bitcoin blockchain height possibly diverge from the ideal one-block-per-10-minutes measure?
Motivation
During a discussion with @gmaxwell about a hypothetical DoS vulnerability, in response to my suggestion of blacklisting peers that send unreasonably large block locators, he objected to my proposal by asserting that a loyal node may become maliciously bootstrapped and fooled into committing to a very long chain with trivial difficulty, so blocking such peers won't be helpful. I responded with a moderate version, ... but after further assessment, I realized that the whole situation is somewhat abnormal: the possibility of having honest nodes with an unreasonable perception of chain height.
In bitcoin, the 10-minute block interval is not imposed synchronously or integrally. The protocol adjusts the difficulty every 2016 blocks to keep the generation pace at the 10-minute target, but steady growth in network hashpower makes it possible for the chain height to exceed the ideal number based on a 10-minute generation rate. Actually, the maximum block height of the network as of this writing is #535461 while it is supposed to be ~504300, showing a +6% divergence.
Not satisfied with the situation, I asked myself: why should we accept a proposed chain with an unreasonable length, like 10 times or 1000 times longer than normal, in the first place? Why shouldn't we simply reject such proposals and shut down the peers who made them? Putting difficulty requirements aside, is it even possible to have a chain orders of magnitude longer than normal?
In the malicious block locator scenario, a peer node, very commonly an SPV client, sends us a getheaders message with a payload of hundreds of thousands of bogus hashes as a block locator, and instead of rejecting and banning it, we hopefully and exhaustively try to locate them one by one.
At first, @gmaxwell didn't believe it to be an attack vector because of the CPU-bound nature of the processing involved, but finally he made a pull request out of that discussion and it was committed to the source. Bitcoin Core now has a MAX_LOCATOR_SZ hardcoded to 101. So, the block locator problem is fixed now.
A block locator in bitcoin is a special data structure that compactly represents (partially) a node's interpretation of the block headers in the chain. The locator generation algorithm guarantees its length to be O(log(n)), where n is the maximum block height in the chain; it is exactly 10 + log2(n). For the current block height of 535461, a legitimate block locator should carry a maximum of 29 hashes; a node sending 30 hashes is claiming a chain of height 1,048,576, almost twice as long, and the claimed chain would be 2 million times longer if the block locator held 50 hashes. Yet we were worried about block locators holding thousands of hashes, which represent chains with astronomical heights.
Although the issue is fixed now, the underlying theoretical question didn't get the attention it deserves: a boundary for the divergence of the actual blockchain height from what we could trivially calculate by dividing the minutes elapsed since an accepted checkpoint block's timestamp by 10 (the normal block interval).
At the time of this writing, the divergence is 6+%, but we have had no measure to consider 60%, 600%, 6,000,000,000%, ... divergence infeasible, until now.
In this article I'm trying to establish a mathematical framework for further research on this problem, by introducing a relation between the hashpower increase required and a specific divergence within a relatively large window of time.
I also found it useful to include a background section for interested readers, if you are familiar with the subjects, just skip this section.
Background
Difficulty adjustment: How bitcoin keeps the pace
Two rules are applied in the bitcoin protocol for regulating the rate at which blocks are generated:
1- Increase/decrease difficulty every 2016 blocks by comparing the actual time elapsed with an expected 2-week period.
2- Don't increase/decrease difficulty by a factor greater than 4.
The latter constraint is commonly overlooked because such a large retargeting is very unlikely to happen given the huge inertia of the currently installed hashpower, but as we are studying extreme conditions, the 4-times limit is of much importance.
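The two rules can be sketched together in Python. This is a simplified version of the clamping logic in Core's CalculateNextWorkRequired, working on targets as plain integers rather than compact nBits values:

```python
EXPECTED_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

def retarget(old_target, actual_timespan):
    # rule 2: never adjust by a factor greater than 4 in either direction
    actual = max(actual_timespan, EXPECTED_TIMESPAN // 4)
    actual = min(actual, EXPECTED_TIMESPAN * 4)
    # rule 1: scale target by actual/expected (a bigger target means lower difficulty)
    return old_target * actual // EXPECTED_TIMESPAN

print(retarget(1000, EXPECTED_TIMESPAN))       # 1000 (on pace: no change)
print(retarget(1000, 10 * EXPECTED_TIMESPAN))  # 4000 (clamped: at most 4x easier)
print(retarget(1000, 1))                       # 250  (clamped: at most 4x harder)
```

The third call shows why the 4x bound matters for this analysis: no matter how much hashpower floods in, difficulty can rise by at most a factor of 4 per epoch.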
Longest Chain Rule: How Bitcoin chooses between forks
In bitcoin and its clones, the longest chain rule is not interpreted naively as selecting the chain with the biggest height as the main one; instead it is defined as the chain requiring the most work to generate.
To calculate the accumulated work done on each chain:
1- The work for each block is calculated as floor(2^256 / (target + 1)). This is done in chain.cpp of the bitcoin source code via the GetBlockProof function:
arith_uint256 GetBlockProof(const CBlockIndex& block)
{
    arith_uint256 bnTarget;
    bool fNegative;
    bool fOverflow;
    bnTarget.SetCompact(block.nBits, &fNegative, &fOverflow);
    if (fNegative || fOverflow || bnTarget == 0)
        return 0;
    // We need to compute 2**256 / (bnTarget+1), but we can't represent 2**256
    // as it's too large for an arith_uint256. However, as 2**256 is at least as large
    // as bnTarget+1, it is equal to ((2**256 - bnTarget - 1) / (bnTarget+1)) + 1,
    // or ~bnTarget / (bnTarget+1) + 1.
    return (~bnTarget / (bnTarget + 1)) + 1;
}
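The same computation can be mirrored in Python, where arbitrary-precision integers make the 2^256 division direct. The compact nBits decoding below is a simplified sketch that ignores the negative/overflow cases the C++ code guards against:

```python
def target_from_bits(nbits):
    # decode Bitcoin's compact target representation (simplified)
    exponent = nbits >> 24
    mantissa = nbits & 0x007fffff
    return mantissa << (8 * (exponent - 3))

def block_proof(nbits):
    # floor(2^256 / (target + 1)), as in GetBlockProof
    return 2**256 // (target_from_bits(nbits) + 1)

# difficulty-1 target (the genesis block's nBits):
print(block_proof(0x1d00ffff))  # 4295032833, i.e. roughly 2^32 hashes of work
```

This makes the "work" unit concrete: a difficulty-1 block counts as about 2^32 expected hashes.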
2- While loading/adding a block to the block index in memory, not only is the block work computed by calling the above function, but an accumulated chain work is also aggregated in nChainWork with this code:
pindex->nChainWork = (pindex->pprev ? pindex->pprev->nChainWork : 0) + GetBlockProof(*pindex);
which is executed in both LoadBlockIndex and AddToBlockIndex.
Forks are compared based on the nChainWork of the BlockIndex item of their respective last blocks, and once a chain is found heavier than the current active chain, a reorg is performed, mainly by updating the UTXO set and pointing to the new chain as the active fork.
Between two difficulty adjustments, this most-difficult-chain-is-the-best-chain approach is identical to the longest-chain-is-the-best-chain rule originally proposed by Satoshi, but when we have to choose between two forks spanning at least one difficulty adjustment (typically in both forks), there is no guarantee the results will be identical.
Block timestamp: How bitcoin keeps track of time
In bitcoin, a few rules apply to block time and current time. This is an important topic for the purpose of this article, defining an upper bound for the height of forks, because to impose such a boundary a node should eventually have a measure of time, and it is not simply its own system clock, for obvious reasons.
1- Bitcoin defines a concept named network-adjusted time. It is defined to be the median of times reported by all the peers a node is connected to.
2- In any case network-adjusted time shouldn't offset node's local time more than 70 minutes.
3- Blocks should be marked with a timestamp greater than the median of previous 11 blocks and less than 2 hours after node's network-adjusted-time.
These constraints make it very hard for an adversary to manipulate block times and forge fake chains with fake difficulty and height, which is our concern.
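Rules 1 and 2 amount to a clamped median offset, roughly as sketched below. This is a simplification of my own; Bitcoin Core actually maintains per-peer time offsets in timedata.cpp, but the shape of the calculation is the same:

```python
import statistics

MAX_OFFSET = 70 * 60  # rule 2: never adjust more than 70 minutes

def network_adjusted_time(local_time, peer_times):
    # rule 1: median of the offsets reported by connected peers
    offset = statistics.median(t - local_time for t in peer_times)
    offset = max(-MAX_OFFSET, min(MAX_OFFSET, offset))
    return local_time + offset

print(network_adjusted_time(1000, [990, 1000, 1010]))  # 1000 (median offset 0)
print(network_adjusted_time(0, [10000, 10000, 10000])) # 4200 (clamped to +70 min)
```

The 70-minute clamp is what stops a majority of malicious peers from dragging a node's clock arbitrarily far.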
Abstract
The short-range difficulty adjustment policy of bitcoin causes a divergence from the expected chain height (expected from the ideal 10-minute block time) as hashpower increases during each epoch. We show that this divergence is exponentially difficult by proving a functional dependency between the ratio by which hashpower increases and the ratio by which the chain diverges:
F = (n/N)^n
To prove this relation, we first show that the maximum divergence caused by introducing any amount of new hashpower to the network in a specific period of time is achieved when its distribution over epochs is representable by a geometric progression.
The 4-times maximum threshold imposed by the network's difficulty adjustment algorithm is deliberately ignored to avoid complicating the discussion. It is a justifiable simplifying assumption in the context of this article because a 4+ times per-epoch increase in hashpower is not feasible anyway, as we briefly discuss at the end of the article.
After proving the exponential dependency, we briefly illustrate and discuss its relevance to block locator size problem and other interesting potentials for dealing with similar problems generally.
Lemma: Suppose a difficulty-adjustment epoch, already chosen as a checkpoint, ends at time t0; we have measured an average network hashpower C, and we begin introducing a relatively large hashpower Q over a period of time t greater than or equal to one ideal epoch time T0. The number of epochs n that occur in the period [t0, t0+t] is at its maximum when Q is distributed such that the network hashrate goes through a geometric progression:
C_i = C*q^i
with Sum(C_i - C_(i-1)) = Q, the increased hashpower.
Proof: For a network that is in equilibrium with hashpower C at the end of a given epoch, and an arbitrary distribution of the new power Q over n epochs, we have the sequence
C, C*k_1, (C*k_1)*k_2, ..., (C*k_(n-1))*k_n
defined recursively as:
C_i = C_(i-1)*k_i ,  C_0 = C
Observing that for n+1 > i > 0:
C_i = C*k_1*k_2*...*k_i
and at the end of the application of the power Q, we have a total hashpower C+Q equal to the last term of the sequence:
C+Q = C*k_1*k_2*...*k_n
We have to prove that k_1 = k_2 = ... = k_n = q for the n epochs to occur in the least possible actual time. For this, note that at the end of each epoch the difficulty adjustment algorithm of bitcoin (the 4-times limit temporarily ignored) resets the block generation pace to T_0 for future epochs, so for each epoch we have:
t_i = T_0 * C_(i-1)/C_i = T_0/k_i
For the total elapsed time t during the n epochs, we have:
t = T_0 + T_0/k_1 + T_0/k_2 + ... + T_0/k_n
Then:
t/T_0 = 1 + 1/k_1 + 1/k_2 + ... + 1/k_n = 1 + f(k)/(k_1*k_2*...*k_n) = 1 + f(k)/((C+Q)/C)
where f(k) is the sum of the sequence:
a_i = k_1*k_2*...*k_n / k_i
Now we need this sum to be at its minimum. We first calculate the product of the terms as:
prod(a_i) = (k_1*k_2*...*k_n)^(n-1) = ((C+Q)/C)^(n-1)
where (C+Q)/C and n-1 are constants; for a fixed product, the sum is minimal when a_1 = a_2 = ... = a_n, hence:
k_1 = k_2 = ... = k_n
This completes the proof, because for the minimum time to elapse the power sequence must be rewritten as:
C_i = C_(i-1)*q = C_0*q^i
where C_0 = C and n+1 > i > 0, which is a geometric progression.
Theorem: The minimum hashpower growth factor F = (Q+C)/C needed for the bitcoin blockchain* to grow by an abnormal ratio n/N is determined by:
F = (n/N)^n
where C is the current hashpower of the network and Q is the absolute increase during a period of N*T_0.
*Note: Bitcoin's difficulty adjustment algorithm applies a 4-times maximum threshold that is deliberately ignored here for practical purposes.
Proof: For a given F = (Q+C)/C, using the geometric progression lemma above we have:
t = T_0*n/q  ==>  t/T_0 = N = n/q  ==>  n/N = q
and
Q = C*(q^n - 1)  ==>  (Q+C)/C = F = q^n
Eliminating q:
F = (n/N)^n
Discussion
The exponential relationship between the ratio of hashpower increase and the ratio by which the blockchain height diverges from the normal N = t/T_0 is an important key to understanding how impractical it would be for a network like bitcoin, with tens of thousands of petahashes of inertia (C), to diverge from its normal behavior in terms of length (chain height).
Let's have a graphical illustration of this function. We replace n with N+d, where d is the number of abnormally added epochs, so F = ((N+d)/N)^(N+d). This form of the equation illustrates how F (relative hashpower) should grow to yield N+d epochs instead of N normal epochs.
figure 1: F/d diagram with various N values for F up to 30 times hashpower increase
Figure 1 illustrates how hard it is to reach 2-3 epochs of divergence within 3, 6, 12, or 24 standard epoch times (two weeks each); e.g. for reaching 3 epochs of divergence within 24 weeks (12 epochs), even a 40-times increase in network hashpower doesn't suffice.
Figure 2 (below) illustrates the same function at a larger scale (up to a 1000-times hashpower increase). To diverge 6 epochs within 48 weeks we need an 800-times increase in network hashpower, and within 24 weeks, even with a 1000-times increase, the divergence can't approach 6.
figure 2: F/d diagram with various N values for F up to 1000 times hashpower increase
This exponential behavior is very interesting and can potentially be used as a basis for confronting malicious bootstrap attacks and issues like the one mentioned at the beginning of this article: the bogus block locator issue.
As for the 4-times maximum threshold bitcoin uses for difficulty adjustment: obviously, a 4-times or greater increase per epoch is infeasible for the network, especially over large windows of time like 1-2 years, which would compound to infeasible quantities of hashpower. Hence, ignoring that threshold practically won't affect this analysis.
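The numbers quoted from figure 2 can be reproduced directly from F = (n/N)^n with n = N + d:

```python
def hashpower_factor(N, d):
    # minimum growth factor F = (n/N)^n needed for d extra epochs
    # within a period of N normal epoch times (N*T_0)
    n = N + d
    return (n / N) ** n

print(round(hashpower_factor(24, 6)))  # 808: ~800x for 6 extra epochs in 48 weeks
print(round(hashpower_factor(12, 6)))  # 1478: beyond 1000x within 24 weeks
```

The steep growth of these values is the whole point: each additional epoch of divergence multiplies the required hashpower enormously.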
Archive link : https://archive.fo/wzCf9
Short comment/info : An interesting analysis which shows the actual mean block time is only ~9.4 minutes, not 10 minutes, mainly due to the difficulty adjustment every 2 weeks. Simply calculate (timestamp of the current chain tip - timestamp of block #1) / current block height if you doubt it.
#6
Hi together,
I'm an IT student writing a thesis about atomic swaps on BTC and BTC-like blockchains. For the thesis I decided to use BTC, LTC, BCH and DCR. These chains have a somewhat similar codebase and the same scripting language (I'm not a professional, so there might be differences, but they are not that serious), and they all have a high enough market cap to be relevant for atomic swaps.
So the goal of the thesis is to find hashed timelock contracts (HTLCs) and connect matching HTLCs from different chains to get the atomic swap. Therefore I first searched the web for anything on atomic swaps [1] and analyzed the input script of this transaction [2] to get a basic understanding of how atomic swaps work and what they look like.
Then I wrote a Go program to search for any script longer than a simple P2PKH script. This gave me a list of many different scripts, which I analyzed by hand to keep only the HTLC ones. (Besides many multisig scripts, there is not much to find on BTC^^)
At this point I found the multiple different types of HTLCs listed below. Afterwards I crawled* BTC again, saving all transactions with HTLC scripts and storing the interesting data like tx-id, input value, pubKeyHashes, the secrets and their hashes. I found about one hundred HTLCs on BTC so far.
I did the same for LTC and found about 400 HTLCs.
As far as I understood, the secret of an HTLC has to be the same on both chains. So I wrote another Go program to match the found HTLCs from BTC and LTC and got around 30 matches. The next steps would then be to crawl BCH and DCR and also match the HTLCs found there.
* Crawling in this case means that I search the blockchain backwards (to get the newest first; the beginning years are not that interesting in this case^^) until the beginning of 2017, so about 18 months. As stated in [1], the first known atomic swap between BTC and LTC was made on 19th April 2017 (or April 19th 2017 or 19.4.2017 or whatever you like), so there is not much sense in crawling any further.
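The matching step described above pairs HTLCs by their shared secret hash. A minimal sketch of that idea (the record fields are my assumptions, modeled on the data the author says he stores):

```python
# Pair HTLCs found on two chains by their (shared) secret hash.
# Record fields (txid, secret_hash) are illustrative assumptions.

def match_htlcs(btc_htlcs, ltc_htlcs):
    """Return (btc, ltc) record pairs whose secret hashes coincide."""
    by_hash = {}
    for rec in ltc_htlcs:
        by_hash.setdefault(rec["secret_hash"], []).append(rec)
    matches = []
    for rec in btc_htlcs:
        for other in by_hash.get(rec["secret_hash"], []):
            matches.append((rec, other))
    return matches

btc = [{"txid": "btc-1", "secret_hash": "aa" * 32}]
ltc = [{"txid": "ltc-1", "secret_hash": "aa" * 32},
       {"txid": "ltc-2", "secret_hash": "bb" * 32}]
print(len(match_htlcs(btc, ltc)))  # 1
```

Indexing one side by hash keeps the matching linear instead of quadratic over the crawled records.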
My questions now are the following:
- Why are there so many different types? Is it for compatibility with other chains, or something else?
- What are the differences between these types (besides length and hashing algorithm)?
- What are the advantages and disadvantages of these types?
- Why are there so many HTLCs on LTC and so few on BTC?
- Do you know other such HTLC scripts?
- Can you provide interesting resources on this topic?
I'm open to any constructive input and hope you have a few answers for me. Thank you in advance.
Type 1: sha256 secret, length=97byte
63 if
82 size
01 data1
20 -> 32
88 equalverify
a8 sha256
20 data32
<secret_hash 32byte>
88 equalverify
76 dup
a9 hash160
14 data20
<pubkey_hash1 20byte>
67 else
04 data4
<timelock 4byte>
b1 checklocktimeverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
68 endif
88 equalverify
ac checksig
Type 2a: sha256 secret, length=94byte
63 if
a8 sha256
20 data32
<secret_hash 32byte>
76 dup
a9 hash160
14 data20
<pubkey_hash1 20byte>
88 equalverify
ac checksig
67 else
04 data4
<timelock 4byte>
b1 checklocktimeverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
88 equalverify
ac checksig
68 endif
Type 2b: sha256 secret, length=93byte
63 if
a8 sha256
20 data32
<secret_hash 32byte>
88 equalverify
76 dup
a9 hash160
14 data20
<pubkey_hash1 20byte>
67 else
04 data4
<timelock 4byte>
b1 checklocktimeverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
68 endif
88 equalverify
ac checksig
Type 3: ripemd160 secret, length=81byte
63 if
a6 ripemd160
14 data20
<secret_hash 20byte>
88 equalverify
76 dup
a9 hash160
14 data20
<pubkey_hash1 20byte>
67 else
04 data4
<timelock 4byte>
b1 checklocktimeverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
68 endif
88 equalverify
ac checksig
Type 4a: hash160 secret, length=86byte
63 if
03 data3
<timelock 3byte>
b1 checklocktimeverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
88 equalverify
ac checksig
67 else
76 dup
a9 hash160
14 data20
<secret_hash 20byte>
88 equalverify
ad checksigverify
82 size
01 data1
21 -> 33
88 equalverify
a9 hash160
14 data20
<pubkey_hash1 20byte>
87 equal
68 endif
Type 4b: hash160 secret, length=82byte
63 if
03 data3
<timelock 3byte>
b1 checklocktimeverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
88 equalverify
ac checksig
67 else
76 dup
a9 hash160
14 data20
<secret_hash 20byte>
88 equalverify
ad checksigverify
a9 hash160
14 data20
<pubkey_hash1 20byte>
87 equal
68 endif
Type 5a: hash160 secret, length=81byte
63 if
a9 hash160
14 data20
<secret_hash 20byte>
88 equalverify
76 dup
a9 hash160
14 data20
<pubkey_hash1 20byte>
67 else
04 data4
<timelock 4byte>
b2 checksequenceverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
68 endif
88 equalverify
ac checksig
Type 5b: hash160 secret, length=78byte
63 if
a9 hash160
14 data20
<secret_hash 20byte>
88 equalverify
76 dup
a9 hash160
14 data20
<pubkey_hash1 20byte>
67 else
01 data1
<timelock 1byte>
b2 checksequenceverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
68 endif
88 equalverify
ac checksig
Type 6: hash160 secret, length=79byte
63 if
54 <timelock op>
b1 checklocktimeverify
75 drop
76 dup
a9 hash160
14 data20
<pubkey_hash2 20byte>
88 equalverify
ac checksig
67 else
76 dup
a9 hash160
14 data20
<secret_hash 20byte>
88 equalverify
ad checksigverify
a9 hash160
14 data20
<pubkey_hash1 20byte>
87 equal
68 endif
Type 7: multiple ripemd160 secrets, length=80 + n*23byte
63 if
a6 ripemd160
14 data20
<secret_hash1 20byte>
88 equalverify
a6 ripemd160
14 data20
<secret_hash2 20byte>
...
88 equalverify
a6 ripemd160
14 data20
<secret_hash_n 20byte>
88 equalverify
21 data33
<signature1 33byte>
ac checksig
67 else
04 data4
<timelock 4byte>
b1 checklocktimeverify
75 drop
21 data33
<signature2 33byte>
ac checksig
68 endif
Type 8: multiple ripemd160 secrets, length=81 + n*23byte
74 depth
60 16
87 equal
63 if
a6 ripemd160
14 data20
<secret_hash1 20byte>
88 equalverify
a6 ripemd160
14 data20
<secret_hash2 20byte>
...
88 equalverify
a6 ripemd160
14 data20
<secret_hash15 20byte>
88 equalverify
21 data33
<signature1 33byte>
67 else
03 data3
<timelock 3byte>
b1 checklocktimeverify
75 drop
21 data33
<signature2 33byte>
68 endif
ac checksig
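The types above differ mainly in which hash opcode guards the secret (ripemd160/sha256/hash160) and which timelock opcode gates the refund path (CLTV vs CSV). A sketch of how one might detect that automatically while crawling (my illustration; the sample script is a hand-built Type 3 skeleton with zeroed hashes, not a real transaction):

```python
# Walk a redeem script, skipping pushed data, and report which hash and
# timelock opcodes it uses. Opcode values follow the Bitcoin Script
# encoding: 0xa6 RIPEMD160, 0xa8 SHA256, 0xa9 HASH160, 0xb1 CLTV, 0xb2 CSV.

HASH_OPS = {0xa6: "ripemd160", 0xa8: "sha256", 0xa9: "hash160"}
LOCK_OPS = {0xb1: "checklocktimeverify", 0xb2: "checksequenceverify"}

def opcodes(script):
    """Yield opcode bytes, skipping data moved by direct pushes (0x01-0x4b)."""
    i = 0
    while i < len(script):
        op = script[i]
        i += 1
        if 0x01 <= op <= 0x4b:  # the next `op` bytes are data, not opcodes
            i += op
        yield op

def classify(script):
    ops = list(opcodes(script))
    hashes = [HASH_OPS[o] for o in ops if o in HASH_OPS]
    locks = [LOCK_OPS[o] for o in ops if o in LOCK_OPS]
    return hashes, locks

# Hand-built Type 3 skeleton (81 bytes) with zeroed-out hashes/timelock:
type3 = (b"\x63\xa6\x14" + b"\x00" * 20 +        # if ripemd160 <secret_hash>
         b"\x88\x76\xa9\x14" + b"\x00" * 20 +    # equalverify dup hash160 <pkh1>
         b"\x67\x04" + b"\x00" * 4 +             # else <timelock 4byte>
         b"\xb1\x75\x76\xa9\x14" + b"\x00" * 20  # cltv drop dup hash160 <pkh2>
         + b"\x68\x88\xac")                      # endif equalverify checksig

print(classify(type3))  # (['ripemd160', 'hash160', 'hash160'], ['checklocktimeverify'])
```

Skipping push data matters: a 20-byte hash can easily contain bytes that collide with opcode values, so a naive byte scan would misclassify scripts.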
[1] http://www.cryptovibes.com/crypto-news/charlie-lees-atomic-swap-between-litecoin-and-bitcoin-was-a-success/
[2] https://insight.bitpay.com/tx/0bb5a53a9c7e84e2c45d6a46a7b72afc2feffb8826b9aeb3848699c6fd856480
Archive link : https://archive.fo/TXnno
Short comment/info : Interesting research, since few people realized that while atomic swaps could bring better privacy, the atomic-swap transactions on both coins share the same secret, which could lead to de-anonymization. He also released his thesis and script in the posts below (https://bitcointalk.org/index.php?topic=4686439.msg45980025#msg45980025), even though the paper is in German and the script only works with their own daemon.