Topic: Decoupling transactions and POW  (Read 2181 times)
TierNolan (OP)
April 19, 2013, 12:22:58 PM  #21

Quote
Thanks for the explanation. I think I mostly understand what you propose, but I don't understand some of the mechanics of how the tx-free blocks are generated - what's the proof of work? Also, how does a block become a tx block?

They are the same as normal blocks.

Each block header has a merkle root in it.  This can be scrambled by changing an ignored field in the coinbase transaction.

Normal blocks have a pointer to the previous (main chain) block in their header and all transactions included must be valid according to the rules.  So, if there were 4 normal blocks, then the chain would look like this:

Root <- A <- B <- C <- D

A miner would then create a valid (unsolved) "E" block and look at the merkle root.  If the 10 least significant bits of the merkle root are the same as the 10 least significant bits of D's hash, then the miner has a valid block (ignoring the POW requirement).  Otherwise, he scrambles the merkle root by updating the coinbase and checks again.  It would take on average 1024 attempts to get that to work.  This would be pretty fast even on a CPU.
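Something like this grinding loop, as a Python sketch (double_sha256 and merkle_root here are simplified stand-ins for Bitcoin's real serialization, so treat the details as assumptions):

Code:
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    # Pair-and-hash upward; Bitcoin duplicates the last hash on odd levels.
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])
        hashes = [double_sha256(hashes[i] + hashes[i + 1])
                  for i in range(0, len(hashes), 2)]
    return hashes[0]

def grind_coinbase(coinbase_base: bytes, tx_hashes: list, prev_hash: bytes) -> int:
    # Scramble the ignored coinbase field until the 10 least significant
    # bits of the merkle root equal the 10 LSBs of the previous block's
    # hash - about 1024 attempts on average, fast even on a CPU.
    want = int.from_bytes(prev_hash, "little") & 0x3FF
    extra_nonce = 0
    while True:
        coinbase_hash = double_sha256(coinbase_base + extra_nonce.to_bytes(8, "little"))
        root = merkle_root([coinbase_hash] + list(tx_hashes))
        if int.from_bytes(root, "little") & 0x3FF == want:
            return extra_nonce
        extra_nonce += 1

Each attempt has a 1 in 2^10 chance of matching, hence the ~1024 tries.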

He then tries to solve the block as normal, using whatever hashing power he has available.

If he solves the block to normal difficulty, then the block becomes block E and he broadcasts it to the network, as normal.

If he solves it to 1/64 of the difficulty, then he broadcasts just the header as a sub-chain block.

This is a valid sub-chain link, since it points at D and the LSBs of the merkle root match D's.  It is now block D1, instead of E.

Root <- A <- B <- C <- D <- D1

He then has to update his header.  It keeps pointing at D, but now he has to get the LSBs of the merkle root to match D1's hash instead.  The main previous pointer still references D, while the 10-bit reference points to D1.  The result is a candidate D2 block that is also a valid candidate E block under the standard chain link rules.

On average there will be 64 sub-chain blocks per main chain block, so the chain would be something like

Root <- A <- B <- C <- D <- D1 <- D2 .... <- D71 <- E

The miner would also need to update his header if other miners broadcast their headers.  That would happen about once every 10 seconds.  If he doesn't then his blocks would lose orphan tie breaks.

organofcorti
April 19, 2013, 12:34:56 PM  #22

Thanks for taking the time to explain that in a way I could follow. One other question - how do you suggest the block reward be distributed? Every block, or only in blocks with tx fees?


TierNolan (OP)
April 19, 2013, 12:37:37 PM  #23

Quote
One other question - how do you suggest the block reward be distributed? Every block, or only in blocks with tx fees?

To keep it backward compatible, it has to be main blocks only.

However, a rule like p2pool's could be used to pay for the sub-links.  A main block which doesn't pay the sub-links would not count.

As I said, I think the fee isn't required.  The incentive not to broadcast is pretty low.

jl2012
April 19, 2013, 01:42:26 PM  #24

What's the implication for mining pools? Do they need to communicate with miners much more frequently?

TierNolan (OP)
April 19, 2013, 01:56:41 PM  #25

Quote
What's the implication for mining pools? Do they need to communicate with miners much more frequently?

The header target would update around once every 10 seconds.  However, other than that, there is almost no change.  Pools already tell their clients that the difficulty is much lower than it really is.

If they set it to 1/1024, then clients would send in all block headers that are better than 1/1024 difficulty and then the pool would select the ones that are better than 1/64 for the sub-chain.

Smart miners would have to be told what the merkle root rule is, so they can generate valid headers.  This is a 10-bit number that they would need to be told about every 10 seconds.  Normal clients would be told an 80 byte header target every 10 seconds.
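For illustration, the pool-side triage under those thresholds might look like this (a sketch only; it uses the convention that an easier difficulty means a numerically larger target):

Code:
def classify_header(header_hash: int, block_target: int) -> str:
    # Full difficulty: a standard main chain block.
    if header_hash <= block_target:
        return "main block"
    # 1/64 difficulty: broadcast the 80-byte header as a sub-chain block.
    if header_hash <= block_target * 64:
        return "sub-chain header"
    # 1/1024 difficulty: counts as a pool share, nothing more.
    if header_hash <= block_target * 1024:
        return "share"
    return "reject"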

If the main chain block reward is 16 points, then miners not participating in the sub-chain could get their blocks orphaned with high probability, even if they distribute their blocks to 100% of the network first.

kjj
April 19, 2013, 02:07:04 PM  #26

I'm pretty sure this would lead to more forks, not less.  The time between sub-blocks is short relative to network latency, so the sub-chain would be wildly more divergent than the current blockchain.  Essentially, pockets of miners would be working on their own sub-chains, with no way to reconcile with the broader network until a real block hit.  At best it would be pointless; at worst it would lead to chaos.

Deafboy
April 19, 2013, 02:17:31 PM  #27

This does not make any sense.
Faster (lower difficulty) blocks = less secure blocks.
TierNolan (OP)
April 19, 2013, 04:17:33 PM  #28

Quote
This does not make any sense.
Faster (lower difficulty) blocks = less secure blocks.

There are 3 security problems:

1) Random reversal

This is where a fork manages to overtake the main chain through luck.  If the fork has 10% of the hashing power and starts 5 blocks behind, then the odds of it ever overtaking the main chain are (0.1/0.9)^5, roughly 0.002%.

This is where the 6-confirms rule comes into things.  After 6 confirms, the transaction is assumed mostly safe from reversal.

2) Brute forcing the new fork

This is where someone throws lots of hashing power at the problem.  The cost of renting 50% of the network's hashing power would be related to the minting reward and tx fees being paid.

If you have a 1000 BTC transaction, then you should wait at least 1000 / 25 = 40 blocks before considering it confirmed.  If you don't, in theory someone could spend 1000 BTC mining an alt chain.

That assumes that there is enough non-bitcoin hashing power available to win the race.  I don't think the current client does this calculation.
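A worked version of both rules (the catch-up formula is the standard gambler's-ruin result from the Bitcoin whitepaper; the 40-block figure is the 1000/25 rule above):

Code:
import math

def catch_up_probability(q: float, z: int) -> float:
    # Chance that a fork with fraction q of the hash power ever closes
    # a z-block deficit: (q/p)^z, or certainty if q >= 1/2.
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

def confirmations_for_value(value_btc: float, subsidy_btc: float = 25.0) -> int:
    # Wait until the honest chain has minted at least the payment's value,
    # with the usual 6-block floor.
    return max(6, math.ceil(value_btc / subsidy_btc))

print(catch_up_probability(0.10, 5))     # ~1.7e-5
print(confirmations_for_value(1000.0))   # 40 blocks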

3) Latency

If block propagation latency is high relative to the block time, then it is in the best interests of a miner not to propagate their block.  The only reason to propagate is to get other miners to start building on your block.  If propagation is too slow, you should hold the block back; that way you get to build on a longer chain than everyone else.  The end result is that all miners hold back their chains, and the most powerful single miner ends up with the longest chain.  This is why the block rate is 10 minutes.

----------------------------

If your transaction is low value, say < 1 BTC, then problem 2 is not a problem.  Even after 1 confirmation, it wouldn't be worth it to reverse your transaction.

Therefore, problems 1 and 3 are the most important for fast-confirmation applications. 

The latency problem is helped because the sub-chain is only 80 bytes to propagate per block, so can be very fast.  Also, propagation is helped because it is fast to confirm (you don't have to check the transactions referred to by the merkle root are valid).

The main problem is therefore random reversals.  This has nothing to do with the total POW.  The more blocks the better, as long as latency is kept low.  The sub-chain achieves that: not quite as well as a very fast main chain would, but it suffers less from latency.

doobadoo
April 20, 2013, 06:20:49 AM  #29


Quote
Perhaps an alt chain or something.  A validator can create initial reputation by submitting proof of work to the alt chain.

Validators can then "stamp" blocks as valid.  However, if another node then proves the block is wrong, then the validator's reputation goes to zero.

The updated proposal doesn't decouple things though, it works as now, just there are extra block headers being published.

Bitcoin txns are based on proof not trust, or reputation.

Substituting trust for proof will give bad results.

Why exactly would you mine a bunch of non-tx blocks?  Is this designed to create more block reward events so that coins can be distributed to solo miners?

Or are you trying to create more movements of nature so that the tx window happens closer to 10 minutes?

You know I often wondered if the Poisson distribution of block generation times could be fixed by expanding the degrees of freedom of the hash algo.  The hash algo looks for 1 nonce (one movement of nature).  If it were to look for many nonces (say 32), we'd get much closer to 10 minutes.  We certainly wouldn't have 1% of blocks being found in less than 10 sec.

Not to hijack the thread, but would simply changing the algo to:

1)  A sha256 hash below the difficulty target
2)  A sha256(sha256) below 50%, iterated 31 times

Thus a correct hash would have an initial hash less than the target, and then a (1/2)^31 chance of having successive hashes be values less than the median of the potential output range of sha256.

Could this "successive" hash requirement accomplish the same thing?  Getting the hash window closer to a 10 minute average?
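As I read it, the proposed check would be something like the sketch below (the "median" test is just a below-half comparison on the 256-bit output; details are assumptions):

Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

MEDIAN = 1 << 255  # midpoint of sha256's output range

def check(header: bytes, target: int, rounds: int = 31) -> bool:
    # 1) the initial hash must be below the difficulty target;
    # 2) each of 31 successive re-hashes must land below the median,
    #    a (1/2)**31 chance if the steps were independent.
    h = sha256d(header)
    if int.from_bytes(h, "big") >= target:
        return False
    for _ in range(rounds):
        h = sha256d(h)
        if int.from_bytes(h, "big") >= MEDIAN:
            return False
    return True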

"It is, quite honestly, the biggest challenge to central banking since Andrew Jackson." -evoorhees
TierNolan (OP)
April 20, 2013, 08:01:57 AM  #30

Quote
Bitcoin txns are based on proof not trust, or reputation.

Substituting trust for proof will give bad results.

The validator thing isn't part of the updated proposal.

If someone pays some BTC to become a validator, and they lose their status if they lie, then that creates an incentive not to lie.

The validators were just to get fast answers.  It would be coupled with a way to prove that they are wrong.  If someone submits proof, then it would cancel the validator's status.

Quote
1)  A sha256 hash below the difficulty target
2) A sha256(sha256) below 50%, iterated 31

The reason for the proposal was to have faster sub-blocks.  This is because you need at least 6 blocks, no matter what their difficulty, to protect against random double spends.  It wasn't to help pools have lower variance.

However, your proposal is interesting.  Maybe it could be used by pools.

I was thinking of something much more complex for the problem.

organofcorti
April 20, 2013, 09:23:16 AM  #31

Quote
Or are you trying to create more movements of nature so that the tx window happens closer to 10 minutes?

You know I often wondered if the Poisson distribution of block generation times could be fixed by expanding the degrees of freedom of the hash algo.  The hash algo looks for 1 nonce (one movement of nature).  If it were to look for many nonces (say 32), we'd get much closer to 10 minutes.  We certainly wouldn't have 1% of blocks being found in less than 10 sec.

Not to hijack the thread, but would simply changing the algo to:

1)  A sha256 hash below the difficulty target
2)  A sha256(sha256) below 50%, iterated 31 times

Thus a correct hash would have an initial hash less than the target, and then a (1/2)^31 chance of having successive hashes be values less than the median of the potential output range of sha256.

Could this "successive" hash requirement accomplish the same thing?  Getting the hash window closer to a 10 minute average?

Are you suggesting that the current method of block finding be replaced by the above? So in order to solve a block you need to have a sha256 below target and then 31 x sha256(sha256()) below the median.

Unless I misunderstand you, that's still a single probability for any given target. This means that solving a block is still a Bernoulli trial, so the times between block solves will be exponentially distributed (with a varying lambda as hashrate changes) and the number of blocks solved in any given time period will still be a non-homogeneous Poisson process.

If you want the non-homogeneous Poisson process to be less widely distributed, you have to increase lambda - more blocks in a given time period.
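A toy simulation of this point, with deliberately small parameters so it runs quickly: whatever the internals of the composite test, only its combined success probability matters, so inter-solve times stay geometric (exponential in the limit):

Code:
import random

def composite_trial(p_first: float, rounds: int) -> bool:
    # One hash attempt: the below-target check, then `rounds` below-median checks.
    if random.random() >= p_first:
        return False
    return all(random.random() < 0.5 for _ in range(rounds))

def attempts_until_solve(p_first: float, rounds: int) -> int:
    n = 1
    while not composite_trial(p_first, rounds):
        n += 1
    return n

# Toy parameters; the expected mean is 1 / (0.3 * 0.5**3), about 26.7,
# exactly as for a plain Bernoulli process with that combined probability.
samples = [attempts_until_solve(0.3, 3) for _ in range(10_000)]
print(sum(samples) / len(samples))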

Unless I misunderstand what you mean, in which case I'd really like to understand what you mean.



TierNolan (OP)
April 20, 2013, 09:31:49 AM  #32

Quote
Unless I misunderstand you, that's still a single probability for any given target. This means that solving a block is still a Bernoulli trial, so the times between block solves will be exponentially distributed (with a varying lambda as hashrate changes) and the number of blocks solved in any given time period will still be a non-homogeneous Poisson process.

He suggested extra nonces.  I think he meant the process to be:

h[0] = hash(header)
h[1] = hash(h[0];n[1])
h[2] = hash(h[1];n[2])
....
h[31] = hash(h[30];n[31])

You send the block and also n[1] to n[31].

You can solve the block by solving h[0] < difficulty, then h[1] < difficulty, and so on up to h[31].

The nonces would have to be made larger though.  Maybe variable length byte arrays could be used.
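A sketch of that stepwise search (nonce sizes and the stage target here are placeholders): each stage is an independent puzzle solved one at a time, rather than one all-or-nothing test:

Code:
import hashlib, os

def step_hash(prev: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(prev + nonce).digest()

def solve_stages(h0: bytes, stage_target: int, stages: int = 31):
    # h0 is the already-solved header hash (h[0] < difficulty).  Each
    # stage grinds its own nonce until h[i] = hash(h[i-1]; n[i]) is
    # below the stage target, then moves on to the next stage.
    nonces, h = [], h0
    for _ in range(stages):
        while True:
            n = os.urandom(8)
            candidate = step_hash(h, n)
            if int.from_bytes(candidate, "big") < stage_target:
                nonces.append(n)
                h = candidate
                break
    return nonces, h  # publish the block plus n[1]..n[31]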

organofcorti
April 20, 2013, 09:47:03 AM  #33

Quote
Unless I misunderstand you, that's still a single probability for any given target. This means that solving a block is still a Bernoulli trial, so the times between block solves will be exponentially distributed (with a varying lambda as hashrate changes) and the number of blocks solved in any given time period will still be a non-homogeneous Poisson process.

He suggested extra nonces.  I think he meant the process to be:

h[0] = hash(header)
h[1] = hash(h[0];n[1])
h[2] = hash(h[1];n[2])
....
h[31] = hash(h[30];n[31])

You send the block and also n[1] to n[31].

You can solve the block by solving h[0] < difficulty, then h[1] < difficulty, and so on up to h[31].

The nonces would have to be made larger though.  Maybe variable length byte arrays could be used.

In your notation, what I thought he meant:

if
h[0] < target & h[31] < median of possible sha256 hashes
then
block solved

This still only has one probability, which will only vary with the target and not with each block. This means the process will have no effect on the distribution of block solve times.







TierNolan (OP)
April 20, 2013, 10:53:42 AM  #34

Quote
This still only has one probability, which will only vary with the target and not with each block. This means the process will have no effect on the distribution of block solve times.

Yeah, inherently, new nonces are needed so you can do it one step at a time.

TierNolan (OP)
April 20, 2013, 11:23:16 AM  #35

An even larger extension to this would be to have the sub-chain made up of single transactions.

You get your transaction into that sub-chain and it is locked down.

The sub-chain links would only check that there is no double spending.  This still requires RAM, but at least no elliptic curve crypto or script checks.

The next full block would include all valid transactions, in the order that they appear in the sub-chain.  The second transaction of any double-spend pair, and any transaction without valid crypto or script, would be skipped.

The chain would effectively be

F = full block
E = easy header
T = transaction

F <- T <- T ... <- T <- E <- T <- T <- ... T <- F

The reputation system could be used to prevent transaction spam.  A merchant could create an id that is cancelled if any invalid transactions are submitted.

A simple (low security) way to do it would be to find a random number that hashes to below some difficulty target and publish the result.

A transaction link would then include
- merchant id
- random number that hashes to the target
- hash(new random number)

This would count as submitting the transaction and also redirecting the "token" to the new random number.

A dishonest node could change the merchant id and target.  However, assuming most of the network is honest, the honest tx will be the one to make it into the sub-sub-chain.

The key is to keep verification as simple as possible.
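A sketch of that token handoff (all names illustrative): each link reveals the current preimage and commits to a fresh one, so spending the token and rolling it forward are the same act:

Code:
import hashlib, os

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_link(merchant_id: bytes, secret: bytes) -> tuple:
    # Reveal the current secret (which hashes to the published commitment)
    # and commit to a new one for the next transaction.
    next_secret = os.urandom(32)
    link = {
        "merchant_id": merchant_id,
        "preimage": secret,                      # random number hashing to the target
        "next_commitment": sha256(next_secret),  # hash(new random number)
    }
    return link, next_secret

def verify_link(link: dict, expected_commitment: bytes) -> bool:
    # Anyone can check the reveal against the previously published hash,
    # with no signature or script validation needed.
    return sha256(link["preimage"]) == expected_commitment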

doobadoo
April 20, 2013, 06:06:24 PM  #36

Quote
Are you suggesting that the current method of block finding be replaced by the above? So in order to solve a block you need to have a sha256 below target and then 31 x sha256(sha256()) below the median.

Yes, and admittedly, I'm not sure it accomplishes what I want.  I just don't want to expand the size of the block header by, say, 600 bytes (32 nonces ~ 20 bytes each).
Quote
Unless I misunderstand you, that's still a single probability for any given target. This means that solving a block is still bernoulli trial, so the times between block solves will be exponentially distributed (with a varying lambda for hashrate changes) and the number of blocks solved in any given time period will still be a non-homogenous poisson process.
You're probably right: the nonce is where the entropy comes from, and without an increase in entropy inside the system, nothing has actually happened (a corollary to the 2nd law of thermodynamics).

We need more probability trials to take place during block generation to move the mean closer to 10 minutes.  I just figured that by finding a correct initial hash (target within a limited interval), then feeding that into sha256 as if it were a nonce, I would be creating a second trial.  But if trial 2 is not independent of the outcome of trial 1, then we are in the same event.
Quote
If you want the distribution of the non-homogenous Poisson process to be less widely distributed you have to increase lambda - more blocks in a given time period.

Unless I misunderstand what you mean, in which I'd really like to understand what you mean.
No, that's what I mean, but you wouldn't have to publish the result hashes in the header, just the nonces and maybe the last hash.

Sha256(prev-block-hash + N0) < target = H1
Sha256(H1 + N1) < target = H2
...
Sha256(H31 + Tx Merkle Root + Time Stamp ... + N31) < target

My only concern is that you might have to publish N0-N31, which would add like hundreds of bytes to headers...unless that's trivial with broadband.

Heck, just increasing the trials to 10 would do half the job.  Is it worth 100-200 bytes of extra header weight?

"It is, quite honestly, the biggest challenge to central banking since Andrew Jackson." -evoorhees
organofcorti
April 21, 2013, 05:31:34 AM  #37

Quote
We need more probability trials to take place during block generation to move the mean closer to 10 minutes.  I just figured that by finding a correct initial hash (target within a limited interval), then feeding that into sha256 as if it were a nonce, I would be creating a second trial.  But if trial 2 is not independent of the outcome of trial 1, then we are in the same event.

The mean is already 10 minutes - you mean reduce the standard deviation? Then yes, you need multiple independent trials. But you really don't want to make the block arrival time more predictable, for reasons I outline below.

Quote
An even larger extension to this would be to have the sub-chain made up of single transactions.

The subchain is really just the main chain with only one tx instead of many, right?

If you assign every nth block to be a full block, you have the problem that the standard deviation of the expected arrival time of the full block is now much lower.

There are two cases which are of interest.


1. When the generated bitcoin value of the full tx block is approximately that of the 1-tx blocks (as would be the case currently), there is little disincentive for miners to mine the 1-tx blocks. However, you have the problem that since all blocks are part of the main chain, the solving full tx block is still more likely to lose an orphan race, which will encourage fewer tx to be included in the block.

Solution: Mandate a minimum block size. However, then each full tx block is more likely to be orphaned than a 1-tx block. If electricity costs are high, it may be the case that some hashes will have a negative expected value, so you're disincentivising (is that a word?) tx block creation.

2. When the value of the full tx block is much greater than that of the 1-tx blocks, there is an incentive not to mine the 1-tx blocks. If the standard deviation of the arrival time of the full tx block is reduced and electricity costs are appreciable, then it's possible to increase the expected value of each hash by not working on 1-tx blocks.

Solutions:
a. Make the probability of any block to be a full tx block be p.
If p does not vary, then there is no disincentive to mine the 1tx blocks but the time between full tx blocks is back to being exponentially distributed.
If p varies, then you have a trade off between the variation of the expected value of a hash and the distribution of the times between blocks. However any variation in the expected value of a hash will invite abuse.

b. Make every nth block a full tx block, but distribute the tx fees in some fashion so that the creator of tx block and the creators of the next n-1 blocks have some share of the fees. This has been covered elsewhere, and there are a number of ways this can be achieved.


I think a good summary is:
1. If you make the expected value of a block vary (either by reward or by the likelihood of it being orphaned), and
2. Electricity costs are significant, so that shutting down your mining device is more rewarding than mining, then
3. The expected value of each hash varies and some blocks are more likely to be mined than others. So,
4. If t = the critical time at which the expected value of a hash becomes worth either more or less, and
5. You make t more predictable, then
6. You provide a way for miners to maximise their earnings by shutting down their mining devices until the expected value of a share is positive again.

So you need to make each block equal in size and reward to every other block. This means that if you want a simple and provably fair system, you can't have sub-chains: each block in the chain must be equal in value and similar in size to every other block in the chain, or hashes will not have equal expected values of generated bitcoin.

I might still not be fully understanding what you meant, but if I have then I'm pretty sure my conclusions are solid.

 

TierNolan (OP)
April 21, 2013, 08:10:17 AM  #38

Quote
An even larger extension to this would be to have the sub-chain made up of single transactions.

The subchain is really just the main chain with only one tx instead of many, right?

I was thinking that transactions would specify the hash of the previous transaction or an easy header.

In this case the sub-chain is made up of full blocks, easy blocks (otherwise-valid full block headers that meet a lower POW target) and transactions.

Quote
If you assign every nth block to be a full block, you have the problem that the standard deviation of the expected arrival time of the full block is now much lower.

It isn't every nth block, it is every nth block on average.  Effectively, the sub-chain is merge mined.

Quote
1. When the generated bitcoin value of the full tx block is approximately that of the 1-tx blocks (as would be the case currently), there is little disincentive for miners to mine the 1-tx blocks. However, you have the problem that since all blocks are part of the main chain, the solving full tx block is still more likely to lose an orphan race, which will encourage fewer tx to be included in the block.

There is no bitcoin generated by anything other than full blocks.  Otherwise, the change wouldn't be backwards compatible.

The transactions will have a tx fee though.  The higher your fee the more likely others will build on your tx.

Quote
Solution: Mandate a minimum block size. However, then each full tx block is more likely to be orphaned than a 1-tx block. If electricity costs are high, it may be the case that some hashes will have a negative expected value, so you're disincentivising (is that a word?) tx block creation.

Miners don't mine the tx chain.  That would effectively be paid for by tx fees.

It is in the best interests of merchants/customers to add their tx to the longest chain, since that will inherently have the highest fees.  Some merchants might decide not to build on zero fee transactions.

As I said there is a trade-off.  You can make full blocks worth more than easy headers.  Similarly, easy headers would be worth more than transactions.

Quote
2. When the value of the full tx block is much greater than that of the 1-tx blocks, there is an incentive not to mine the 1-tx blocks.

Right, but they are paid by the tx fee.

Quote
Solutions:
a. Make the probability of any block to be a full tx block be p.

The easy blocks work that way.

Quote
If p does not vary, then there is no disincentive to mine the 1tx blocks but the time between full tx blocks is back to being exponentially distributed.

Right, I don't really see how you could get it to work otherwise.

Quote
If p varies, then you have a trade off between the variation of the expected value of a hash and the distribution of the times between blocks. However any variation in the expected value of a hash will invite abuse.

If the easy headers are valid main blocks, except with lower difficulty, then the variance can't be fixed anyway.

Quote
I think a good summary is:
1. If you make the expected value of a block vary (either by reward or by the likelihood of it being orphaned), and

Probably the orphaned thing would be the only option that is backward compatible.

Quote
2. Electricity costs are significant, so that shutting down your mining device is more rewarding than mining, then

Another option would be cooling.  For example, the server room could be cooled at a constant rate, so on average a certain amount of electricity is used.  However, you can burst a lot of extra heating.

Quote
So you need to make each block equal in size and reward to every other block. This means that if you want a simple and provably fair system, you can't have sub-chains: each block in the chain must be equal in value and similar in size to every other block in the chain, or hashes will not have equal expected values of generated bitcoin.

I might still not be fully understanding what you meant, but if I have then I'm pretty sure my conclusions are solid.

I think the problem is that the thread is discussing different objectives.  I don't think main block variance is that important.

The sub-chain was mainly to allow transaction ordering to be guaranteed, even before the next easy block, in order to protect against double spends.

Having a transaction chain locks that down pretty quickly.  A transaction could optionally specify a previous transaction, and is only valid if all transactions in the transaction chain back to the previous main block are included, except those that are invalid for some reason.  No checking would be performed except hashes.

In fact, with that system, easy blocks could almost be dropped entirely.  Your client keeps track of the transaction chain and keeps trying to submit the transaction with a different previous link until it wins.  That still requires large bandwidth per client, though, as the tx rate increases.

Maybe the merchant submits to a provider that keeps up with the block chain.  The provider doesn't have to check chain authenticity, they just need to check hash links.

organofcorti
April 21, 2013, 08:42:53 AM  #39

I think I really haven't understood what your goal is or how you're achieving it. Sorry about that. And I thought the idea of making the tx blocks appear more regularly was a goal? Never mind, must have had my wires crossed.



TierNolan (OP)
April 21, 2013, 09:44:50 AM  #40

Quote
I think I really haven't understood what your goal is or how you're achieving it. Sorry about that. And I thought the idea of making the tx blocks appear more regularly was a goal? Never mind, must have had my wires crossed.

The goal is to allow faster confirmations.

You inherently need 6 blocks after a transaction to consider it confirmed.

The main proposal is that when miners find a block that meets 1/64th of the required difficulty, they publish its header.

A tweak to merkle root rules allows a miner to specify a previous sub-chain header.  This is done before they start mining it.

If they hit the main difficulty, then it is a standard compliant main chain header.

This gives a sub-chain with a 64X block rate, so it takes 1 minute for 6 confirms.

There is a tradeoff in how you do rewards.  If the sub-chain blocks count as the same POW as the main chain blocks, then publishing sub-chain headers is a poor strategy.  If you keep them to yourself, you can publish them and your final block as a single unit, and that massively strengthens your block's effective POW.  If you publish a main block + 7 sub-blocks, your block jumps forward 8 steps.  The miner with the most total power would win nearly every main block.

However, if you make the sub-chain links too cheap, then the sub-chain doesn't have any strength to maintain ordering.

I think 16 points of POW for a main chain block and 1 point of POW for a sub-chain block seems reasonable.  That gives only a weak incentive to withhold sub-chain blocks.  As long as a miner is within 16 sub-chain links of the head of the chain, their main block will jump to the front of the queue.  To win by withholding, a miner would have to mine 16 sub-blocks before the rest of the network can mine 16.
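To make the withholding trade-off concrete, a toy comparison using the 16:1 weighting from this post:

Code:
MAIN_POINTS = 16   # POW points for a full-difficulty main block
SUB_POINTS = 1     # POW points for a 1/64-difficulty sub-chain header

def chain_weight(main_blocks: int, sub_headers: int) -> int:
    return main_blocks * MAIN_POINTS + sub_headers * SUB_POINTS

# A miner withholding one solved main block (16 points) only stays ahead
# while the public sub-chain has added fewer than 16 headers on top of
# the same parent; after that the withheld block no longer wins.
withheld = chain_weight(1, 0)               # 16 points, kept private
public = chain_weight(0, 16)                # 16 points, broadcast
print(withheld, public, withheld > public)  # 16 16 False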

It actually means that you need 16 (+6?) sub-chain blocks to be sure of a confirmation, so about 2.5 minutes.

Maybe 4 points for a main chain block would be better, in that case.  Withholding is only worth it if you can generate 4 sub-chain blocks before the rest of the network can generate 4. 

However, there is a danger.  If most miners withhold their sub-chain blocks, then honest miners are harmed.

Since miners tend to be mining most blocks, another reason to publish sub-chain blocks is to lock in your block reward.  If a miner won a block 10 blocks back, they would want to keep it locked in, so they don't lose their coinbase transaction.

The second proposal/extension is to have transactions also form a chain.  You can only incorporate a transaction into a block if there is a chain from the previous main block through the transaction to the new block.

Giving it more thought, maybe only certain transactions would be marked with the fast flag.  If you are willing to wait for 6 confirmations, then don't bother flooding the fast transaction chain.
