Author Topic: Getting rid of pools: Proof of Collaborative Work  (Read 1854 times)
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 09, 2018, 11:45:54 PM
Last edit: June 09, 2018, 11:58:38 PM by aliashraf
#21

Yes, we are getting synchronized, I suppose.  Smiley

As for the risks you mentioned about forgetting the Prepared Block:

Firstly, keep in mind that this block's Net Merkle Root points to a coinbase transaction which pays the transaction fees to the miner's wallet address. He should wait (praying for it to gather enough contribution) until it is included as the Net Merkle Root of the finalized block.

Secondly, I have made a few more assessments of the selfish mining attack I described in my last post, and I have come to a very interesting result: we add a simple restriction for full nodes to reject the last block in the chain whenever there is no valid Prepared Block with about 5% of the difficulty around.
This simple restriction leads to an excellent defence against this attack without imposing much overhead. As a result we get a new definition for the latest block in the chain: it is the one that has a twin (same Net Merkle Root, about 5% of the difficulty, same parent) present on the network. When a new round of mining is about to take place, miners simply point to the latest block that has such a twin.
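To make the rule concrete, here is a minimal sketch in Python (my own field names and serialization, not part of this proposal's reference code) of the twin check just described: a tip is only built upon once a Prepared Block with the same parent, the same Net Merkle Root, and roughly 5% of the block difficulty has been seen.

Code:
# Sketch only: assumed field names and serialization, not the proposal's reference code.
from dataclasses import dataclass
import hashlib

PREPARED_FRACTION = 0.05   # a Prepared Block must reach ~5% of the block difficulty

@dataclass
class Header:
    prev_hash: bytes
    net_merkle_root: bytes
    miner_address: bytes
    nonce: int

    def serialize(self) -> bytes:
        return self.prev_hash + self.net_merkle_root + self.miner_address + self.nonce.to_bytes(8, "big")

def sha256d(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

def meets_fraction(h: Header, target: int, fraction: float) -> bool:
    # difficulty is inversely proportional to the hash value, so requiring a
    # fraction of the difficulty relaxes the target threshold by 1/fraction
    return sha256d(h.serialize()) <= int(target / fraction)

def is_twin(tip: Header, candidate: Header) -> bool:
    return (candidate.prev_hash == tip.prev_hash
            and candidate.net_merkle_root == tip.net_merkle_root
            and candidate.serialize() != tip.serialize())

def accept_as_tip(tip: Header, seen_prepared_blocks: list, target: int) -> bool:
    # full nodes reject the newest block until a valid ~5% twin has been observed on the network
    return any(is_twin(tip, p) and meets_fraction(p, target, PREPARED_FRACTION)
               for p in seen_prepared_blocks)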

It looks to me like a waste of space to keep these twins alive for more than a few rounds, because I don't see any reason to use Prepared Blocks for bootstrapping and/or partial resynchronization yet.

ir.hn
Member | Activity: 322 | Merit: 54
June 10, 2018, 03:19:03 AM
#22

So you are saying that the preparation block's hash is included in the Net Merkle Tree. Sounds good.

I'm not really sure about the twin idea: one block is part of the blockchain and the other block is floating around somewhere? If they are exactly the same then wouldn't they just be the same block? I'm not sure how you can have two exactly identical blocks, unless one is a slight bit different.

aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 10, 2018, 06:49:34 AM
Last edit: June 10, 2018, 08:31:49 AM by aliashraf
#23

Quote from: ir.hn
So you are saying that the preparation block's hash is included in the Net Merkle Tree. Sounds good.

I'm not really sure about the twin idea: one block is part of the blockchain and the other block is floating around somewhere? If they are exactly the same then wouldn't they just be the same block? I'm not sure how you can have two exactly identical blocks, unless one is a slight bit different.

I'm afraid I haven't explained my ideas regarding this issue thoroughly in my previous post. Actually, it is not possible to have the Prepared Block's hash in the respective Net Merkle Tree (it leads to a recursive definition with no practical exit condition).

Anyway, I decided to include it in the proposal formally, and I hope it helps.

Please refer to the starting post in a few minutes.
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 10, 2018, 07:11:40 PM
Last edit: June 10, 2018, 07:39:39 PM by aliashraf
#24

@anunymint
You are so sharp, I mean too sharp.  Grin

Really? Just two minutes after I referred you here, you digested the whole idea and closed the case?

First of all, switching between pools does not change anything, and I prefer not to waste readers' time arguing about it. Just reconsider your claim and please, take your time.

Secondly, in this proposal miners do have to validate the transactions they are committing to; objecting to that amounts to saying they should leave validation to a limited number of trusted nodes, validators, authorities, whatever, as is the case with pool mining.   Shocked

You are full of surprises, and this is another weird thing you are suggesting; again, I can't imagine even arguing about such claims.

I'm familiar with this literature though; it is the outcome of any crisis: journalistic revisionism full of deconstructionist claims, ... just because something is going wrong.

I like it, seriously, it is inspiring, but ultimately we have to adopt and improve instead of giving up and trying to invent a strange thing that is at the same time objective, scalable, decentralized, sharded, ... and in which miners do not need to validate the transactions they are committing to while they are spending their resources on the mining process!

Anyway, I referred you here to reject one specific claim you have made about PoW: that it is doomed to mining variance and, as a result, to the inevitability of pool mining, and that this is the most important factor making PoW networks infeasible for sharding schemes.

Since it seems you are not going to tell us how PoCW would help to implement sharding, I'll just try "reverse engineering" here:

When you say that bitcoin won't allow sharding because it is doomed to centralized pool mining, I logically come to this conclusion:
this variant of PoW (the current proposal, Proof of Collaborative Work) resolves this issue and makes it feasible to implement sharding on a PoCW network. Right?
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 11, 2018, 11:31:35 AM
Last edit: June 11, 2018, 12:02:14 PM by aliashraf
#25

Quote from: anunymint

In my design, there would be no such problem with liveness threshold, as the miners have very good luck to contribute frequently even with small fractions of hashpower.

Oh I thought you might reply with that, and I forgot to preempt it by saying that if you count proof-of-work shares instead of winning block solutions for shard participation, then I was thinking that would trivially enable a Sybil attack but on further thought I think the vulnerability is different than my first thought.

Thinking that out a bit more, that would induce the mining farm to send all its shares as shares instead accumulating them in house and releasing only the block solutions. This would mean that small miners can’t realistically even validate that shard participation is honest. So the centralization risk (that eliminating pools was supposed to fix) is essentially not ameliorated.
I quoted the above reply partially from this topic.

I have merited @anunymint for this post as it is a serious objection and I appreciate it.  Smiley

Actually I was waiting for this to come and obviously I have the answer right in my pocket Wink

Talking about block validation costs, we usually forget what a validator specifically does in this process.
In Nakamoto's implementation of PoW, which is installed in bitcoin and inherited by most of the altcoins either conceptually or directly through forking his code, when a peer claims a block the receivers have a lot of work to do to verify its validity and make a proper decision accordingly.
It involves checking:
  • 1- The block is well formed and queried properly: very trivial, CPU-bound
  • 2- The block header's hashPrevBlock field points to the most recent block in the chain: very trivial, CPU-bound
  • 3- The block hash is as good as expected (given the current calculated difficulty): very trivial, CPU-bound
  • 4- The Merkle path (the actual block payload) encompasses valid transactions: very cumbersome, I/O-bound

In my proposal, miners have to go through this process for each Prepared Block (say 2-3 times in each round), no matter how heavy the traffic is!

This is the magic of the protocol: there is no need to re-check every submitted share's Merkle path (transaction set), because shares are not even conventional blocks; they are Collaboration Shares, and they share the same Merkle root with an already validated Prepared Block.

So the main traffic, the submitted shares, is validated in a few microseconds per share; no considerable overhead is involved here.

Even when it comes to Finalized Blocks, nodes do not, as a rule, have to go through Merkle path validation. It is one of the most beautiful features of the protocol, and I'm not aware of any competing alternative that comes close.

So the question of whether a 'small miner' is able to contribute in PoCW, given the validation overhead, becomes almost equivalent to asking whether he is able to solo mine in traditional PoW because of that overhead, now that the variance dilemma is no longer a discouraging factor.

That is what my proposal tries to fix: removing the variance problem without adding considerable validation overhead.
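To make the claim concrete, here is a minimal sketch (my own field names and constants, not the reference implementation) of why share validation stays CPU-only once the corresponding Prepared Block has been validated:

Code:
# Sketch only: assumed field names; SHARE_FRACTION stands in for the minimum share difficulty.
import hashlib

SHARE_FRACTION = 0.0001              # a share needs 0.0001 of the block target difficulty
validated_net_roots = set()          # Net Merkle roots whose transaction sets were already validated
                                     # (filled once per Prepared Block, after the full I/O-bound check)

def sha256d(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

def validate_share(prev_hash: bytes, net_merkle_root: bytes, miner_addr: bytes,
                   nonce: int, target: int) -> bool:
    if net_merkle_root not in validated_net_roots:
        return False                 # unknown Prepared Block: the expensive validation must happen first
    header = prev_hash + net_merkle_root + miner_addr + nonce.to_bytes(8, "big")
    # CPU-only: one double-SHA256 and a comparison against the relaxed (0.0001) target
    return sha256d(header) <= int(target / SHARE_FRACTION)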

Quote
This creates another security problem in general which is that validation comes closer to the cost of mining, so it limits who can participate in validation. If you make each proof-of-work share 100X more difficult than the average network cost of computing one proof-of-work trial, then only miners with significantly more than 1% of the hashrate can possibly afford to do validation. If you instead make the share difficulty too high, then the variance of the small miners is too high to make it into the shards with a reliable liveness outcome.
Already resolved.
Quote
... Btw, AFAIR you were not the first person to propose that shares be credited instead of just block solutions at the protocol layer. Seems I vaguely recall the idea goes back to even before I arrived on BCT in 2013. Don’t recall how well developed the proposal was.
Highly appreciate it if you would kindly refer me to the source.
Quote
Also I think miners with more than 1% of the network hashrate would realize that it’s in their interest to agree to orphan any blocks that have share difficulties that is less than some reasonable level so that these miners will not be forced to do all this wasteful validation of low difficulty shares. Thus they would censor the small miners entirely. So there would seem to be a natural Schelling point around effectively (as I have described) automatically turning off your protocol change. The network would see your protocol change as malware from an economic perspective.
Although it is not a big deal to have big miners solo mine, I have looked somewhat closer at this issue:
No, there would be no incentive to solo mine and abstain from collaborating, even for big miners (and solo mining is supported directly by the latest tweaks and upgrades I committed before your post).

By analogy:
In traditional PoW, a big miner owning a large farm may choose to solo mine because he does not suffer from mining variance enough to consider paying fees and taking on risks by joining a pool for a cure.
In PoCW there are almost no costs involved: the whole network is a giant, decentralized pool that charges no fees and implies no risk, while providing a stable payout and performance index.

aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 11, 2018, 08:28:53 PM
Last edit: June 11, 2018, 09:18:08 PM by aliashraf
#26

Quote from: anunymint
(quoting the exchange from post #25 above)
Appreciated the merit and I like spirited debate. The entire point is to make sure we have the correct analyses. So I must be frank about my available time and the quality of your explanations. Please put more effort into helping me understand your point.
You are welcome.  Smiley
Reading your reply to the end, I became more convinced about your time issues. It really is a problem right now; you have time to write but not to read, I suppose.  Tongue

Quote

You have a lot of verbiage here. I don’t have the time to go learn your non-standard terminology, e.g. “transaction set”. Transactions? We’re talking about mining shares here. Is that a typo? Please reorganize your response into a succinct and very coherent one that doesn't require the reader to go wade through your thought typos and private language.

You should be able to explain your idea in a very coherent way that is very easy for an expert to grok without having to go try reverse engineer your specification which employs your private language.

What do you mean by standard? What standard? ISO has issued something?
There is no way to explain a new idea in the old language. Technology is the language itself; improving technology implies extending the terminology, manipulating it and redefining a lot of words and phrases.

Being a critic, or an expert who does research on innovative ideas, requires spending time understanding the way terms and words have been redefined and used to create the new idea.
It is what we do: we create terms and concepts, nothing more, absolutely nothing more.

And yet, the combination 'transaction set' is not such a complicated, innovative term of mine; it is used once or twice in my posts as a complementary description for non-expert readers who may be confused by 'Merkle path', which I use more often.

Quote
Bottom line is that every mining share that will be recorded in the blockchain (in a Merkle tree or whatever) and is rewarded by the blockchain, has to be validated by all nodes to be sure that someone isn't cheating and being rewarded for shares that were invalid.

Bottom line is you should read  Grin

I'll explain it again here, just for you, but I swear to god it is the last time I do this for a critic who doesn't read:

The shares under discussion are called Collaboration Shares in this proposal (I hope neither ISO nor you are offended); these are NOT conventional blocks like what solo or pool miners submit to the blockchain or to a pool service.

Most importantly, the Merkle tree these shares commit to is NOT the conventional bitcoin Merkle tree your poisoned terminology is used to; instead they commit to a variant which I (with all due respect to ISO) call the Net Merkle Tree, whose coinbase transaction carries no block reward (i.e. the sum of inputs equals the sum of outputs).

This way, miners are repeatedly dealing with different shares that have the same Merkle root; there is no need to fetch the transactions from the mempool or the peers, verify their signatures, check them against the UTXO set, ... over and over. It has already been done, once, when they decided to contribute to this specific Net Merkle Tree.

I think we are done here. Just think about it; this is the most innovative part, though not the most complex one. Just take a deep breath and, instead of rushing for the keyboard, use the mouse and check the starting post of this topic.
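A small sketch of the "net" property just described (my own simplified structures; the proposal only states the rule): the coinbase committed into the Net Merkle Tree claims the fees and nothing else, so inputs and outputs balance and no block reward is created at this stage.

Code:
# Sketch only: simplified transactions with pre-summed input/output values.
from dataclasses import dataclass
from typing import List

@dataclass
class Tx:
    input_value: int     # sum of the values of the inputs the transaction spends
    output_value: int    # sum of the values of its outputs

def check_net_coinbase(coinbase_output: int, txs: List[Tx]) -> bool:
    fees = sum(t.input_value - t.output_value for t in txs)
    # sum of inputs equals sum of outputs across the tree: the coinbase may pay out exactly
    # the fees; the block reward appears only later, in the Shared Coinbase Transaction
    return coinbase_output == fees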

Quote
Thus I maintain that my objection was correct, until you coherently explain how I was incorrect.
Done.

Quote

It’s not just the fees. It’s the I/O cost of moving all those shares over the network. ...
The shares, unlike blocks (remember? they are not blocks, they are just Collaboration Shares), won't cost much bandwidth. When an ordinary block is transmitted there is a lot of overhead for its payload to be loaded, instantly or incrementally, by each peer. Collaboration Shares cause no such overhead: just around 50 bytes (an exact data structure is not decided yet, though) sent once, with no handshake, no query, no further data.

Quote
...a mining farm can easily remain anonymous, but your proposal would make that more difficult.
Remaining anonymous is as easy as ever, connect to a trusted full node and do whatever you want.

Quote
... Also the cost of change in data center infrastructure and ASIC hardware to accommodate this change. Also the smaller miners are disproportionately affected by the lower economies-of-scale to deal with all these costs as well as the cost of actual validation which I did not see a coherent rebuttal to above.

You may have a coherent rebuttal. I will await it.
Well, the validation story is over (I hope), and FYI, this proposal does not involve any change to infrastructure or ASICs; it is just a software upgrade.

EDIT:
Quote
The following are essentially in the same direction as your idea (crediting all the blocks in a tree is the same as rewarding mining shares), but I also think I had seen specific mentions of the idea of putting the mining shares in the block chain. I know I had that idea already before you mentioned it, because I had read about it and even thought about it myself. I had dismissed it because the validation asymmetries are lost. I vaguely remember in past discussion it was also shot down because of the bandwidth costs and the impact on orphan rate due to propagation delays (at least that was the thinking at the time, more than 5 years ago). Vitalik also blogged about Ghost and pointed out some of its problems.

https://bitcointalk.org/index.php?topic=396350.0

https://bitcointalk.org/index.php?topic=359582.0

https://bitcointalk.org/index.php?topic=569403.0
I forgot this part when posting my reply, sorry.

Of course, putting shares in the blockchain is not far from imagination; the problem is how to do it without messing with resources and capacities, and that is what this proposal is about.
I checked the links and unfortunately they are not even close: GHOST (the first and second proposals are about it) is a story of its own that attempts to change the 'longest chain' fork rule and is out of context, while the third proposal, about the miner signing every single hash, is just an anti-pool move with no solution for the core problem: mining variance.
So I have to maintain that my work is original.
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 11, 2018, 09:30:40 PM
#27

@anunymint
Please edit your latest reply; some quote tags are missing there. I won't quote it and will simply reply to your most important point in that post:

You say that your objection is not about the signatures, UTXO, etc. of the Merkle path and the transactions included in the block, but about its hash being qualified enough!

Are you kidding? Running a SHA256 hash takes a few microseconds even for an average CPU!

An ASIC miner does it in a few nanoseconds!

Am I missing something, or are you just somewhat confused?
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 11, 2018, 09:38:34 PM
#28

Are you kidding? Running a SHA256 hash takes a few microseconds even for an average CPU!

An ASIC miner does it in a few nanoseconds!

Am I missing something, or are you just somewhat confused?

No you’re not thinking. Think about what you just wrote. Everything is relative. And go back to my original objection and the point about “100X”.

So you are serious!
Really? One nanosecond 100X is just 0.1 microseconds, and 1 microsecond 100X is 0.1 milliseconds.

Come on, you have to take it back, your objection about a validation crisis; there is no crisis, just take it back.
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 11, 2018, 09:52:15 PM
#29

Actually, I guess handling more than 180,000 shares per minute (3,000 shares per second) on a full node running on a commodity PC is totally feasible.
With the parameters I have proposed in this version, however, there would be fewer than 20,000 shares per minute even in the worst scenario.
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 11, 2018, 10:32:28 PM
#30

Are you kidding? Running a SHA256 hash takes a few microseconds even for an average CPU!

An ASIC miner does it in a few nanoseconds!

Am I missing something, or are you just somewhat confused?
...
Also please remember my objection was in terms of unbounded validators for OmniLedger shards. I never was coming over to refute your proposal for use not in OmniLedger.
Oh, you did; we are not discussing OmniLedger here. But thank you, you are taking back your objection, well, somehow; it is progress.
Quote
It may be plausible to use your proposal up to some cutoff on the smallest hashrate miner allowed. I have not actually computed it.
progress, progress.
Quote
So you are serious!
Really? One nanosecond 100X is just 0.1 microseconds, and 1 microsecond 100X is 0.1 milliseconds.

The absolute time is irrelevant. It is the relativity that matters. Please take some time to understand what that means. I shouldn’t have to explain it.
I have been studying theoretical physics for a while and I'm somewhat of an expert in relativity theory, and yet I can't find any relativity-related issue here.
Quote
The variance on the small miners is so incredibly high because their hashrate is so minuscule compared to the network hashrate.
Mining in its Contribution phase is not halted for validation; validation is done in the full node, in parallel with mining (hash generation). Validation helps in the transition between phases and is not strictly part of the mining process.
Quote
Therefore if you require the network to validate the small difficulty share that a small miner is capable of producing within 10 minutes, then that means all miners must validate all those small shares produced by all the hashrate!

The Bitcoin network hashrate is currently 40 million THPS. A single Antminer S9 is 14 THPS. So we’d need more than 2.5 million shares per second to be validated.
Actually, an S9 will produce roughly one share every five hours! Let's calculate:
1.4×10^10 / 4×10^16 = 3.5×10^-7 = 0.00000035

In PoCW, every share needs only 0.0001 of the difficulty that the target dictates. To generate one full-difficulty block per minute (the block time) on average, a miner would need the entire network hashrate; our S9, with just 0.00000035 of it, would produce one such block roughly every 2,860,000 minutes (1 / 0.00000035). With the share difficulty 10,000 times easier, that becomes one share roughly every 286 minutes, i.e. about five hours.
Again, no flood, no crisis.
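For anyone who wants to check the figures, a quick back-of-the-envelope script (the hashrates and the 0.0001 share threshold are the assumptions quoted above; the 1-minute block time is assumed as discussed earlier in the thread):

Code:
# Back-of-the-envelope check of the figures above.
# Assumptions: ~40 million TH/s network hashrate, 14 TH/s per Antminer S9,
# 1-minute block time, share difficulty fixed at 0.0001 of the block target.
s9_hashrate      = 14e12              # hashes per second
network_hashrate = 40e6 * 1e12        # hashes per second
block_time_min   = 1.0
share_fraction   = 0.0001             # one share is 10,000x easier than one block

hashrate_share    = s9_hashrate / network_hashrate            # ~3.5e-7
shares_per_minute = hashrate_share / share_fraction / block_time_min
minutes_per_share = 1.0 / shares_per_minute                   # ~286 minutes (~4.8 hours)
print(f"{hashrate_share:.2e} of the hashrate -> one share every {minutes_per_share:.0f} minutes")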
Quote
But you also have this problem to consider:

Essentially all you’re doing is lowering the block period, which is known to be insecure as it approaches a smaller multiple of the propagation delay in the network. So I am also thinking your proposal is flawed for that reason. I think that was the reason it was shot down in the past. As I wrote in one of my edits, Vitalik had explained some of these flaws in GHOST.
Reducing the block time to 1 minute is not part of this proposal from the algorithmic point of view, but I vote in favor of it and can counter any argument against it: Ethereum uses a 15-second block time with an uncle-block rate below 10%, and I believe even a 30-second block time is feasible.
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 12, 2018, 04:48:19 AM
Merited by anunymint (1)
#31

@anunymint

I understand; you are a good writer and a respected crypto advocate, and I have shown my respect for you more than once. But it just happens that the level of noise and weird claims, and the number of white papers and proposals about Proof of Something-other-than-work, is annoyingly high, and it was my fault in the first place to start a thread and try to convince people, guys like you especially, that this one is not alike.

I have to apologize for expecting too much, getting too personal and hurting you. I didn't mean it.

As I've just mentioned, it is too much to expect from advocates (who are already vaccinated against the said noise and hype) to take this proposal seriously and try to digest it thoroughly (why should they?).

You might be right: I'm not capable enough of presenting a proposal with such an ambitious agenda, replacing Nakamoto's winner-takes-all tradition with a collaborative proof of work alternative, as a serious paradigm shift, and of encouraging people to spend some time digesting it.

But it is what I've got, and it sometimes makes me angry, primarily with myself and secondly with the whole situation, not with you. You are just another advocate; you are not alone. People are busy investigating PoS or pumping bitcoin; nobody cares. I'm sick of it.

And when you came on board and I started getting more optimistic, my expectations got too high and I went off the rails. Sorry.

Imo, despite the bitterness, we have made some progress, and I sincerely ask you to schedule some time and take a closer look at the proposal. I assure you, every single objection you have made here is already addressed by the starting post or through the replies I have made. Thank you for participating and sorry for the inconvenience.  Smiley
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 12, 2018, 08:48:30 AM
Last edit: June 12, 2018, 09:14:12 AM by aliashraf
#32

miner charges all transaction fees to his account <--- why is a miner paying transaction fees?
First of all, glad to see you back and showing your commitment; I appreciate it.  Smiley
The miner is not paying, he is charging, not being charged. I mean he rewards his own wallet with the transaction fees (only the transaction fees, not the block reward).
Quote
Quote
calculated difficulty using previous block hash padded with all previous fields <--- padded? how does a hash provide a difficulty adjustment?
Who said anything here about difficulty adjustment? It is about calculating the difficulty of a share by:
1- padding the fields together: the previous block hash + the other fields of the structure (Net Merkle root + the miner's wallet address + nonce)
2- performing a SHA-2 hash
3- evaluating the difficulty of that hash
Quote
Quote

A computed difficulty score using the hash of ...
A calculated difficulty score is the ratio of the share's difficulty to the target difficulty. It is typically less than 1; greater scores (if any) are capped at 1.
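In code, the score of a single share could look like this (a hedged sketch with my own field names; difficulty is taken as inversely proportional to the hash value):

Code:
# Sketch only: score = (difficulty achieved by the share) / (target difficulty), capped at 1.
import hashlib

def sha256d(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

def difficulty_score(prev_block_hash: bytes, net_merkle_root: bytes,
                     miner_address: bytes, nonce: int, target: int) -> float:
    # pad the fields together, hash, then evaluate the hash against the target;
    # since difficulty ~ 1/hash, the ratio reduces to target / hash
    h = sha256d(prev_block_hash + net_merkle_root + miner_address + nonce.to_bytes(8, "big"))
    return min(1.0, target / h)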
Quote
Quote

For each share difficulty score is at least as good as 0.0001 <--- why is a difficulty good or bad? criteria?
Being good means being close to the target difficulty.
Quote
Quote

Sum of reward amount fields is equal to block reward and for each share is calculated proportional to its difficulty score <--- Do you mean weighted sum? Huh? Needs better explanation.
Yes, it deserves more explanation. It is about the structure of the Shared Coinbase Transaction. It is a magical structure that we use both for proving the contributors' work (the sum of the scores/difficulties of all the items MUST satisfy the required difficulty target) and for distributing the reward (each share gets a fraction proportional to its score/difficulty).
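A minimal sketch of that double duty (assumed field names, with the scores taken as already verified): the items must jointly reach the target, and each item's payout is proportional to its score.

Code:
# Sketch only: one Shared Coinbase Transaction both proves the work and splits the reward.
from typing import List, Tuple

def settle_shared_coinbase(items: List[Tuple[str, float]], block_reward: int) -> List[Tuple[str, int]]:
    """items: (miner wallet address, verified difficulty score) pairs."""
    total_score = sum(score for _, score in items)
    if total_score < 1.0:
        raise ValueError("shares do not cumulatively satisfy the difficulty target")
    # each share is rewarded in proportion to its difficulty score
    return [(addr, int(block_reward * score / total_score)) for addr, score in items]

# e.g. two contributors with scores 0.7 and 0.4 splitting a block reward of 1,250,000,000 units
print(settle_shared_coinbase([("miner_A", 0.7), ("miner_B", 0.4)], 1_250_000_000))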
Quote
Quote

It is fixed to yield a hash that is as difficult as target difficulty * 0.05  <--- how so? Where? What algorithm?
It is about the Prepared Block difficulty target, which should be set to 0.05 of the calculated network difficulty. There is nothing new in terms of algorithm; it is just a matter of protocol, exactly like how traditional PoW enforces the difficulty for blocks.
Quote
Quote

It is fixed to yield a hash that is as difficult as target difficulty * 0.02

Mining process goes through 3 phases for each block: <--- these sections are not a sufficient explanation of the algorithm. You expect the reader to read your mind. Help us out here and explain how this thing you invented works

Ok I'll do my best:

Unlike the situation with traditional PoW, in PoCW miners should go through three phases (they had better do so, unless they want to solo mine, which is not in their interest, or to attack the network, which is not feasible as long as they do not hold the majority):

Phase 1: Miners SHOULD try to find a block whose hash is at least 5% as good as the target, paying the transaction fees to their own wallets through a coinbase transaction (free of block reward, just transaction fees) committed to the Merkle tree whose root is committed to the block header. This is called the Preparation Phase.

Phase 2: Once the network reaches a state where one, two or three competing instances of such a block have been mined and propagated, miners MAY eventually realise that the window for mining such a block is closing, because of the risk of not reaching the final stage due to the competition.
Instead, they accept the fact that they won't be rewarded the transaction fees and choose to produce/mine Collaboration Shares for one of the above-mined blocks, i.e. they put its Net Merkle root in the data structure named Collaboration Share, which can later trivially be translated into a Coinbase share and used for difficulty evaluation and reward distribution at the same time (if the miner happened to choose the most popular Prepared Block).
I have discussed this phase extensively with @ir.hn and have shown that it is an exponentially convergent process; in the midst of the process we will be witnessing the whole network busy producing shares for the same Net Merkle Tree root.
This is called the Contribution Phase. Note: as you might have already realized, this phase is not mandatory. Also note that in this phase miners don't generate blocks; these are just shares, Collaboration Shares, which have to wait for the next phase, in which one miner may include them in a block, using their scores both to prove the work and to share the reward.

Phase 3: After enough shares have been accumulated for a Merkle root, miners SHOULD start searching for one final block (with a difficulty fixed at no less than 2% of the calculated network difficulty) encompassing:
1- The Merkle root (remember, it has a single coinbase transaction, rewarding only the original miner of the first phase) of one of the blocks mined in the first phase.
2- A new coinbase transaction, the Shared Coinbase Transaction, containing the required shares, which prove the work and define the weighted distribution of the block reward as an integrated whole.
3- The other usual fields.

It is the Finalization Phase.
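For readers who think better in data structures, here is a compact sketch of the three artefacts (field names are mine; the thresholds are the ones described above):

Code:
# Sketch only: assumed field names; thresholds follow the description in this post.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PreparedBlock:            # Phase 1: net coinbase pays only the fees to its miner
    prev_hash: bytes
    net_merkle_root: bytes      # root of the Net Merkle Tree (zero-reward coinbase inside)
    miner_address: bytes
    nonce: int                  # hash must reach 0.05 * target difficulty

@dataclass
class CollaborationShare:       # Phase 2: commits to an already-validated Net Merkle root
    prev_hash: bytes
    net_merkle_root: bytes
    miner_address: bytes
    nonce: int                  # hash must reach at least 0.0001 * target difficulty

@dataclass
class FinalizedBlock:           # Phase 3: its own hash reaches 0.02 * target difficulty
    prev_hash: bytes
    net_merkle_root: bytes
    shared_coinbase: List[CollaborationShare] = field(default_factory=list)
    nonce: int = 0              # valid when its own hash and the shares' scores cumulatively satisfy the target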
Quote
Quote

Phrases are devoid of meaning for me. With any key words that really confound me as making no sense are highlighted in bold.

Without being able to understand these, I can’t digest your specification unless I holistically reverse engineer what your intended meaning is. And I am unwilling to expend that effort.

Please translate.

I could figure it out if I really wanted to. But as I said, I have a lot of things to do and enough puzzles on my TODO list to solve already.
Did my best. Thanks for the patience/commitment  Smiley
tromp
Legendary | Activity: 978 | Merit: 1082
June 12, 2018, 09:13:47 AM
#33

Verification process involves:
  • Checking both the hash of the finalized block and all of its Shared Coinbase Transaction items to satisfy network difficulty target cumulatively

This is a serious problem with your proposal. The proof of work is not self-contained within the header.
It requires the verifier to obtain up to 10000 additional pieces of data that must all be verified, which is too much overhead in latency, bandwidth, and verification time.
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 12, 2018, 09:27:57 AM
#34

  • Verification process involves:
    • Checking both the hash of the finalized block and all of its Shared Coinbase Transaction items to satisfy network difficulty target cumulatively
This is a serious problem with your proposal. The proof of work is not self-contained within the header.
It requires the verifier to obtain up to 10000 additional pieces of data that must all be verified, which is too much overhead in latency, bandwidth, and verification time.
The Shared Coinbase Transaction is typically about 32 kB of data (an average of 4,500 items) and doesn't need any further verification, such as checking the UTXO set, the mempool, whatever.
Although the shares do have to be verified to meet the required difficulty (hashed and examined), that is a CPU-bound task and is far faster than verifying the block itself.

Note: verifying a block takes a lot of communication: accessing the mempool on disk, querying/fetching missing transactions from peers, verifying transaction signatures (a lot of processing, although not I/O-bound), accessing the disk to check each transaction against the UTXO set, ...

According to my assessments, this verification adds zero or very little latency, because the verifier is multitasking and the job can be done in CPU idle time.
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 12, 2018, 09:33:19 AM
Last edit: June 12, 2018, 09:48:41 AM by aliashraf
#35

Quote from: anunymint
Additionally I think I found another game theory flaw in his design.

The design presumes that the leadership (for finding the 0.05 * Prepared blocks) can’t be attacked and subdivide the rest of the hashrate because you assume they would need 50+% to get a lead, but AFAICT that is not true because of selfish mining.

The 33% attacker can mine on his hidden Prepared block and then release it right before the rest of the network catches up.

Thanks for the comment; I have to analyse it more thoroughly. I am very glad to see you guys engaging this closely. I will be back in about half an hour with the analysis and possible mitigations.
alfaenzo
Newbie | Activity: 1 | Merit: 0
June 12, 2018, 10:19:26 AM
#36

I like it
tromp
Legendary | Activity: 978 | Merit: 1082
June 12, 2018, 10:31:37 AM
#37

Shared Coinbase transaction typically is 32 kB data (an average of 4500 items)  and doesn't need any further verification, like checking UTXO, mempool, whatever.

Since PoW should be considered an essential part of the header, what you are proposing, then, is to increase the header size from 80 bytes up to 72 KB (worst case, 10,000 items), a nearly 1000-fold increase...
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 12, 2018, 11:21:12 AM
#38

@anunymint

As for the classical selfish mining attack itself, I personally disagree with calling it an attack at all. I rather see it as a fallacy, a straw man fallacy.
My reasoning:
PoW has nothing to do with announcement. If a miner prefers to keep his block secret, it is his choice and his right as well; he is risking his block becoming an orphan in exchange for a possible advantage over the rest of the network in mining the next block.

Although, like PoW, this proposal is not about prohibiting people from selfish mining, there is a point in rephrasing the above reasoning somewhat differently: this proposal is about reducing the pooling pressure and helping the network become more decentralized by increasing the number of miners. How? By reducing the variance of mining rewards, which is one of the two important factors behind this pressure (I will come back to the second factor soon).

So it might be a reasonable expectation for PoCW to have something to do with selfish mining.

It has, but first of all it is worth mentioning that, according to the protocol, miners are free to choose not to collaborate and to go solo if they wish, although by keeping the costs of participation very low and the benefits high enough this practice is discouraged.

PoCW improves this situation by reducing the likelihood of pools forming, eliminating one of the most important factors that makes their existence possible at all.

Your second objection happens to be about the second important factor behind pooling pressure: proximity.

It is about having access to information (a freshly mined block, for instance) and taking advantage of it, or not having access to that information and wasting resources (mining stale blocks) because of it. Even with completely loyal nodes, in bitcoin and other PoW-based networks there is always a proximity premium for the nodes nearer to the source (the lucky finder of the fresh block) compared to other nodes.

I have to accept that, by pushing for more information to circulate, PoCW, this proposal, could be suspected of reinforcing this second pressure toward pooling.

I have been investigating it for a while and my analysis suggests otherwise. It is a bit complicated and deserves to be considered more cautiously. I need to remind you that the proximity premium is a known flaw in PoW's decentralization agenda.

For a traditional winner-takes-all PoW network like bitcoin, there is just one piece of information (the fresh block) that causes the problem, true, but the weight of this information and the resulting premium are very high and concentrated in one spot: the lucky miner at the focal point and its neighbors in the hot zone.

In this proposal, this premium is distributed far more evenly, tens of thousands of times.

Oops! There is almost no proximity premium flaw in Proof of Collaborative Work!

Without a proximity premium and a mining variance flaw, there will be no pooling pressure and no threat to centralization. That is how selfish mining concerns (again, not a flaw) are addressed too: it turns into simple, innocent solo mining.

As for @tromp's and your concerns about share validation overhead, I have already addressed them: nothing is consumed beyond a few CPU cycles, not a big deal according to my analysis, and by distributing the proximity premium almost evenly the proposal does more than enough to compensate.  Wink

aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 12, 2018, 11:26:55 AM
#39

Shared Coinbase transaction typically is 32 kB data (an average of 4500 items)  and doesn't need any further verification, like checking UTXO, mempool, whatever.

Since PoW should be considered an essential part of the header, what you are proposing then is to increase header size from 80 bytes upto 72 KB (worst case 10000 items), a nearly 1000 fold increase...

This is more significant when considered in conjunction with the 0.02 * threshold on finishing a block. That threshold means it’s more likely that two blocks will be finished closer together than for 10 minute block periods and thus the increased propagation and verification (for the up to 10,000 block solutions) can be significant relative to the spacing between duplicate finished blocks. As I wrote in my prior post, all of this contributes to amplifying the selfish mining attack.
Well, @tromp is not on point, and neither are you, @anunymint:

The Shared Coinbase Transaction is not part of the header; its hash (id) is.
The transaction itself is part of the block, like the conventional coinbase transaction and the other transactions. The block size remains whatever the protocol dictates, plus the size of this transaction; that implies an almost 5% increase (worst case), which is not a big deal.
aliashraf (OP)
Legendary | Activity: 1456 | Merit: 1174
June 12, 2018, 05:45:51 PM
#40

The Shared Coinbase Transaction is not part of the header; its hash (id) is.

All the small proof-of-work solutions have to be communicated and calculated before the winning block can be communicated. So that is up to 10,000 (if the difficulty target is 0.0001) multiplied by the 64B size of a SHA256 hash, which is 640KB of data that must be communicated across the network. That's not factoring in if the network is subdivided and miners are mining on two or more leader Prepared blocks, in which case the network load can be double or more of that.
You are mixing up heterogeneous things, imo:
As I have said before, the Shared Coinbase Transaction is just a transaction, with a size from as small as 60 bytes (likely; implementation dependent) up to a maximum of 60,000 bytes, roughly normally distributed with an average of 30,000 bytes. That is it. There is just one SHA256 hash of it (its id) committed to the block header.
This special transaction is verified by checking the asserted score and reward of each row (anywhere from 1 to 10,000 rows) by computing the hash of the row appended to the previous block hash. There is no need to attach this hash to each row, either in storage or in communication.
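In code, that row-by-row check might look like this (a hedged sketch with my own helper names; the exact row serialization is not specified here):

Code:
# Sketch only: each row's score is re-derived from one hash of the row appended to the
# previous block hash, and the rows must cumulatively reach the difficulty target.
# No UTXO or mempool access is involved.
import hashlib
from typing import List, Tuple

def row_score(row_bytes: bytes, prev_block_hash: bytes, target: int) -> float:
    h = int.from_bytes(hashlib.sha256(hashlib.sha256(row_bytes + prev_block_hash).digest()).digest(), "big")
    return min(1.0, target / h)

def verify_shared_coinbase(rows: List[Tuple[bytes, float]], prev_block_hash: bytes, target: int) -> bool:
    # rows: (serialized row, asserted score); every asserted score must be justified by its hash
    if any(row_score(r, prev_block_hash, target) < asserted for r, asserted in rows):
        return False
    return sum(asserted for _, asserted in rows) >= 1.0   # cumulative work reaches the target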

As for the need for peers to fetch this special transaction in order to verify the finalized block, that is very common.
Since BIP 152, peers check whether each transaction committed to the Merkle hash of the block under validation is present in their version of the mempool or not. If it is not, they fetch the transaction from the peer and validate it.

For ordinary transactions, as I have stated before, the validation process is by no means trivial: it involves ECDSA signature verification and a UTXO consistency check for each input of each transaction, both of which are harder by orders of magnitude than what must be done for the (output) rows of the special transaction under consideration, the Shared Coinbase Transaction.

For each row of this transaction only a few processor cycles are needed to compute the hash, and even that is not required for all of the rows, just for the rows missing from the node's memory.

Conclusion: I maintain my previous assertion of essentially zero computational overhead and an average block size increase of about 32 KB.
Quote
Now I do understand that these proof-of-work share solutions are communicated continuously and not all at once at the Finalized block, but you’ve got at least three possible issues:

1. As I told you from the beginning of this time wasting discussion, the small miners have to verify all the small proof-of-work solutions otherwise they’re trusting the security to the large miner which prepares the Finalized block. If they trust, then you do have a problem about non-uniform hashrate which changes the security model of Bitcoin. And if they trust you also have a change to the security model of Bitcoin.

Easy, dude, it is not time-wasting, and if it is, why the hell should we keep doing this? Nobody reads our posts, people are busy with more important issues, nobody is going to become the president of bitcoin or anything.

I'm somewhat shocked reading this post, though.
We have discussed this exhaustively before. It is crystal clear, imo.

First of all (I have to repeat), mining has nothing to do with verifying shares, blocks, whatever ... Miners just perform zillions of nonce incrementations and hash computations to find a good hash; it is a full node's job to verify whatever needs verifying. Agree?

Now, full nodes busy with I/O operations, stuff that needs extensive networking and disk access, have a lot of CPU power free, and a modern OS can use it to perform hundreds of thousands of SHA256 hashes without hesitation and without any bad performance consequence, as if nothing had happened.

Is that hard to keep in mind? Forget about what has been said in another context (the infamous block size debate); please concentrate.

In that debate the Core team was against a block size increase because they were worried about transaction verification being an I/O-bound task; with your share verification nightmare we are dealing with a CPU-bound task. It is not the same issue, so don't worry about it.